Why bias is the biggest threat to AI development

A panel of artificial intelligence luminaries debates the human and data biases that could turn the opportunities of machine learning into a nightmare for businesses and human beings

From left: Dr Peter Norvig, Dr Richard Socher, Dr Suchi Saria, and Adam Lashinsky from Fortune magazine

Bias – both human and data-based – is the biggest ethical challenge facing the development and adoption of artificial intelligence, according to a panel of world-leading AI luminaries.  

Speaking at last week’s Dreamforce conference, Salesforce chief scientist and adjunct professor in Stanford’s computer science department, Dr Richard Socher, said the rapid development of AI will inevitably impact more and more people’s lives, raising significant ethical concerns.

“These algorithms can change elections for the worse, or spread misinformation,” he told attendees. “In some benign natural language processing classification algorithms, for example, you may want to maximise the number of clicks, and you find that articles with a terminator image get more clicks, so you put more of those pictures in articles.”

But it is the bias coming through existing datasets being used to train AI algorithms that arguably presents the biggest ethical problem facing industries.

“All of our algorithms are only as good as the training data we give them,” Dr Socher said. “If your training data has certain biases against gender, age or sexual orientation, they will get picked up.

“Say you are a bank and want to build a loan application classifier that decides whether or not to grant loans to new business founders. It turns out that in the past, only 5 per cent of approved applications went to female founders. The algorithm will pick that up and say it’s bad to be a female founder, so we shouldn’t give approval. That’s a bad thing to do, but it’s what is in your dataset’s past.

“Biases have existed; humans have bias. Now that humans have created those datasets, algorithms will inherit those biases, and potentially amplify them and make them worse.”
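
To make the mechanism Socher describes concrete, here is a minimal sketch (not from the panel) of how a classifier inherits historical bias. The training labels encode past lending decisions that disadvantaged female founders, so the fitted model learns a large negative weight on the gender flag even though gender says nothing about repayment. All feature names and figures are illustrative.

```python
# Illustrative sketch only: hypothetical features and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: a standardised credit score and a founder-gender flag.
credit_score = rng.normal(size=n)
is_female = rng.integers(0, 2, size=n)

# Historical approvals depended on creditworthiness, but female founders were
# approved far less often regardless of it: the bias baked into the labels.
p_approve = 1 / (1 + np.exp(-(credit_score - 3.0 * is_female)))
approved = rng.random(n) < p_approve

X = np.column_stack([credit_score, is_female])
model = LogisticRegression().fit(X, approved)

# The model faithfully reproduces the bias: a strongly negative weight on the
# gender flag, learned purely from the historical labels.
print(dict(zip(["credit_score", "is_female"], model.coef_[0].round(2))))
```

Simply dropping the gender column is not a complete fix either, since other features can act as proxies for it; this is the sense in which the dataset side, not just the algorithm, needs scrutiny.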

Dr Socher noted educational institutions such as Berkeley, as well as tech innovators like Google, have already started investigating how to get rid of bias on the algorithmic side. “But it’s the dataset side you have to think carefully about,” he said.

As an example, Dr Socher said his own team had wanted to build an ‘emotion classification’ algorithm so that when an individual entered a physical space, it could identify whether they were happy, sad, surprised or grumpy.

“I immediately said we cannot ship this until we look at all protected classes and make sure all old people are not classified as grumpy, for instance, because we only have two stock images of old people showing them happy,” he said. “We have to have some empathy in some ways with how those will be deployed and think carefully about the training data we give it.”
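
The pre-ship check Socher describes could be as simple as the following sketch: break a classifier’s predictions down by a protected attribute and look for skew before anything is deployed. The labels, groups and function name here are hypothetical.

```python
# Hypothetical pre-deployment audit: how do predicted emotions break down
# across a protected attribute such as age group?
from collections import Counter

def audit_by_group(predictions, groups):
    """Return the distribution of predicted labels within each group."""
    counts = {}
    for label, group in zip(predictions, groups):
        counts.setdefault(group, Counter())[label] += 1
    return {g: {label: n / sum(c.values()) for label, n in c.items()}
            for g, c in counts.items()}

# Toy stand-ins for a real model's predictions on a validation set.
preds = ["happy", "grumpy", "grumpy", "happy", "sad", "grumpy"]
ages  = ["young", "old",    "old",    "young", "young", "old"]

for group, dist in audit_by_group(preds, ages).items():
    print(group, dist)
# If "old" skews heavily towards "grumpy", the training data (for example,
# too few images of happy older people) is the first place to look.
```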

For Dr Suchi Saria, assistant professor in Johns Hopkins University’s Machine Learning and Data Intensive Computing Group and Computational Biology and Medicine Group, the rapid industrialisation of machine learning has not only accelerated AI development, it’s brought fresh ethical concerns with it. She saw the challenge as one of education, rather than technological innovation, but again, one that links back to bias.

“Up until three years ago, we were a small group of AI experts, and we were still kids with toys: No one was monitoring or bugging us, we were coming up with new ideas, and it was all good,” she said. “Things worked and didn’t work.

“Now, we’re in a new place where the tools we are developing and releasing open source are being opened up to lots of people. They’re experimenting, putting them into their own real-world experiences and sometimes taking them very seriously in ways they shouldn’t.”

One example Saria pointed to was using image recognition to try to predict whether someone is going to become a criminal.

“The science behind that doesn’t make any sense,” she said. “You can take a large database, annotate it to provide some supervision, then train something on that database, but it’s just mimicking what it has already seen. It’s not doing any causal inference to understand what the mechanism is that makes you a criminal. All it’s doing is replicating behaviour.

“There are tools freely available, there’s engineering experience available to use these tools and they’re becoming easier to use in new domains. But the education on what is a valid or invalid use of these tools is drastically lagging. We suddenly see lots of interesting new applications, but every so often we see applications that are not right; they’re incorrect or biased. They have consequences but there’s no one to police them. As a group, we are used to talking to each other and figuring out what is the right thing to do, and being told when something doesn’t make sense. This industrialisation of AI has changed that.”
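
Saria’s distinction between prediction and causal inference is easy to demonstrate in a few lines. In this illustrative sketch (not from her talk), a hidden confounder drives both the outcome and an observable proxy; a model fitted on observational data leans on the proxy even though it plays no causal role.

```python
# Illustrative sketch: a predictive model happily uses a non-causal feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# A hidden confounder drives both the outcome and an observable proxy.
confounder = rng.normal(size=n)
proxy = confounder + rng.normal(scale=0.5, size=n)  # correlated, not causal
outcome = rng.random(n) < 1 / (1 + np.exp(-confounder))

model = LogisticRegression().fit(proxy.reshape(-1, 1), outcome)
print(model.coef_)  # strongly positive: the model "predicts" from the proxy

# The fit says nothing about mechanism. If the proxy-confounder link breaks
# (a new population, a different context), the predictions collapse, but the
# model gives no warning: it is replicating behaviour, not explaining it.
```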

Google director of research and revered computer science academic, Dr Peter Norvig, called for more transparency around how AI is being trained and used.

“With basic AI, we’re collecting data, which means we have to be good shepherds,” he said. “This isn’t AI per se, but it goes along with it.

“And you always want to have AI embedded in a larger process from which there is a way to ‘escape’. AI shouldn’t be the final link; at some point you need to hit zero and get back to a human operator. These systems are not necessarily good at novel things. They have to be designed to get around that.”
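
Norvig’s ‘escape hatch’ is essentially a routing rule. A minimal sketch, assuming a hypothetical confidence threshold: anything the model is unsure about, which tends to be the novel cases, is handed back to a human operator rather than acted on automatically.

```python
# Minimal sketch of a human-in-the-loop escape hatch; threshold is hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human: bool

def decide(model_label: str, model_confidence: float,
           threshold: float = 0.9) -> Decision:
    """Route low-confidence (often novel) inputs to a human operator."""
    if model_confidence >= threshold:
        return Decision(model_label, model_confidence, needs_human=False)
    # Models are weakest on inputs unlike their training data, so "hit zero":
    # fall back to a person instead of acting on a shaky guess.
    return Decision("escalate", model_confidence, needs_human=True)

print(decide("approve", 0.97))  # automated path
print(decide("approve", 0.55))  # escalated to a human operator
```

The exact threshold is a policy choice; the architectural point is that the model’s output is never the final link in the process.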

Norvig pointed out that the emerging field of study around ‘AI safety’ needs to be firmly embedded into any AI application.

“We don’t have engineering safety as a separate field; we need it [AI safety] to be embedded everywhere, so right from the start you’re aware of what could go wrong,” he said.

“Software always has bugs, and we have tools to try to eliminate them. AI must also use all the best practices that exist in software engineering. Sometimes, however, it’s being driven by academic researchers who aren’t across that software history. But we also have to come up with new tools too.”

Whatever you do, never think of AI in isolation, Dr Socher added. “You’re always applying it to some specific X, a skill or a part of your business,” he said.

“If you think about what that X is, in most cases getting there means starting with your training data. Think about how to collect the output from every business process, so when you do bring in AI you have that edge over your competitors. And if you’re working with data scientists and other vendors, they will need a way to access your data, and you will need a way to get it back out.”


  • Nadia Cameron travelled to Dreamforce as a guest of Salesforce.
