Why bias is the biggest threat to AI development

Panel of artificial intelligence luminaries debate the human and data biases that could turn the opportunities of machine learning into a nightmare for business and human beings

From left: Dr Peter Norvig, Dr Richard Socher, Dr Suchi Saria, and Adam Lashinsky from Fortune magazine

Bias – both human and data-based – is the biggest ethical challenge facing the development and adoption of artificial intelligence, according to a panel of world-leading AI luminaries.  

Speaking at last week’s Dreamforce conference, Salesforce chief scientist and adjunct professor in Stanford’s computer science department, Dr Richard Socher, said the rapid development of AI will inevitably impact more and more people’s lives, raising significant ethical concerns.

“These algorithms can change elections for the worse, or spread misinformation,” he told attendees. “In some benign natural language processing classification algorithms, for example, you may want to maximise the number of clicks, and find that something with a Terminator image gets more clicks, so you put more of those pictures in articles.”  

But it is the bias coming through existing datasets being used to train AI algorithms that arguably presents the biggest ethical problem facing industries.

“All of our algorithms are only as good as the training data we give them,” Dr Socher said. “If your training data has certain biases against gender, age or sexual orientation, it will get picked up.

“Say you are a bank and want to build a loan application classifier to decide whether or not to grant loans to new founders of businesses. It turns out that in the past, only 5 per cent of approved applications went to female founders. The algorithm will pick that up and say it’s bad to be a female founder, we shouldn’t give out approvals. That’s a bad thing to do, but it is in the past of your dataset.

“Biases have existed; humans have bias. Now that humans have created those data sets, algorithms will have them and potentially amplify them and make them worse.”
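Socher’s loan example can be sketched in a few lines of code. The snippet below is a hypothetical illustration, not Salesforce’s system: it uses synthetic data and scikit-learn, and assumes female founders were historically approved far less often, loosely echoing the 5 per cent figure. The model dutifully reproduces the skew.

```python
# Hypothetical sketch: a classifier trained on historically skewed loan approvals
# simply reproduces the skew (synthetic data, scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
is_female = rng.integers(0, 2, size=n)          # 1 = female founder (hypothetical feature)

# Assumed history: female founders approved far less often than male founders.
approved = np.where(is_female == 1,
                    rng.random(n) < 0.05,
                    rng.random(n) < 0.50).astype(int)

model = LogisticRegression().fit(is_female.reshape(-1, 1), approved)

print(model.predict_proba([[1]])[0, 1])         # ~0.05: the model 'learns' to reject female founders
print(model.predict_proba([[0]])[0, 1])         # ~0.50
```

Dropping the gender column alone rarely fixes the problem, because other features can act as proxies for it; that is why Socher keeps pointing back to scrutinising the dataset itself.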

Dr Socher noted educational institutions such as Berkeley, as well as tech innovators like Google, have already started investigating how to get rid of bias on the algorithmic side. “But it’s the data set side you have to think carefully about,” he said.

As an example, Dr Socher said his own team had wanted to build an ‘emotion classification’ algorithm so that if an individual entered a physical space, it could identify whether they were happy, sad, surprised or grumpy.

“I immediately said we cannot ship this until we look at all protected classes and make sure all old people are not classified as grumpy, for instance, because we only have two stock images of old people showing them happy,” he said. “We have to have some empathy in some ways with how those will be deployed and think carefully about the training data we give it.”
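The kind of pre-ship check Socher describes can be expressed as a simple per-group audit. The groups, labels and threshold below are hypothetical placeholders, not his team’s actual procedure:

```python
# Hypothetical per-group audit: how often does the emotion classifier assign a
# negative label such as 'grumpy' to each protected group on a validation set?
from collections import Counter

predictions = [                     # (age_group, predicted_emotion) pairs - hypothetical data
    ("old", "grumpy"), ("old", "grumpy"), ("old", "happy"),
    ("young", "happy"), ("young", "grumpy"), ("young", "surprised"),
]

totals, grumpy = Counter(), Counter()
for group, emotion in predictions:
    totals[group] += 1
    if emotion == "grumpy":
        grumpy[group] += 1

rates = {g: grumpy[g] / totals[g] for g in totals}
print(rates)                        # roughly {'old': 0.67, 'young': 0.33}

# Flag the model for review if any group's negative-label rate is well above the overall rate.
overall = sum(grumpy.values()) / sum(totals.values())
print("needs review:", [g for g, r in rates.items() if r > 1.25 * overall])
```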

For Dr Suchi Saria, assistant professor in Johns Hopkins University’s Machine Learning and Data Intensive Computing Group and Computational Biology and Medicine Group, the rapid industrialisation of machine learning has not only accelerated AI development, it has brought fresh ethical concerns with it. She saw the challenge as one of education rather than technological innovation, but again, one that links back to bias.

“Up until three years ago, we were a small group of AI experts, and we were still kids with toys: No one was monitoring or bugging us, we were coming up with new ideas, and it was all good,” she said. “Things worked and didn’t work.

“Now, we’re in a new place where the tools we are developing and releasing as open source are getting opened up to lots of people. They’re experimenting, putting them into their own real-world experiences, and sometimes taking them very seriously in ways they shouldn’t.”

An example Saria pointed to was using image recognition to try to predict whether or not someone is going to become a criminal.  

“The science behind that doesn’t make any sense,” she said. “You can take a large database, annotate it to provide some supervision, and train something on that database, but it’s just mimicking what it has already seen. It’s not doing any causal inference to understand what the mechanism is that makes you a criminal. All it’s doing is replicating behaviour.

“There are tools freely available, there’s engineering experience available to use these tools and they’re becoming easier to use in new domains. But the education on what is a valid or invalid use of these tools is drastically lagging. We suddenly see lots of interesting new applications, but every so often we see applications that are not right; they’re incorrect or biased. They have consequences but there’s no one to police them. As a group, we are used to talking to each other and figuring out what is the right thing to do, and being told when something doesn’t make sense. This industrialisation of AI has changed that.”

Google director of research and revered computer science academic, Dr Peter Norvig, called for more transparency around how AI is being trained and used.

“With basic AI, we’re collecting data, which means we have to be good shepherds,” he continued. “This isn’t AI per se, but it goes along with it.

“And you always want to have AI embedded in a larger process from which there is a way to ‘escape’. AI shouldn’t be the final link; at some point you need to hit zero and get back to a human operator. These systems are not necessarily good at novel things. They have to be designed to get around that.”  
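Norvig’s escape hatch is straightforward to express in code. The sketch below assumes a hypothetical scikit-learn-style model exposing `predict_proba` and an arbitrary confidence threshold; uncertain (often novel) inputs are routed back to a person rather than decided by the model.

```python
# Hypothetical sketch of a human-in-the-loop fallback: the model is embedded in a
# larger process, and uncertain cases are escalated to a human operator.
CONFIDENCE_THRESHOLD = 0.85          # arbitrary; tune per application

def escalate_to_human(features):
    # In a real system this would queue the case for manual review.
    print("routing to human operator:", features)
    return None

def handle_request(model, features):
    probabilities = model.predict_proba([features])[0]
    label = int(probabilities.argmax())
    confidence = probabilities[label]

    if confidence < CONFIDENCE_THRESHOLD:
        return escalate_to_human(features)   # the 'hit zero' path back to a person
    return label
```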

Norvig pointed out that the emerging field of study around ‘AI safety’ needs to be firmly embedded into any AI application.

“We don’t have engineering safety as a separate field; we need it [AI safety] to be embedded everywhere, so right from the start you’re aware of what could go wrong,” he said.  

“Software always has bugs and we have tools to try and eliminate them. AI must also use all the best practices that exist in software engineering. Sometimes, however, it’s being driven by academic researchers who aren’t across that software history. But we have to come up with new tools too.”  

Whatever you do, never think of AI in isolation, Dr Socher added. “You’re always applying it to a specific skill or your business,” he said.

“If you think about what that application is, in most cases getting there means starting with your training data. Think about how to collect output from every business process so when you do bring in AI you have that edge over your competitors. And if you’re working with data scientists and other vendors, they will need a way to access your data, and you will need a way to get it out.”


  • Nadia Cameron travelled to Dreamforce as a guest of Salesforce.
