Why bias is the biggest threat to AI development

Panel of artificial intelligence luminaries debate the human and data biases that could turn the opportunities of machine learning into a nightmare for business and human beings

From left: Dr Peter Norvig, Dr Richard Socher, Dr Suchi Saria, and Adam Lashinsky from Fortune magazine

Bias – both human and data-based – is the biggest ethical challenge facing the development and adoption of artificial intelligence, according to a panel of world-leading AI luminaries.  

Speaking at last week’s Dreamforce conference, Dr Richard Socher, Salesforce chief scientist and adjunct professor in Stanford’s computer science department, said the rapid development of AI will inevitably affect more and more people’s lives, raising significant ethical concerns.

“These algorithms can change elections for the worse, or spread misinformation,” he told attendees. “In some benign natural language processing classification algorithms, for example, you may want to maximise the number of clicks, find that articles with a Terminator image get more clicks, and so put more of those pictures in articles.”

But it is the bias coming through existing datasets being used to train AI algorithms that arguably presents the biggest ethical problem facing industries.

“All of our algorithms are only as good as the training data we give them,” Dr Socher said. “If your training data has certain biases against gender, age or sexual orientation, they will get picked up.

“Say you are a bank and want to build a loan application classifier to decide whether or not to grant loans to new business founders. It turns out that in the past, only 5 per cent of approved applications went to female founders. The algorithm will pick that up and say it’s bad to be a female founder, so we shouldn’t give approval. That’s a bad thing to do, but it is there in the past of your dataset.

“Biases have existed; humans have bias. Now that humans have created those data sets, algorithms will have them and potentially amplify them and make them worse.”
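Socher’s loan example is straightforward to reproduce. The sketch below is illustrative only (the data is synthetic, and the feature names and numbers are assumptions rather than anything from the panel), but it shows a standard classifier trained on historically skewed approvals assigning a large negative weight to the protected attribute, exactly the ‘picking up’ of bias he describes.

```python
# Illustrative sketch of the biased loan classifier (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
credit = rng.normal(0.0, 1.0, n)        # a genuinely relevant feature
is_female = rng.random(n) < 0.3         # protected attribute (hypothetical)

# Historical labels: equally creditworthy female founders were approved
# far less often, mirroring the skewed past Socher describes.
approved = (credit > 0.5) & (~is_female | (rng.random(n) < 0.1))

X = np.column_stack([credit, is_female.astype(float)])
model = LogisticRegression().fit(X, approved)

# The model has "picked up" the bias: a large negative weight on gender.
print(dict(zip(["credit", "is_female"], model.coef_[0].round(2))))
```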

Dr Socher noted educational institutions such as Berkeley, as well as tech innovators like Google, have already started investigating how to remove bias on the algorithmic side. “But it’s the dataset side you have to think carefully about,” he said.

As an example, Dr Socher said his own team had wanted to build an ‘emotion classification’ algorithm so that when an individual entered a physical space, it could identify whether they were happy, sad, surprised or grumpy.

“I immediately said we cannot ship this until we look at all protected classes and make sure all old people are not classified as grumpy, for instance, because we only have two stock images of old people showing them happy,” he said. “We have to have some empathy for how these will be deployed and think carefully about the training data we give them.”
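One minimal version of the pre-release check Socher describes is to compare how often each protected group receives a given label before anything ships. The function and data below are hypothetical, a sketch of the audit rather than Salesforce’s actual process.

```python
# Hypothetical pre-ship audit: per-group rates for a single predicted label.
from collections import defaultdict

def label_rates(predictions, groups, label):
    """Rate at which each group is assigned `label`."""
    counts, hits = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        hits[group] += (pred == label)
    return {g: hits[g] / counts[g] for g in counts}

# Toy outputs from a hypothetical emotion classifier.
preds = ["happy", "grumpy", "grumpy", "happy", "grumpy", "happy"]
ages  = ["young", "old",    "old",    "young", "old",    "old"]
print(label_rates(preds, ages, "grumpy"))  # {'young': 0.0, 'old': 0.75}
```

A gap that large between groups is the cue to go back to the training data, as Socher’s team did, rather than release.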

For Dr Suchi Saria, assistant professor in Johns Hopkins University’s Machine Learning and Data Intensive Computing Group and Computational Biology and Medicine Group, the rapid industrialisation of machine learning has not only accelerated AI development, it has brought fresh ethical concerns with it. She saw the challenge as one of education rather than technological innovation, but again, one that links back to bias.

“Up until three years ago, we were a small group of AI experts, and we were still kids with toys: No one was monitoring or bugging us, we were coming up with new ideas, and it was all good,” she said. “Things worked and didn’t work.

“Now, we’re in a new place, where the tools we are developing and releasing as open source are being opened up to lots of people. They’re experimenting, putting them into their own real-world experiences and taking them very seriously, sometimes in ways they shouldn’t.”

An example Saria pointed to was using image recognition to try to predict whether someone will become a criminal.

“The science behind that doesn’t make any sense,” she said. “You can take a large database, annotate it to provide some supervision, and train something on that database, but it’s just mimicking what it has already seen. It’s not doing any causal inference to understand what the mechanism is that makes you a criminal. All it’s doing is replicating behaviour.

“There are tools freely available, there’s engineering experience available to use these tools and they’re becoming easier to use in new domains. But the education on what is a valid or invalid use of these tools is drastically lagging. We suddenly see lots of interesting new applications, but every so often we see applications that are not right; they’re incorrect or biased. They have consequences but there’s no one to police them. As a group, we are used to talking to each other and figuring out what is the right thing to do, and being told when something doesn’t make sense. This industrialisation of AI has changed that.”
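Saria’s distinction between mimicry and causal inference is easy to demonstrate with a toy model. In this synthetic sketch (all data and names are illustrative), the annotations merely correlate with an attribute that has no causal link to the outcome, and the classifier scores well while encoding nothing but that correlation.

```python
# Toy demonstration: a classifier replicates correlations in its labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
irrelevant = rng.integers(0, 2, n)   # e.g. a photo artefact; no causal link
# Annotations that happen to correlate with the irrelevant attribute.
labels = rng.random(n) < np.where(irrelevant == 1, 0.8, 0.2)

model = LogisticRegression().fit(irrelevant.reshape(-1, 1), labels)
# High accuracy on its own biased data, yet it has learned no mechanism at all.
print("accuracy:", model.score(irrelevant.reshape(-1, 1), labels))
```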

Google director of research and revered computer science academic, Dr Peter Norvig, called for more transparency around how AI is being trained and used.

“With basic AI, we’re collecting data, which means we have to be good shepherds,” he continued. “This isn’t AI per se, but it goes along with it.

“And you always want to have AI embedded in a larger process from which there is a way to ‘escape’. AI shouldn’t be the final link; at some point you need to be able to hit zero and get back to a human operator. These systems are not necessarily good at novel things. They have to be designed to get around that.”
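In engineering terms, Norvig’s ‘escape’ is a human-in-the-loop gate around the model. A minimal sketch, assuming an sklearn-style classifier and a hypothetical hand-off function:

```python
def escalate_to_human(x):
    """Hypothetical hand-off: queue the case for a human operator."""
    print("escalated to human review:", x)

def decide(model, x, threshold=0.9):
    """Answer automatically only when the model is confident; otherwise escalate."""
    confidence = max(model.predict_proba([x])[0])   # top-class probability
    if confidence < threshold:                      # likely a novel case
        return escalate_to_human(x)
    return model.predict([x])[0]
```

Low confidence is a crude but useful proxy for the ‘novel things’ Norvig warns these systems handle badly.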

Norvig said the emerging field of study around ‘AI safety’ needs to be firmly embedded in any AI application.

“We don’t have engineering safety as a separate field; we need it [AI safety] to be embedded everywhere, so that right from the start you’re aware of what could go wrong,” he said.

“Software always has bugs and we have tools to try to eliminate them. AI must also use all the best practices that exist in software engineering. Sometimes, however, it’s being driven by academic researchers who aren’t across that software history. But we have to come up with new tools too.”
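One way to carry those practices over, in the spirit of Norvig’s remarks, is to treat unwanted bias like any other bug and write a regression test for it. The model, feature layout and test below are assumptions for illustration, not an established AI-safety tool.

```python
# Hypothetical bias regression test: flipping a (binary) protected feature
# should not change any decision made by the model.
import numpy as np

def test_gender_invariance(model, applicants, gender_column):
    flipped = applicants.copy()
    flipped[:, gender_column] = 1 - flipped[:, gender_column]
    same = model.predict(applicants) == model.predict(flipped)
    assert same.all(), "predictions depend on the protected attribute"
```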

Whatever you do, never think of AI in isolation, Dr Socher added. “You’re always applying it to some specific task, X, in your business,” he said.

“If you think about what that X is, in most cases getting there means starting with your training data. Think about how to collect output from every business process so when you do bring in AI you have that edge against your competitors. And if you’re working with data scientists and other vendors, they will need a way to access your data and a way for you to get that out.”

  • Nadia Cameron travelled to Dreamforce as a guest of Salesforce.
