Beware of AI's inherent biases and take steps to correct them
- 02 October, 2018 12:25
Artificial intelligence (AI) is only as good as the training data it is fed, so we must be aware of in-built bias as we move into a future where AI is increasingly relied upon, said experts at the Dreamforce conference last week.
Salesforce's architect of ethical AI practice, Kathy Baxter, and chief scientist, Richard Socher, led a discussion on the ethical implications of AI, and stressed that education for those who use AI is key to ensuring in-built societal biases are not magnified by technology.
Socher said AI has been a field of research and application for a long time, but only in the last five to 10 years has accuracy in computer vision and natural language processing (NLP) crossed the threshold where humanity can use it in a wide range of applications.
“The things we can do in one year now were impossible for entire teams of people over years not that long ago,” he said. “However, now it’s working, we do need to think about the ethical implications of its use. We just had a breakthrough research result: a single model that solves 10 different NLP tasks by casting everything inside NLP as a single problem. But generally for AI, the algorithms are all quite different.”
In June this year, Salesforce announced the Natural Language Decathlon (decaNLP), which makes it possible for a single model to tackle 10 different NLP tasks at once and eliminates the need to build and train individual models for each problem. It spans question answering, machine translation, summarisation, natural language inference, sentiment analysis, semantic role labelling, relation extraction, goal-oriented dialogue, database query generation, and pronoun resolution. Until now, traditional approaches have required a hyper-customised architecture for each task.
But this is as close as humanity has come to developing a generalised AI algorithm. What this means is that different AI algorithms solve different problems, and there is no single generalised AI that does multiple tasks. In fact, contrary to popular belief, scientists are not even at the stage of being able to predict the steps required to achieve an ‘overall’ AI, or artificial general intelligence, he said.
“Artificial general intelligence is a while off, and is a distraction to the actual ethical implications of what we do have,” he said.
Baxter believed technology should be in service of the human, not the other way around, and said it’s important to remember as we rush to implement AI in service and marketing that it is a dual-use technology, meaning it can have both good and bad use cases.
“AI can do so much good, but it also has the potential to unknowingly cause harm, and this mostly stems from the training data it is fed. We can’t expect AI to erase the bias that is baked into our society and the data of the systems we use to make these predictions and recommendations,” she said.
“There are many proxies in the data that can be used to discriminate, such as gender or ethnicity. We need to identify the proxies that can be used for bias and then clean up the training data to create recommendations and predictions that represent the world as we want, not the world as it is.”
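The proxy problem Baxter describes can be illustrated with a short sketch. This is not Salesforce's tooling, and the column names and toy data are invented for illustration: it simply measures how well one feature alone predicts a protected attribute, which is one rough signal that the feature is acting as a proxy.

```python
# Illustrative sketch only: flagging candidate proxy features in training
# data. Feature names ("postcode", "ethnicity") and the data are hypothetical.
from collections import Counter, defaultdict

def proxy_strength(rows, feature, protected):
    """How accurately does `feature` alone predict the protected attribute?

    For each feature value, predict the most common protected value seen
    with it; return the fraction of rows that prediction gets right.
    Values near 1.0 suggest the feature is acting as a proxy.
    """
    by_value = defaultdict(Counter)
    for row in rows:
        by_value[row[feature]][row[protected]] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(rows)

# Toy data in which postcode tracks ethnicity closely, so dropping the
# `ethnicity` column alone would not remove the bias signal.
rows = [
    {"postcode": "2000", "ethnicity": "A", "hired": 1},
    {"postcode": "2000", "ethnicity": "A", "hired": 1},
    {"postcode": "2010", "ethnicity": "B", "hired": 0},
    {"postcode": "2010", "ethnicity": "B", "hired": 0},
    {"postcode": "2010", "ethnicity": "A", "hired": 1},
]

print(proxy_strength(rows, "postcode", "ethnicity"))  # 0.8
```

A real audit would use proper statistical measures across many features, but even this crude check shows why "cleaning up the training data", as Baxter puts it, means more than deleting the sensitive column.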
Socher went on to explain that within AI, there are supervised and unsupervised algorithms, and the majority of the breakthroughs we have seen are through supervised machine learning algorithms.
“The training data maps a certain input to a certain output. When you train an AI to do this, it uses thousands of examples to learn to do it on its own. If the data you use is biased, the AI will pick up the bias and amplify it, or at least keep it going," he said. “We have to educate our clients that the AI is only as good as the training data you feed it.”
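Socher's point about supervised learning can be made concrete with a deliberately simple sketch. The "model" below is just a most-common-label-per-feature-value classifier, and the lending data is invented, but it shows the mechanism: a skew in the input-to-output pairs comes out the other side even stronger.

```python
# Minimal illustration (invented data, toy model): a supervised learner fit
# on biased input→output pairs reproduces and sharpens that bias.
from collections import Counter, defaultdict

def fit(pairs):
    """Learn a mapping from feature value to its most frequent label."""
    counts = defaultdict(Counter)
    for x, y in pairs:
        counts[x][y] += 1
    return {x: c.most_common(1)[0][0] for x, c in counts.items()}

# Historical data where group "m" was approved 90% of the time and
# group "f" only 30% of the time.
train = ([("m", "approve")] * 9 + [("m", "reject")] * 1 +
         [("f", "approve")] * 3 + [("f", "reject")] * 7)

model = fit(train)
print(model)  # {'m': 'approve', 'f': 'reject'}
```

Note what happened: a 90% versus 30% approval skew in the data becomes a 100% versus 0% policy in the model. Real models are far more nuanced, but this is the amplification Socher warns about.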
Baxter agreed, saying bias is error. "We want to ensure each person is given equal opportunity, and to do this the system needs to be scrutable, so we can understand what is happening in the decision-making process."
Both went on to discuss how AI in marketing will create more jobs 10 years from now, including kinds of jobs we cannot even imagine today.
“AI can help target people better and find the right opportunities so people aren’t bothered by things they aren’t interested in. That can open up whole new kinds of jobs that don’t exist right now. It’s hard to predict the impacts technology like AI will have more than five years into the future,” Socher said.
Baxter said marketers should look at AI as a way not to decrease the workforce, but to make it more efficient, to create those moments of delight for customers.
“AI can turn people into super humans, maximise their potential, and make them feel more competent when they are dealing with customers," she said.
"But the quality of the training data really matters. At Salesforce, we help customers see and understand their data so they can identify bias and error and correct it for better outcomes for everybody.”
Socher said when implementing AI, it is vital to keep a ‘beginner’s mind’ because the technology is not perfect, just like humans are not perfect.
“For example, voice generation and speech recognition use more male voices, so we know the default will work less well for women. What that means is AI is not perfect, it is not 100 per cent, and sometimes data is genuinely ambiguous. As you’re rolling out AI, make sure you have feedback loops so you can improve the AI over time.
“It’s important to keep a beginners’ mind. Ensure you have those feedback loops in your AI systems, making sure there’s a person there to escalate service concerns from a bot. In general, as any organisation rolls out AI, they have to be aware of the implications it can have.”
- Vanessa Mitchell travelled to Dreamforce as a guest of Salesforce.