Why 'explainability' around AI is gaining ground

Robotics expert Dr Brian Ruttenberg explains why making clear how machine learning systems reach their decisions is vital as industries adopt the emerging technology

The ability of artificially intelligent (AI) systems to generate recommendations quickly from vast amounts of data has seen them applied to tasks ranging from customer scoring to detecting the presence of melanoma.

But the way in which these systems reach their decisions is not always clear, especially when they use machine learning techniques that enable them to autonomously improve their accuracy over time.

Humans' inability to easily understand how a machine reaches a conclusion is, in part, a natural consequence of why machines are asked to perform these calculations in the first place: no human mind could practically work through the same mathematics.

But when a human being is affected by an AI's decision, such as failing to be approved for credit, or being excluded from receiving certain offers, the desire to know why is natural. The need to understand the processes behind these decisions becomes greater still when there is potential for the machine to reach conclusions based on faulty or biased data that disadvantages some groups within the community.

As the principal scientist at robotics company NextDroid, Dr Brian Ruttenberg has dedicated his working life to this concept of ‘explainability’ in AI. Speaking ahead of his appearance at the Institute of Analytics Professionals of Australia (IAPA) national conference, Ruttenberg says he was inspired to investigate the concept by the rapid adoption of AI in autonomous vehicles.

“With the advent of deep learning and more computational power available the AI systems are just becoming much more complex,” Ruttenberg tells CMO. “These AI decisions have social implications. Explainability is related to the notion of fairness and accountability and transparency.”

Ruttenberg’s interest in explainability has centred on autonomous vehicles. However, he says the adoption of AI in other fields will place pressure on those using it to better explain how they are employing it and why they are confident in relying on its outcomes.

Ruttenberg says marketers have also become more interested in explainability due to the rising use of AI in customer scoring, where an AI might recommend to a salesperson that a particular customer is a good target for a specific offer.

“So they can be told to go look at that person, but have no indication of what causes that person to be a good target so they can go back and improve whatever processes they have,” he says.
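To make that concrete, the sketch below shows one way a lead-scoring model could surface per-feature contributions alongside its score, so a salesperson can see what drove the recommendation. It is a minimal illustration only: the feature names, training data and model choice are assumptions for demonstration, not a description of any actual system.

```python
# Illustrative sketch only: a hypothetical lead-scoring model whose
# per-feature contributions are reported alongside the score.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["recent_purchases", "email_opens", "tenure_years"]  # hypothetical features

# Toy data standing in for historical customer outcomes (1 = responded to a past offer)
X = np.array([[5, 20, 2], [0, 1, 6], [3, 15, 1], [1, 2, 8], [4, 18, 3], [0, 0, 5]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def explain_score(customer):
    """Return the propensity score plus each feature's contribution to the log-odds."""
    contributions = model.coef_[0] * customer
    score = model.predict_proba([customer])[0, 1]
    return score, dict(zip(feature_names, contributions))

score, reasons = explain_score(np.array([4, 12, 2]))
print(f"score={score:.2f}")
for name, value in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```

A simple linear model is used here deliberately because its contributions can be read off directly; the point Ruttenberg raises is that deeper models do not offer that visibility for free.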

In the US, the desire for explainability has come into sharp focus in the financial services sector, where fair lending practices dictate that lenders cannot discriminate.

“If you come up with some fancy new algorithm to decide when to extend credit, you have to show that that is not biased or racist or something like that,” Ruttenberg says. “And if you do that and it is, you’ll get sued.”
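As a hedged illustration of the kind of check a lender might run, the sketch below compares approval rates produced by a scoring model across two groups and computes a simple disparate-impact ratio (the so-called four-fifths rule heuristic). The group labels and decisions are made-up placeholders, not real lending data.

```python
# Illustrative sketch only: compare approval rates across groups and flag
# a large gap. Groups and outcomes here are hypothetical placeholders.
from collections import defaultdict

# (group, approved) pairs as produced by some credit-scoring model
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: approved / total for g, (approved, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio={ratio:.2f}")  # a ratio well below ~0.8 warrants scrutiny
```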

But beyond the financial penalties, another substantial risk is the reputational damage that can flow from unexplainable AI systems demonstrating bias.

Google experienced this when it emerged that the first generation of its visual recognition technology identified people of African descent as gorillas. Even now, results from an MIT Media Lab study by Joy Buolamwini show that recognition accuracy for people of African descent is often poorer than for those of European descent.

Ruttenberg’s research is focused on the concept of causality, and seeks to understand what actually causes a neural network to change its output when its input changes. His goal is to present a chain of causality from the input to the output in terms that people can understand.
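As a rough illustration of that idea, and not a description of Ruttenberg's own method, the sketch below perturbs each input of a small stand-in network one at a time to estimate which input changes move the output most, a first step toward tracing output changes back to their causes. The network weights and inputs are arbitrary assumptions.

```python
# Illustrative sketch only: finite-difference sensitivity of a stand-in
# network, showing which inputs most influence the output.
import numpy as np

def model(x):
    """Stand-in for a trained network: a tiny fixed two-layer net."""
    W1 = np.array([[0.8, -0.5, 0.3], [0.1, 0.9, -0.4]])
    W2 = np.array([0.6, -0.7])
    return float(W2 @ np.tanh(W1 @ x))

def sensitivities(x, eps=1e-4):
    """Approximate d(output)/d(input_i) by perturbing one input at a time."""
    base = model(x)
    grads = []
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps
        grads.append((model(bumped) - base) / eps)
    return np.array(grads)

x = np.array([0.2, -1.0, 0.5])
print(sensitivities(x))  # larger magnitude = input with more influence on the output
```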

“In a bunch of industries you need this capability for a variety of reasons, and if you don’t have it there are going to be some serious problems of a legal and financial nature,” he says. “It is just becoming a requirement in a sense, and people who are developing complex AI systems need to be aware of it. You don’t want to have some newfangled AI system now that is not explainable.”
