Why we need to look inside the black box of AI

How artificial intelligence engines make decisions is just one of a number of logistical and ethical reasons we need to be investigating these systems more closely, says this Stanford University expert

In the late 18th century, Wolfgang von Kempelen amazed the world with his chess-playing automaton, the Mechanical Turk, as it defeated worthy opponents including Napoleon Bonaparte and Benjamin Franklin.

Onlookers marvelled at how a device of gears and sprockets could master the complexities of chess, and demanded to know what was inside.

As it turned out, the chassis of the Mechanical Turk actually housed a human being – a simple yet highly effective deceit made possible by the inability of onlookers to take a peek inside.

While the story of the Mechanical Turk is more than two centuries old, it may be a useful parable for modern society’s increasing use of artificial intelligence (AI), and for how often we rely on findings with little understanding of how they are reached.

The opacity of AI systems creates significant potential for abuse, particularly through the long-term impact of intentional or even unintentional biases in the data. The topic has become prominent in the thinking of many computer scientists and researchers, and has given rise to the concept of Explainable AI.

One of the advocates for the potential of Explainable AI is Stanford University’s Executive Director of Strategic Research Initiatives in Computer Science, Dr Stephen Eglash.

Speaking to CMO ahead of his appearance at the Melbourne Business Analytics Conference on 19 July, Dr Eglash described how we have created AI systems that are effectively ‘black boxes’, whose results and decisions can be upsetting to the people they affect, especially when no one can see how those results are produced.

“If an AI system is helping a loan officer at a bank determine whether you are eligible for a loan, and it decides to reject your loan application, it’s reasonable to expect an explanation,” Dr Eglash says.

The loan applicant Dr Eglash describes may have no idea how the system came to reject their application – or whether it might have been working from data or an algorithm that was somehow prejudiced against the applicant.

“So explainable AI refers to a whole group of different approaches that people are taking to peel back the curtain on that and understand the inner workings of these AI systems,” Dr Eglash says.

One of these methods is to test AI systems with different inputs to determine how they make decisions. A simple example is an AI that has been designed to recognise images of cats. Recent research shows that by feeding it partial or doctored images, it is possible to determine which parts of an image correspond to “cat” and the point at which the AI’s ability to recognise cats fails (a rough sketch of this kind of probing appears below). Dr Eglash says this process might also make AIs more resilient, by uncovering the circumstances under which they might fail to make the best decisions.
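To make the idea concrete, here is a minimal sketch of that kind of perturbation test. It is not Dr Eglash’s or Stanford’s method; it is a generic occlusion probe written for illustration, and it assumes a hypothetical predict_cat_prob function that wraps whichever image classifier is being examined and returns its “cat” score.

```python
# A minimal occlusion-style probe: slide a grey patch across an image and
# record how much the classifier's "cat" score drops when each region is
# hidden. predict_cat_prob is a placeholder for the model under examination.

import numpy as np

def occlusion_map(image: np.ndarray, predict_cat_prob,
                  patch: int = 16, stride: int = 8) -> np.ndarray:
    """Return a heatmap where high values mark regions the model relies on.

    image            -- H x W x C array of pixel values
    predict_cat_prob -- callable mapping an image to the model's cat probability
    patch, stride    -- size and step of the occluding grey square
    """
    h, w, _ = image.shape
    baseline = predict_cat_prob(image)           # score for the untouched image
    heatmap = np.zeros((h, w))
    counts = np.zeros((h, w))

    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch, :] = image.mean()  # hide this region
            drop = baseline - predict_cat_prob(occluded)   # how far the score falls
            heatmap[top:top + patch, left:left + patch] += drop
            counts[top:top + patch, left:left + patch] += 1

    return heatmap / np.maximum(counts, 1)       # average over overlapping patches
```

Regions where hiding the pixels causes the biggest drop in the score are the ones the model is leaning on, while places where the score collapses unexpectedly point to the kinds of blind spots Dr Eglash describes.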

Bringing in an ethical perspective

In his role at Stanford University, Dr Eglash meets with as many as 200 companies and other organisations each year, and he says he is pleased that many of these conversations now include some reference to the ethics of emerging technology.

“They appear to believe that a company’s reputation for integrity matters,” Dr Eglash says. “And they increasingly seem to believe society’s values ought to become their corporate values as well and avoid going down the path of certain companies that have gotten in trouble for various sorts of bad behaviour or poor values.”

He points out that self-interest can be a useful driver of these conclusions, whether that is the desire to attract and retain younger workers or to send a positive message to customers.

“Whatever the motivations, I definitely have a sense from the companies that I meet with that we are hearing a lot more about humanitarian and social concerns,” Dr Eglash says.

So while the current era represents an incredibly exciting time for marketing and customer service, Dr Eglash says marketers shouldn’t leave their integrity at the front door when they arrive at work.

“It needs to become a part of who you are and a part of everything you and your company do,” Dr Eglash says. “Empathy goes a long way. Respect for the individual goes a long way. Just because you have the data doesn’t give you unlimited rights to do things with it, and as you judge what is appropriate and inappropriate, look at it not only through your own lens but also at how it is going to appear to others.”
