Katja Forbes: AI is going to test your organisation’s trustworthiness

Design expert shares how artificial intelligence is evolving, why it's important to set customer and employee expectations as a marketer or IT leader, and the implications for your customer approach

Katja Forbes at the CIO-CMO Executive Connections event in Auckland

Setting clear expectations and building trustworthiness are two vital ingredients every organisation should be paying close attention to as they start to transform their customer approach using artificial intelligence (AI).

That’s the view of design expert and industry thought leader, Katja Forbes, who took to the stage at CMO and CIO’s third-annual Executive Connections event in Auckland to share early lessons around how AI capability can be deployed by organisations, and the very human implications machines represent.

“AI is not a chatbot, platform or algorithm, it’s a set of capabilities and these might be something you never see, or something that’s presented to the end user,” the MD of DesignIT Australia and New Zealand told attendees.

“Which means we’re in a situation now where we as consumers have to trust the invisible: that what is going on behind the scenes, and the decisions being made that we never see, are right and in our societal interests, or at least aren’t doing us any harm.

“Anything your organisation designs and puts out there as part of your product set is going to face these questions, too.”

What this means is companies are going to need to do a lot of work to build trustworthiness, Forbes said. “This is because they’re becoming more and more opaque in the way they’re using data, in what they’re doing with data, in how they’re making these decisions and delivering back to customers,” she continued.

“There is this tension you’re going to have to work with when using these technologies: between them being so beneficial, and ensuring customers trust the way you’re using them and what you’re doing with their data.”

One stumbling block is that AI is susceptible to bias. As Forbes pointed out, humans are teaching AI capabilities ethical standards and biases through the data sets they feed such systems to learn from.

“AI is learning from human culture. And in doing so, it’s perpetuating stereotypes, biases and extremely bad behaviours,” Forbes said.

A classic example is Microsoft’s Tay, a chatbot on Twitter that within 24 hours became riddled with sexist, racist and Nazi biases. Another failed AI experiment Forbes pointed to was Amazon’s AI for resume screening. Having been fed 10 years’ worth of data about the types of candidates the company liked, who had often been men, the AI started screening out CVs with any reference to ‘women’ in them. Unable to fix it, Amazon pulled the bot.

However, on the flip side, launching a completely politically correct AI isn’t necessarily great for society either, Forbes argued. Microsoft’s next step after Tay for example, was to launch a sister product, Zo, an incredibly politically correct bot.

“If you hit any of her [Zo’s] triggers – Muslims, Jews, religion or high-profile American politicians – she stops the conversation,” Forbes said. “This is an interesting situation because we now have AI censoring without any context. I don’t know if that is just as bad, or worse. This is AI now making a judgment call about what is right and correct to talk about.”

To try and help, Microsoft has published principles for AI, which Forbes encouraged all organisations to consider. These include fairness and striving to avoid perpetuating biases; reliability and safety; transparency about how individual data is being used; accountability; and inclusion.

In addition, for AI projects underway now, it’s vital to set expectations with end customers. This is because the first thing people want to establish when it comes to AI is whether or not there is minimum viable intelligence in the system.

“Firstly, we check if it’s going to respond to us,” Forbes explained, referencing research undertaken by Contact Scout on AI interaction. “Once you establish there is responsiveness in the system, the second thing people are going to do is check out whether it is competent or not. Can it do the thing you have set expectations around in terms of what it can do?”

Contact Scout’s research found the first 10 interactions with an AI need to be flawless in order for users to accept it. “Because the third thing people tend to do with AI is try and break it… That testing generates or loses faith in your AI very quickly,” Forbes said.

She pointed out the first question consumers asked ME Bank’s IBM Watson-based chatbot interface was: Would you marry me?

“People are trying to get underneath it and cause the issues. And if you can’t even use a voice assistant to set a reminder on your phone, you’re not going to trust it with your credit card details,” Forbes said.  

“A great example of an AI/ML that does what it says and explains the constraints around it is Babylon Health. It’s an interactive symptom checker, which uses the deep learning system behind it to come up with recommendations on what you should do. It explicitly states it doesn’t provide a diagnosis, but rather an indication of what you should do. Throughout the site, at every point, it sets the expectation it’s created by doctors and scientists but it’s not a doctor.”

Setting expectations around AI and what it can do is also crucial for marketing, digital and IT leaders when it comes to employees and executives. “You will disappoint them superfast if you set them up to believe it can do something it cannot do,” Forbes warned.  

Exacerbating the issue is what Forbes saw as misconceptions about the types of AI available to organisations today, and she highlighted three key forms of AI in her presentation. The first is ‘narrow AI’, which has the capability to undertake one or two tasks within particular parameters, such as booking a table at a restaurant, or driving a self-driving car.

“The second is general AI, which has enough cognitive ability and understanding of an environment, and can process the data that comes to it far faster than humans can. Much like C-3PO in Star Wars, it understands circumstances and context, then processes the odds accordingly. That doesn’t exist right now and isn’t available to us yet,” Forbes said.

“Yet when we talk about AI, this is what a lot of people think we’re talking about.

“The third flavour of AI is this concept of ‘superhuman AI’. To compare this intelligence to a human would be like comparing human intelligence to an ant. Again, this doesn’t exist either.”  

As a result, the majority of AI work today is in fact using machine learning and working within narrow constraints, Forbes said.

How to choose and run an AI project

So how can you successfully adopt AI and machine learning? As a checklist of practical steps for organisations starting to explore these projects, Forbes referenced key questions created by Andrew Ng, founder and CEO of LandingAI and former technical product owner of Google Brain.

The first: Will it give you a quick win? “You need to choose something that will give you an ability to show value in 6-12 months,” Forbes advised. “Also, is it too trivial or unwieldy? Is it too small, or big, and is it manageable?”

Creating something specific to your industry will also make it easier for your business to reinvest and put more money behind AI efforts. “For example, if you’re in health services and you’re looking at an AI that helps screen CVs, chances are someone out there has already done it,” Forbes said.

“Try something health specific, such as helping doctors triage and create treatment plans. That’s going to be much more successful.”

With AI talent so scarce globally, another key question is whether it makes sense to accelerate projects by getting partners to help stand up and support AI projects. Forbes also asked: Are you creating value?

“What in the project is creating value for your organisation – is it creating efficiencies, or allowing you to launch a new product? And how can you measure that value?”

As a final note, Forbes outlined the ‘Augmented Services Platform Canvas’ produced by Pontus Warnestal of Halmstad University and available via Creative Commons as a good tool.

“This places ethics and risk, impact and values, and problems and consequences at the top, which is where they should be,” Forbes added. “These are the most important places to start with an AI project.

“And the runway on this is extremely short. Those who have started: fantastic. Those looking to get started, I’d get that engine going pretty fast, otherwise you’ll be left behind.”
