Katja Forbes: AI is going to test your organisation’s trustworthiness

Design expert shares how artificial intelligence is evolving, why it's important to set customer and employee expectations as a marketer or IT leader, and the implications for your customer approach

Katja Forbes at the CIO-CMO Executive Connections event in Auckland

Setting clear expectations and building trustworthiness are two vital ingredients every organisation should be paying close attention to as they start to transform their customer approach using artificial intelligence (AI).

That’s the view of design expert and industry thought leader, Katja Forbes, who took to the stage at CMO and CIO’s third-annual Executive Connections event in Auckland to share early lessons around how AI capability can be deployed by organisations, and the very human implications machines represent.

“AI is not a chatbot, platform or algorithm, it’s a set of capabilities and these might be something you never see, or something that’s presented to the end user,” the MD of DesignIT Australia and New Zealand told attendees.

“Which means we’re in a situation now where we as consumers have to trust invisibility — that what is going on behind the scenes, and those decisions being made that we never see, are right and in our societal interests, or aren’t doing us any harm.

“Anything your organisation designs and puts out there as part of your product set is going to face these questions, too.”

What this means is companies are going to need to do a lot of work to build trustworthiness, Forbes said. “This is because they’re becoming more and more opaque in the way they’re using data, in what they’re doing with data, in how they’re making these decisions and delivering back to customers,” she continued.

“There is this tension you’re going to have to work with using these technologies, between being so beneficial and ensuring customers trust the way you’re using them and what you’re doing with their data.”  

One stumbling block is that AI is susceptible to bias. As Forbes pointed out, humans are teaching AI capabilities ethical standards and biases through the data sets they feed such systems to learn from.

“AI is learning from human culture. And in doing so, it’s perpetuating the stereotypes, biases and extremely bad behaviours,” Forbes said.

A classic example is Microsoft’s Tay, a chatbot on Twitter that within 24 hours became riddled with sexist, racist, Nazi biases. Another failed AI experiment Forbes pointed to was Amazon’s AI for resume screening. Having been fed 10 years’ worth of data about the types of candidates the company liked, who had often been men, the AI started screening out CVs with any reference to ‘women’ in them. Unable to fix it, Amazon pulled the bot.

However, on the flip side, launching a completely politically correct AI isn’t necessarily great for society either, Forbes argued. Microsoft’s next step after Tay for example, was to launch a sister product, Zo, an incredibly politically correct bot.

“If you hit any of her [Zo’s] triggers – Muslims, Jews, religion or high-profile American politician – she stops the conversation,” Forbes said. “This is an interesting situation because we now have AI censoring without any context. I don’t know if that is as bad or worse. This is AI now making a judgment call about what is right and correct to talk about.”

To try and help, Microsoft has launched principles for AI which Forbes encouraged all organisations to consider. These include fairness and striving to avoid perpetuating biases; reliability and safety; transparency around how individual data is being used; accountability; and inclusion.

In addition, for AI projects being focused on now, it’s vital to set expectations with end customers. This is because the first thing people want to establish when it comes to AI is whether or not there is minimum viable intelligence in the system.

“Firstly, we check if it’s going to respond to us,” Forbes explained, referencing research undertaken by Contact Scout on AI interaction. “Once you establish there is responsiveness in the system, the second thing people are going to do is check out whether it is competent or not. Can it do the thing you have set expectations around in terms of what it can do?”

Contact Scout’s research found the first 10 interactions with AI need to be flawless for a user in order to be accepted. “Because the third thing people tend to do with AI is try and break it… That testing generates or loses faith in your AI very quickly,” Forbes said.  

She pointed out the first question consumers asked ME Bank’s IBM Watson-based chatbot interface was: Would you marry me?

“People are trying to get underneath it and cause the issues. And if you can’t even use a voice assistant to set a reminder on your phone, you’re not going to trust it with your credit card details,” Forbes said.  

“A great example of an AI/ML that does what it says and explains the constraints around it is Babylon Health. It’s an interactive symptom checker, which uses the deep learning system behind it to come up with recommendations on what you should do. It explicitly states it doesn’t provide a diagnosis, but rather an indication of what you should do. Throughout the site, at every point, it sets the expectation it’s created by doctors and scientists but it’s not a doctor.”

Setting expectations around AI and what it can do is also crucial for marketing, digital and IT leaders when it comes to employees and executives. “You will disappoint them superfast if you set them up to believe it can do something it cannot do,” Forbes warned.  

Exacerbating the issue is what Forbes saw as misconceptions about the types of AI available to organisations today, and she highlighted three key forms of AI in her presentation. The first is ‘narrow AI’, which has the capability to undertake one or two tasks within particular parameters, such as booking a table at a restaurant, or driving a car.

“The second is general AI, which has enough cognitive ability and understanding of an environment, and can also process data that comes to it at a speed far faster than humans. Much like C-3PO in Star Wars, it understands circumstances and context, then processes the odds accordingly. That doesn’t exist right now and isn’t available to us yet,” Forbes said.

“Yet when we talk about AI, this is what a lot of people think we’re talking about.

“The third flavour of AI is this concept of ‘superhuman AI’. To compare this intelligence to a human would be like comparing human intelligence to an ant. Again, this doesn’t exist either.”  

As a result, the majority of AI work today is in fact using machine learning and working within narrow constraints, Forbes said.

How to choose and run an AI project

So how can you successfully adopt AI and machine learning? As a checklist of practical steps for organisations starting to explore these projects, Forbes referenced key questions created by Andrew Ng, founder and CEO of Landing AI and former lead of Google Brain.

The first: Will it give you a quick win? “You need to choose something that will give you an ability to show value in 6-12 months,” Forbes advised. “Also, is it too trivial or unwieldy? Is it too small, or big, and is it manageable?”

Creating something specific to your industry will also make it easier for your business to reinvest and put more money behind AI efforts. “For example, if you’re in health services and you’re looking at an AI that helps screen CVs, chances are someone out there has already done it,” Forbes said.

“Try something health specific, such as helping doctors triage and create treatment plans. That’s going to be much more successful.”

With AI talent so scarce globally, another key question is whether it makes sense to accelerate projects by getting partners to help stand up and support AI initiatives. Forbes also asked: Are you creating value?

“What in the project is creating value for your organisation — is it creating efficiencies, or allowing you to launch a new product? And how can you measure that value?”

As a final note, Forbes outlined the ‘Augmented Services Platform Canvas’ produced by Pontus Warnestal of Halmstad University and available via Creative Commons as a good tool.

“This places ethics and risk, impact and values, and problems and consequences at the top, which is as it should be,” Forbes added. “These are the most important places to start with an AI project.

“And the runway on this is extremely short. Those who have started: fantastic. Those looking to get started, I’d get that engine going pretty fast, otherwise you’ll be left behind.”
