AI ethics: Designing for trust

Katja Forbes

Katja Forbes is an Australian pioneer in the field of experience design and all of its components – research, emerging technology, service design, customer experience (CX) and user experience (UX). She is the managing director of Designit Australia and New Zealand, a strategic design firm that works with ambitious brands to create high-impact products, services, systems and spaces that people love. Katja is proud to be International Director on the global board of the Interaction Design Association (IxDA). Back in 2014, Katja founded syfte, a specialist business in research and experience design. In late 2018, the business was acquired by the international firm Wipro, and she was appointed Australian managing director of Designit. Katja was also a co-founding member of IxDA Sydney. Together with Joe Ortenzi, she has built a community of over 1600 designers in Sydney, providing them with learning opportunities via lecture-based meetups that draw a crowd of 150 people each time, a mentoring program and workshops. One of Katja’s personal motivations is to inspire other women, especially in her industry, to reach toward their own definition of professional success. She is a sought-after keynote speaker for organisations and associations including Women In Design, Leaders in Heels, Women In Commerce, Code Like a Girl (Melbourne Knowledge Week), the inaugural CMO Conference 2017, Telstra and Macquarie Bank. Katja was recognised as one of the Top 10 Australian Women Entrepreneurs 2018 by My Entrepreneur Magazine and named one of the 100 Women of Influence by Westpac and the Australian Financial Review in 2016.

As artificial intelligence (AI) becomes more prevalent and increasingly a part of everyday life, more questions are being asked than answered about the ethical implications of its adoption.

For example, do humans have any moral obligations towards their machines? Do ‘robots’ or AI machines have any obligations of their own? And importantly, how do we build something consumers trust even though they can’t see it?

When designing for AI, therefore, a number of questions must be asked to ensure your organisation and the designers behind the solution are creating a product that meets the highest ethical standards for the situation.

For starters, customers want to be able to trust that you have created something that fits their ethical values as closely as possible. These values may differ from those of the designer, so it’s important to communicate them before you begin, as well as during the design itself. Just as we trust a mechanic to fix our car’s engine, or a doctor to fix what is wrong with our organs, when it comes to AI we need to trust that the designer has designed something that is indeed ethical.

That’s because a great many things critical to our lives today are happening invisibly. Brands are becoming more and more opaque in the way they use data, in what they do with it, in how they make these decisions and in how they deliver back to customers.

To place our trust in brands as consumers, therefore, we need to be convinced that when the technology was created, it was created by someone who asked the right questions.

At an organisational level, doing your research before contracting a company to build your AI tool is one way of finding the right designer. Microsoft, for example, is keen to share its own values so clients can make an informed decision about working with the company. Values such as transparency, reliability and inclusiveness are what Microsoft’s customers place an emphasis on, and it’s by constantly exhibiting these values that Microsoft ensures it retains their trust.

We also do not want our own bias to be reflected in the design itself. The problem with bias, however, is that we often don’t know our opinions exhibit it.

One example often used to illustrate how difficult the dilemma can be is an AI car that loses control and will hit a baby if it swerves left, or an elderly person if it swerves right. A designer may program the car to always swerve towards the elderly person in such a situation. But this is a subjective, rather than objective, question. The client may feel the opposite choice is the right one.
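To make the dilemma concrete, here is a deliberately simplified sketch – hypothetical Python, not drawn from any real vehicle’s software – of how such a value judgement ends up in code. The harm weights below are pure assumptions on the designer’s part; change the numbers and the system’s ‘ethics’ change with them.

```python
from enum import Enum

class Outcome(Enum):
    SWERVE_LEFT = "swerve left"    # endangers the baby in this scenario
    SWERVE_RIGHT = "swerve right"  # endangers the elderly person

# The designer's value judgement, hard-coded as if it were objective fact.
# A different designer, or the client, might invert this ordering entirely.
HARM_WEIGHTS = {Outcome.SWERVE_LEFT: 1.0, Outcome.SWERVE_RIGHT: 0.8}

def choose(options):
    """Pick whichever option the weights rank as least harmful."""
    return min(options, key=lambda o: HARM_WEIGHTS[o])

print(choose(list(Outcome)))  # -> Outcome.SWERVE_RIGHT
```

Nothing in the code signals that HARM_WEIGHTS is an ethical stance rather than an engineering constant, which is exactly why it needs to be surfaced and agreed with the client.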

It’s also the case that AI learns from human behaviour and stereotypes. An example is Microsoft’s Tay, a chatbot on Twitter that within 24 hours became riddled with sexist, racist and Nazi biases. Another failed AI experiment was Amazon’s AI for resume screening. Having been fed 10 years’ worth of data about the types of candidates the company liked – often men – the AI started screening out CVs with any reference to ‘women’ in them. Unable to fix it, Amazon pulled the bot.
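A toy example makes the mechanism visible. The sketch below uses synthetic CVs and a generic off-the-shelf classifier – an assumption for illustration, not Amazon’s actual system – to show how a model trained on historically biased decisions learns to penalise a word that says nothing about a candidate’s skills.

```python
# A toy illustration of bias inherited from training data: synthetic CVs
# and a generic classifier, not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical screening decisions that happened to favour men: the outcome
# correlates with the word "women's", not with any difference in skill.
cvs = [
    "captain men's chess club, python developer",
    "python developer, five years systems experience",
    "captain women's chess club, python developer",
    "women's coding society lead, python developer",
]
hired = [1, 1, 0, 0]  # the biased historical outcomes the model learns from

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative: the word itself is now penalised
```

The model is doing exactly what it was asked to do – reproduce the past – which is why auditing the training data matters as much as auditing the algorithm.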

So how is this resolved? Here are a few questions to help create useful and objective AI experiences*:

  • What data needs to be collected, and why?
  • How will any data be used?
  • Who owns the data once collected?
  • What impact do we want with the AI tool?
  • What KPIs do we need to measure this?
  • How will someone’s rights be affected?
  • How will this new AI affect working conditions?
  • How will this new AI affect human rights?
  • Have we taken all perspectives into account?
  • How does this AI create a magical experience for customers and employees?
  • Do we rely on any third-party services?
  • How are stakeholders’ interests dealt with?

By giving careful thought to your answers to these questions, you will have the best opportunity to create an AI tool or experience that takes the values of all stakeholders into account.
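For teams that want these questions to act as a standing gate rather than a one-off discussion, here is a hedged sketch of one possible approach – a hypothetical checklist script, not a standard tool – that flags a design review as incomplete until every question has a written answer.

```python
# A hypothetical pre-launch gate built from the questions above; one way a
# team might require a written answer to every question before an AI ships.
REVIEW_QUESTIONS = [
    "What data needs to be collected, and why?",
    "How will any data be used?",
    "Who owns the data once collected?",
    "How will someone's rights be affected?",
    "Do we rely on any third-party services?",
    # ...plus the remaining questions from the list above.
]

def unanswered(answers):
    """Return every question still missing a substantive answer."""
    return [q for q in REVIEW_QUESTIONS if not answers.get(q, "").strip()]

answers = {"How will any data be used?": "Personalisation only; never resold."}
missing = unanswered(answers)
if missing:
    print(f"Design review incomplete: {len(missing)} question(s) unanswered")
```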

*Reference: Augmented Service Platform Canvas – Pontus Wärnestål, Associate Professor, Halmstad University

- This article originally appeared in CMO's print magazine, Issue 1, 2019.
