AI ethics: Designing for trust

Katja Forbes

Katja Forbes founded syfte, a specialist research and experience design business, in 2014, and is an Australian pioneer in the field of experience design. She is an International Director on the Interaction Design Association board, and a proud co-founding member and Sydney Local Leader of that global community organisation. Together with Joe Ortenzi, Katja has built a community of over 1700 designers in Sydney, providing them with learning opportunities via lecture-based meetups that draw a crowd of 150 people each time, a mentoring program and workshops.

As artificial intelligence (AI) becomes much more prevalent and increasingly a way of life, more questions are being asked than answered about the ethical implications of its adoption.

For example, do humans have any moral obligations towards their machines? Do ‘robots’ or AI machines have any obligations of their own? And, importantly, how do we build something consumers trust even though they can’t see it?

When designing for AI, therefore, a number of questions must be asked to ensure your organisation, and the designers behind the solution, are creating a product that meets the highest ethical standards for the situation at hand.

For starters, customers want to be able to trust you have created something that fits as closely to their ethical values as possible. These may differ from those of the designer, so it’s important to communicate them before you begin, as well as during the design itself. Just as we trust a mechanic to fix our car’s engine, or a doctor to fix what is wrong with our organs, when it comes to AI, we need to trust the designer has built something that is indeed ethical.

That’s because a great many things critical to our lives today are happening invisibly. Brands are becoming more and more opaque in how they use data, what they do with it, how they make these decisions and how they deliver back to customers.

To place our trust in brands as consumers, therefore, we need to be convinced that when the technology was created, it was created by someone who asked the right questions.

At an organisational level, doing your research before contracting a company to build your AI tool is one way of finding the right designer. Microsoft, for example, is keen to share its own values so clients can make an informed decision about working with the company. Microsoft’s customers place an emphasis on values such as transparency, reliability and inclusiveness, and it’s by constantly exhibiting these values that Microsoft ensures it retains their trust.

We also do not want our bias to be reflected in the design itself. The problem with bias, however, is that we often don’t know our opinions exhibit it.

One example often used to illustrate how difficult the dilemma can be is an AI car that loses control and will hit a baby if it swerves left, or an elderly person if it swerves right. A designer may program the car to always swerve towards the elderly person in such a situation. But this is a subjective, rather than objective, decision; the client may feel the opposite is right.
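The point can be made concrete with a small sketch. This is purely illustrative (not any real vehicle's code, and the names `SwervePolicy` and `emergency_swerve` are hypothetical): the ethical rule is a value judgement that should be an explicit, reviewable setting rather than a constant buried in the code.

```python
# Illustrative sketch only -- names and API are hypothetical, not real
# vehicle code. The point: the 'ethical' rule is a value judgement that
# should be an explicit, reviewable setting, not a buried constant.
from enum import Enum

class SwervePolicy(Enum):
    # Swerving right hits the elderly person; swerving left hits the baby.
    PROTECT_YOUNGEST = "swerve_right"  # the designer's choice above
    PROTECT_ELDEST = "swerve_left"     # a client may feel the opposite

def emergency_swerve(policy: SwervePolicy) -> str:
    # The code merely executes whichever value judgement was configured;
    # it cannot decide which configuration is 'correct'.
    return policy.value

# Two equally executable configurations -- which one is right is an
# ethical question for the stakeholders, not for the compiler.
assert emergency_swerve(SwervePolicy.PROTECT_YOUNGEST) == "swerve_right"
```

Surfacing the choice this way means it can be discussed and agreed with the client, rather than discovered after the fact.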

It’s also the case that AI learns from human behaviour and stereotypes. An example is Microsoft’s Tay, a chatbot on Twitter that within 24 hours became riddled with sexist, racist, Nazi biases. Another failed AI experiment was Amazon’s AI for resume screening. Having been fed 10 years’ worth of data about the types of candidates the company liked – often men – the AI started screening out CVs with any reference to ‘women’ in them. Unable to fix it, Amazon pulled the bot.
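How this kind of bias arises can be shown with a minimal, hypothetical sketch (this is not Amazon's actual system; the data and function names are invented). A naive word scorer trained on historical hiring labels learns a negative weight for the word 'women' purely because past 'hired' examples rarely contain it:

```python
# Hypothetical toy example, not Amazon's actual screening system: a naive
# word-frequency scorer trained on biased historical hiring labels.
import math
from collections import Counter

def train(examples):
    """examples: list of (resume_text, hired_bool) -> per-word log-odds."""
    hired, rejected = Counter(), Counter()
    for text, label in examples:
        (hired if label else rejected).update(text.lower().split())
    words = set(hired) | set(rejected)
    # Smoothed log-odds of each word appearing in hired vs rejected CVs
    return {w: math.log((hired[w] + 1) / (rejected[w] + 1)) for w in words}

def score(model, text):
    return sum(model.get(w, 0.0) for w in text.lower().split())

# Historical data encodes past (biased) human decisions
history = [
    ("chess club captain engineer", True),
    ("rugby team engineer", True),
    ("women in tech mentor engineer", False),
    ("women chess society engineer", False),
]
model = train(history)
# 'women' gets a negative weight purely from the biased labels, so any CV
# mentioning it is marked down -- exactly the failure described above.
assert model["women"] < 0
```

The model never sees gender as a field; it simply reproduces the pattern in the labels it was fed, which is why the bias was so hard to remove.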

So how is this resolved? Here are a few questions to help create useful and objective AI experiences*:

  • What data content needs to be collected and why?
  • How will any data be used?
  • Who owns the data once collected?
  • What impact do we want with the AI tool?
  • What KPIs do we need to measure this?
  • How will someone’s rights be affected?
  • How will this new AI affect working conditions?
  • How will this new AI affect human rights?
  • Have we taken all perspectives into account?
  • How does this AI create a magical experience for customers and employees?
  • Do we rely on any third-party services?
  • How are stakeholders’ interests dealt with?

By giving careful thought to these answers, you will have the best opportunity to create an AI tool or experience that takes the values of all stakeholders into account.

*Reference: Augmented Service Platform Canvas – Pontus Wärnestål, Associate Professor, Halmstad University

- This article originally appeared in CMO's print magazine, Issue 1, 2019.
