AI ethics: Designing for trust

Katja Forbes

  • Managing director of Designit, Australia and New Zealand
Katja is an Australian pioneer in the field of experience design and all its components. Back in 2014, Katja founded syfte, a specialist business in research and experience design that was acquired by Wipro in 2018; she was then appointed Australian MD of Designit. Katja was also a co-founding member of the Interaction Design Association (IxDA) board in Sydney, helping build a community of over 1600 designers, and today is international director on IxDA's global board. Katja is a sought-after speaker for organisations including Women In Design, Leaders in Heels, Women In Commerce, Code like a Girl, Telstra and Macquarie Bank. She was recognised as a Top 10 Australian Women Entrepreneurs 2018 by My Entrepreneur Magazine and as one of the 100 Women of Influence by Westpac and the Australian Financial Review in 2016.

As artificial intelligence (AI) becomes much more prevalent and increasingly a way of life, more questions are being asked than answered about the ethical implications of its adoption.

For example, do humans have any moral obligations towards their machines? Do ‘robots’ or AI machines have any obligations of their own? And importantly, how do we build something consumers trust even though they can’t see how it works?

When designing for AI, therefore, a number of questions must be asked to ensure your organisation and the designers behind the solution are creating a product that meets the highest ethical standards for the situation.

For starters, customers want to be able to trust you have created something that fits as closely to their ethical values as possible. These may differ from that of the designer, so it’s important to communicate these before you begin, as well as during the design itself. Just as we trust a mechanic to fix our car’s engine, or a doctor to fix what is wrong with our organs, when it comes to AI, we need to trust the designer has designed something that is indeed ethical.

That’s because a great many things critical in our lives today are happening invisibly. Brands are becoming more and more opaque in the way they’re using data, in what they’re doing with data, in how they’re making these decisions and in how they’re delivering back to customers.

To place our trust in brands as consumers, therefore, we need to be convinced that when the technology was created, it was created by someone who asked the right questions.

At an organisational level, doing your research before contracting a company to build your AI tool is one way of finding the right designer. Microsoft, for example, is keen to share its own values so clients can make an informed decision about working with the company. Microsoft’s customers place an emphasis on values such as transparency, reliability and inclusiveness, and it’s by constantly exhibiting these values that Microsoft ensures it retains their trust.

We also do not want our biases to be reflected in the design itself. The problem with bias, however, is that we often don’t know our opinions exhibit it.

One example often used to describe how difficult the dilemma can be is what happens if an AI-driven car loses control and will hit a baby if it swerves left, or an elderly person if it swerves right. A designer may program the car to always swerve towards the elderly person in such a situation. But this is a subjective, rather than objective, decision; the client may feel the opposite choice is the right one.

It’s also the case that AI learns from human behaviour and stereotypes. One example is Microsoft’s Tay, a chatbot on Twitter that within 24 hours became riddled with sexist, racist and Nazi biases. Another failed AI experiment was Amazon’s resume-screening AI. Having been fed 10 years’ worth of data about the types of candidates the company liked – often men – the AI started screening out CVs containing any reference to ‘women’. Unable to fix it, Amazon pulled the bot.
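Problems like Amazon’s can be caught with simple audits before a tool ships. The sketch below is a hypothetical illustration (not Amazon’s or anyone’s actual method) of one common fairness check, the ‘disparate impact’ ratio, which compares selection rates across groups; the data and the four-fifths threshold are assumptions for the example.

```python
# Hypothetical bias audit: compare how often a screening tool selects
# candidates from different groups. All data here is invented.

def disparate_impact(decisions):
    """decisions: list of (group, was_selected) pairs.
    Returns the ratio of the lowest group selection rate to the highest."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Toy screening outcomes: the model selects men far more often.
outcomes = [("men", True)] * 60 + [("men", False)] * 40 \
         + [("women", True)] * 20 + [("women", False)] * 80

ratio = disparate_impact(outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.60 = 0.33
if ratio < 0.8:  # the commonly cited "four-fifths rule" threshold
    print("Warning: selection rates differ substantially between groups")
```

A ratio well below 1.0, as here, is a signal to investigate the training data before the tool reaches customers.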

So how is this resolved? Here are a few questions to help create useful and objective AI experiences*:

  • What data content needs to be collected and why?
  • How will any data be used?
  • Who owns the data once collected?
  • What impact do we want with the AI tool?
  • What KPIs do we need to measure this?
  • How will someone’s rights be affected?
  • How will this new AI affect working conditions?
  • How will this new AI affect human rights?
  • Have we taken all perspectives into account?
  • How does this AI create a magical experience for customers and employees?
  • Do we rely on any third-party services?
  • How are stakeholders’ interests dealt with?

By giving careful thought to these answers, you will have the best opportunity to create an AI tool or experience that takes the values of all stakeholders into account.

*Reference: Augmented Service Platform Canvas – Pontus Wärnestål, Associate Professor, Halmstad University

- This article originally appeared in CMO's print magazine, Issue 1, 2019.

