Is artificial intelligence riddled with bias?

Katja Forbes

Katja Forbes founded syfte, a specialist research and experience design business, in 2014, and is an Australian pioneer in the field of experience design. She is an International Director on the Interaction Design Association (IxDA) board, and is proud to be a co-founding member and Local Leader of IxDA's Sydney chapter. Together with Joe Ortenzi, Katja has built a community of more than 1,700 designers in Sydney, providing them with learning opportunities through lecture-based meetups that draw around 150 people each time, a mentoring program and workshops.


The purpose of Artificial Intelligence (AI) has always been to take over the menial, repetitive tasks we do each day in every sector, so that we can concentrate on doing what we do best. Saving time and money has certainly been a welcome outcome as AI infiltrates the business landscape. However, we are now starting to see problems that cause major issues in practice.

One of the first major cases of bias in an AI solution dates back to 2014, when Amazon created an AI tool to evaluate job seekers' resumes. The tool worked well aside from one major flaw: it turned out to be biased towards men.

In other words, the algorithm Amazon staff had spent years creating housed this fatal flaw. In the end, the only thing that could be done was to abandon the tool altogether, as Amazon couldn't guarantee it would stop favouring male applicants.

The discrimination shown by the AI tool puzzled many at first. Obviously, it wasn't deliberately built to discriminate against any job seeker on the basis of sex, age or nationality. The discrimination happened organically. The tool's developers realised they had trained it on the resumes of everyone who had secured positions over the previous 10 years, and, because the tech industry has long been male-dominated, most of those successful applicants were men.

This example demonstrates the problem with building AI tools that use the past as the template for a machine-operated future. The aim is clearly a future where menial tasks are automated away and staff can focus on more important, skilled work. However, it is easy to see how unwanted biases can creep in while these tools are being built, such as a strong preference for male applicants' resumes simply because the tool was trained on past successful employees, and those employees turned out to be mostly male.

The thing is, an AI cannot afford to be biased, yet exposing it to biased data will ensure it takes that bias on board.

We are now left with the following dilemma. We all want to have access to artificial intelligence that will assist in our workplaces and create consistent and objective results. However, to assist with its learning, all we have is data that already exists, and this data is often far from objective.   

Since the data we have access to when creating AI tools is biased in nature, is there a way we can strip that bias out, so our AI tools start with a clean slate, free from discrimination?

As AI bias is a real concern that simply cannot be accepted, a number of tools have been created to erase it. For example, IBM has recently launched a tool that will scan for bias in AI algorithms and recommend adjustments in real time.   

Named the AI Fairness 360 Python package, it includes a comprehensive set of metrics for testing datasets and models for bias, explanations of those metrics and, importantly, algorithms to mitigate bias in datasets and models. Tools like this are increasingly in demand as AI grows in use and starts making more important decisions on our behalf.
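To illustrate the kind of check such toolkits perform, here is a minimal sketch in plain Python (not the AI Fairness 360 API itself) of disparate impact, one of the fairness metrics the package provides. The hiring data and group labels below are entirely hypothetical:

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favourable-outcome rates: unprivileged group / privileged group.

    A value near 1.0 suggests parity; values well below 1.0 suggest the
    unprivileged group receives favourable outcomes far less often.
    """
    def favourable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return favourable_rate(unprivileged) / favourable_rate(privileged)


# Hypothetical screening results: 1 = shortlisted, 0 = rejected
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

di = disparate_impact(outcomes, groups, unprivileged="f", privileged="m")
print(round(di, 2))  # → 0.25, well below the common 0.8 "four-fifths" threshold
```

A score this low would flag the screening model for review; AI Fairness 360 pairs metrics like this with mitigation algorithms (such as reweighing the training data) to push the ratio back towards parity.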

We simply cannot rely on AI that has shown bias in any way, shape or form; otherwise, the fairness underpinning our democracy is eroded simply for the sake of a technological solution we created to make our lives easier.
