Is artificial intelligence riddled with bias?

Katja Forbes

  • Managing director of Designit, Australia and New Zealand
Katja is an Australian pioneer in the field of experience design and all its components. Back in 2014, Katja founded syfte, a specialist business in research and experience design acquired by Wipro in 2018. She was then appointed Australian MD of Designit. Katja was also a co-founding member of the Interaction Design Association (IxDA) board in Sydney, helping build a community of over 1600 designers. Today, Katja is international director on IxDA’s global board. Katja is a sought-after speaker for organisations including Women In Design, Leaders in Heels, Women In Commerce, Code like a Girl, Telstra and Macquarie Bank. Katja was recognised as a Top 10 Australian Women Entrepreneurs 2018 by My Entrepreneur Magazine and one of the 100 Women of Influence by Westpac and the Australian Financial Review in 2016.


The purpose of Artificial Intelligence (AI) has always been to take over the menial and repetitive tasks we do each day in every sector, so that we can concentrate on doing what we do best. Saving time and money has certainly been a welcome outcome as AI infiltrates the business landscape; however, we are now starting to see problems that cause major issues in practice.

One of the first major recorded examples of bias in an AI solution dates back to 2014, when Amazon created an AI tool to evaluate job seekers’ resumes. The tool worked well aside from one major flaw: it turned out to be biased towards men.

In other words, the algorithm Amazon staff had spent years creating housed this fatal flaw. In the end, the only thing that could be done was to abandon the AI altogether, as Amazon couldn’t guarantee it wouldn’t show bias toward male applicants.

The discrimination shown by the AI tool puzzled many at first. Obviously, it wasn’t deliberately created to discriminate against any job seeker on the basis of sex, age or nationality; the discrimination emerged organically. The developers of the tool realised they had trained it on the resumes of everyone who had secured positions over the past 10 years, and, naturally, more men had applied for tech-related positions and been successful, simply because that is how this male-dominated industry has always been skewed.

This example demonstrates the problem with creating AI tools that use the past as the template for a machine-operated future. The aim is clearly a future where menial tasks no longer factor in and staff can focus on more important, skilled work. However, it is easy to see how unwanted biases can creep in while these tools are being built, such as a strong preference for resumes from male applicants simply because the tool was trained on past successful hires, and those hires turned out to be mostly male.
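To make the mechanism concrete, here is a toy sketch (with invented data, nothing to do with Amazon’s actual system) of how a naive scoring model that simply replays historical hiring outcomes inherits whatever skew those outcomes contain:

```python
# Toy illustration with invented data: a naive "model" that scores
# applicants purely from historical hiring outcomes. Because the
# history is male-skewed, the learned scores inherit that skew.
historical_hires = [
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
]

def hire_rate(group):
    """Fraction of historical applicants from this group who were hired."""
    outcomes = [hired for g, hired in historical_hires if g == group]
    return sum(outcomes) / len(outcomes)

# The "model" simply replays historical hire rates as scores.
scores = {g: hire_rate(g) for g in ("male", "female")}
print(scores)  # male applicants score higher purely because of the biased history
```

No one wrote a rule preferring men; the preference falls straight out of the training data, which is exactly the failure mode the Amazon example describes.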

The thing is, an AI cannot afford to be biased, yet exposing it to biased data all but guarantees it will take that bias on board.

We are now left with the following dilemma. We all want access to artificial intelligence that will assist in our workplaces and produce consistent, objective results. However, to teach it, all we have is data that already exists, and this data is often far from objective.

Since the data we use to create AI tools is biased in nature, is there a way to strip that bias back, so our AI tools start with a clean slate, free from discrimination?

As AI bias is a real concern that simply cannot be accepted, a number of tools have been created to address it. For example, IBM has launched a tool that scans for bias in AI algorithms and recommends adjustments in real time.

Named the AI Fairness 360 Python package, it includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and, importantly, algorithms to mitigate bias in datasets and models. Tools like this are increasingly in demand as AI grows in use and starts making more important decisions on our behalf.
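AI Fairness 360 itself ships dozens of metrics; as a simplified sketch of the kind of dataset metric such toolkits compute, the snippet below implements one of the best known, the disparate impact ratio, in plain Python. The group labels and outcomes here are invented for illustration, not drawn from any real dataset or from AIF360’s own API:

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favourable-outcome rates: unprivileged over privileged.

    A value of 1.0 means parity; values below 0.8 are commonly flagged
    under the "four-fifths rule" used in US employment-law guidance.
    """
    def rate(group):
        rows = [o for g, o in zip(groups, outcomes) if g == group]
        return sum(rows) / len(rows)

    unprivileged = (set(groups) - {privileged}).pop()
    return rate(unprivileged) / rate(privileged)

# Invented example: 1 = favourable outcome (hired), 0 = rejected.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]

di = disparate_impact(outcomes, groups, privileged="m")
print(round(di, 2))  # well below the 0.8 threshold, so this dataset would be flagged
```

A scanner like IBM’s runs checks of this kind continuously and, when a metric crosses a threshold, recommends mitigations such as reweighting the training data.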

We simply cannot rely on AI that has shown bias in any way, shape or form, lest our entire democracy be eroded for the sake of a technological solution we created to make our lives easier.

 

 

Tags: digital marketing, customer experience management
