AI Ethics Part 1: Understanding and identifying bias in our algorithms

Kshira Saagar

  • Chief data officer, Latitude Financial Services
Kshira Saagar (the K is silent like in a Knight) is currently the Chief Data Officer of Latitude Financial Services and has spent 22.3% of his life helping key decision-makers and CxOs make smarter decisions using data, and strongly believes that every organisation can become truly data-driven. Outside work, Kshira spends a lot of time on advancing data literacy initiatives for high school and undergrad students.

AI (Artificial Intelligence) driven algorithms are becoming an essential part of modern businesses. As we start to rely more on AI to make critical decisions in our organisations, it becomes essential to ensure they are ethically made and are free from unjust biases. There is a need for Responsible AI systems that make transparent, explainable and accountable decisions, with a conscious understanding of the various AI and data biases that can undermine them.

This article explores the various forms of AI bias and ways to understand and identify them in our algorithms and other decision-making tools. It is worth noting that AI bias can result in unfairness, which in some situations can amount to unlawful discrimination or other forms of illegality. Governing algorithmic bias is now more pertinent than ever for organisations, owing to the commoditisation of machine learning capability to deliver hyper-personalised customer experiences at scale, in the form of recommendation engines, credit decisioning tools and more.

What is AI bias?

AI bias describes systematic and repetitive errors in a digital system that lead to unfair outcomes, such as privileging one arbitrary group of users over others. Bias can manifest due to various factors: the design of the algorithm, poor or inaccurate data collection, or, worst of all, unfair and biased people operating the decisioning systems. AI bias is prevalent across all systems in modern society that use supportive ‘artificial intelligence’, ranging as widely as search engines, social media platforms, credit decisioning, law enforcement and recruitment.

It is worth noting that AI bias can be both intentional and unintentional; it is a common myth that bias can only be intentional, and that myth leads to incorrect or inadequate control measures. For example, a credit scoring algorithm that recommends loan approval for one group of users but denies loans to a nearly identical group based on unrelated financial criteria is biased, if this behaviour repeats systematically across multiple occurrences. Both intentional and unintentional AI bias can be construed as unlawful and discriminatory.
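The ‘nearly identical users’ test described above can be sketched in code. The following is a minimal, hypothetical illustration, assuming boolean approval decisions and a 5 per cent tolerance; the function names, cohorts and threshold are all assumptions for illustration, not a real credit model:

```python
# Hypothetical sketch: flag potential systematic bias by comparing approval
# rates across two cohorts of applicants that are near-identical on the
# financial criteria that should drive the decision.

def approval_rate(decisions):
    """Share of applicants approved (decisions are booleans)."""
    return sum(decisions) / len(decisions)

def disparity_check(group_a, group_b, tolerance=0.05):
    """True if approval rates differ by more than `tolerance`,
    i.e. the algorithm may be treating the cohorts unequally."""
    gap = abs(approval_rate(group_a) - approval_rate(group_b))
    return gap > tolerance

# Two matched cohorts with comparable credit profiles (illustrative data)
group_a = [True, True, True, False, True, True, True, True]
group_b = [True, False, False, True, False, False, True, False]

print(disparity_check(group_a, group_b))  # a large gap raises a bias flag
```

A single snapshot like this is only a starting point; in practice the check would be repeated over time, since the article's definition of bias hinges on the behaviour being systematic and repeatable.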

Understanding and identifying AI bias

Bias can be introduced to an algorithm in several ways. The three broad types of bias are:

1. Dataset bias – Inaccuracies or systematic errors in the dataset used for the algorithm, namely:

  • Historical bias is the pre-existing bias in the world that has seeped into our data.
  • Representation bias results from incorrect sampling of a population to create a dataset.
  • Measurement bias occurs when biased proxy metrics are used in lieu of protected variables.

2. Modelling bias – Incorrect algorithm design and unexplainable techniques used, namely:

  • Evaluation bias occurs when an algorithm is evaluated using incorrect parameters.
  • Aggregation bias happens when one algorithm is force-fit to diverse sets of people.

3. Human confirmation bias – Even if the algorithm makes unbiased predictions, a human reviewer can introduce their own biases when they accept or disregard the output.

A human reviewer might override a fair outcome based on their own systemic bias. An example could be: “I know that demographic, and they never perform well. So, the algorithm must be wrong.”

At a fundamental level, bias is inherently present in the world around us and encoded into our society. While we can’t directly solve the bias in the real world, we can take measures to weed out bias from our data, our models, and our human review process.

Key risks from biased AI usage

While these algorithms serve to make decisioning smarter and more relevant for people and organisations, there are also significant risks associated with their usage. Chief among them is the risk of developing an erroneous understanding of individuals; the risks that stand out are:

  • Errors which unfairly disadvantage people by reference to their sex, race, age or other such characteristics, which could be construed as propagating systemic bias.
  • Unfair results that may sometimes meet the technical legal definition of unlawful discrimination. Regardless of the strict legal position, this bias could disproportionately affect people who already experience disadvantage in society.
  • Risks of harm that must be considered in the context in which they arise: the consequences of unfair outcomes are more serious when they affect equal access to an essential right or service.
  • Risk of opaque consumer targeting and price discrimination. The Australian Competition and Consumer Commission (ACCC) describes these types of harm to consumers as ‘risks from increased consumer profiling’ and ‘discrimination and exclusion’.
  • Risk of reduced consumer trust and engagement, owing to limited consumer choice and control driven by unfair algorithms, and the perception that organisations employ such algorithms.

Tenets to live by for Responsible AI Systems: REAL

Every organisation journeying on the path of building its machine learning and AI muscle must not only resolve to build responsible AI systems that follow the four tenets of good governance (Reproducibility, Explainability, Accountability and Learnability), but also create forums to keep its teams and stakeholders accountable for the outcomes of these algorithms.

Reproducible – An algorithm’s input data and process must be repeatable for any point in time, and be built using standardised enterprise-wide architecture for operationalisation.

Explainable – Outcomes of the algorithm must be explainable to both technical and non-technical users and hold up to logical and legal scrutiny.

Accountable – Algorithms must be built within the appropriate governing metrics for bias and fairness, and ensure a 'human-in-the-loop' exists to validate the outcomes.

Learnable – Algorithms must be able to learn and relearn via a secure feedback loop, while also constantly monitoring for drift/change in expected model behaviour.
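The drift monitoring mentioned under Learnable can be sketched with a Population Stability Index (PSI), a common measure of distribution shift. The score buckets and the 0.2 alert threshold below are illustrative rules of thumb, not fixed standards:

```python
# Hypothetical sketch of the "Learnable" tenet: monitoring for drift in model
# behaviour by comparing the current score distribution against a baseline
# captured at deployment, using a Population Stability Index (PSI).

import math

def psi(expected, actual):
    """PSI between two bucketed distributions (lists of proportions).
    Larger values indicate a bigger shift from the baseline."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at deployment
current  = [0.10, 0.20, 0.30, 0.40]   # distribution observed this month

drift = psi(baseline, current)
print(f"PSI = {drift:.3f}, drift alert: {drift > 0.2}")
```

When such an alert fires, the secure feedback loop described above is where the model would be retrained or re-examined by its human-in-the-loop before decisions continue.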

What Now?

Bias in our algorithms poses a significant threat not only to customers and other people about whom decisions are made, but also creates legal risks and liabilities for organisations. Ensuring fairness in our AI systems will be an ongoing process, with a combination of technology, process and people changes needed to make them successful.

In the next article, we’ll look at the people, technology and process driven methods to mitigate this bias in our algorithms, along with tangible examples that everyone can implement at their organisation to MYOD – Make Your Organisation Data-Driven, in the truest sense of the word.
