AI Ethics Part 1: Understanding and identifying bias in our algorithms

Kshira Saagar

  • Chief Data Officer, Latitude Financial Services
Kshira Saagar (the K is silent, as in 'knight') is currently the Chief Data Officer of Latitude Financial Services and has spent 22.3% of his life helping key decision-makers and CxOs make smarter decisions using data. He strongly believes that every organisation can become truly data-driven. Outside work, Kshira spends a lot of time advancing data literacy initiatives for high school and undergraduate students.

AI (Artificial Intelligence) driven algorithms are becoming an essential part of modern businesses. As we start to rely more on AI to make critical decisions in our organisations, it becomes essential to ensure they are ethically made and are free from unjust biases. There is a need for Responsible AI systems that make transparent, explainable and accountable decisions, with a conscious understanding of the various AI and data biases that can undermine them.

This article explores the various forms of AI bias and ways to understand and identify them in our algorithms and other decision-making tools. It is worth noting that AI bias can result in unfairness, which in some situations can amount to unlawful discrimination or other forms of illegality. Governing algorithmic bias is now more pertinent than ever, as commoditised machine learning lets organisations deliver hyper-personalised customer experiences at scale through recommendation engines, credit decisioning tools and more.

What is AI bias?

AI bias describes systematic and repetitive errors in a digital system that lead to unfair outcomes, such as privileging one arbitrary group of users over others. Bias can manifest due to various factors - the design of the algorithm, poor or inaccurate data collection, or worst of all, unfair and biased people operating the decisioning systems. AI bias is prevalent in all systems in our modern society that use supportive 'artificial intelligence', ranging as widely as search engines, social media platforms, credit decisioning, law enforcement and recruitment.

It is worth noting that AI bias can be both intentional and unintentional - the common myth that bias is always intentional leads to incorrect or inadequate control measures. For example, a credit scoring algorithm that approves loans for one group of users but denies them to a nearly identical group, based on criteria unrelated to their finances, is a biased algorithm if this behaviour repeats systematically across multiple occurrences. Both intentional and unintentional AI bias can be construed as unlawful and discriminatory.
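The credit scoring example above can be made concrete with a minimal sketch that compares approval rates between two groups of similar applicants. This is an illustrative check only, not the author's method or a legal test; the group data and the 20-percentage-point review threshold are invented for demonstration.

```python
# Illustrative sketch: flagging a possibly biased credit decisioning outcome
# by comparing approval rates between two groups of similar applicants.
# The data and the 0.2 threshold are hypothetical, not a legal standard.

def approval_rate(decisions):
    """Share of approved (True) decisions in a list of booleans."""
    return sum(decisions) / len(decisions)

def disparity(group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Two groups of nearly identical applicants (True = loan approved)
group_a = [True, True, True, False, True]    # 4 of 5 approved (80%)
group_b = [True, False, False, False, True]  # 2 of 5 approved (40%)

gap = disparity(group_a, group_b)
print(f"approval-rate gap: {gap:.0%}")
if gap > 0.2:  # illustrative review threshold
    print("flag for human review: possible systematic bias")
```

A single large gap is not proof of bias on its own; as the text notes, the behaviour must repeat systematically before the algorithm itself can be called biased.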

Understanding and identifying AI bias

Bias can be introduced to an algorithm in several ways. The three broad types of bias are:

1. Dataset bias – Inaccuracies or systematic errors in the dataset used for the algorithm, namely:

  • Historical bias is the pre-existing bias in the world that has seeped into our data.
  • Representation bias results from incorrect sampling of a population to create a dataset.
  • Measurement bias occurs when biased proxy metrics are used in lieu of protected variables.

2. Modelling bias – Incorrect algorithm design and unexplainable techniques used, namely:

  • Evaluation bias occurs when an algorithm is evaluated using incorrect parameters.
  • Aggregation bias happens when one algorithm is force-fit to diverse sets of people.

3. Human Confirmation bias - Even if the algorithm makes unbiased predictions, a human reviewer can introduce their own biases when they accept or disregard the output.

A human reviewer might override a fair outcome, based on their own systemic bias. An example could be: “I know that demographic, and they never perform well. So, the algorithm must be wrong.”
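Of the three types above, representation bias is among the easiest to test for directly. The sketch below compares each group's share of a training dataset with its share of the target population; the group names, counts and the 10-percentage-point tolerance are hypothetical assumptions for illustration, not figures from the article.

```python
# Illustrative representation-bias check: compare each group's share of the
# training dataset against its share of the target population.
# Groups, counts and the 0.10 tolerance are hypothetical.

def representation_gaps(dataset_counts, population_shares):
    """Return each group's dataset share minus its population share."""
    total = sum(dataset_counts.values())
    return {
        group: dataset_counts[group] / total - population_shares[group]
        for group in population_shares
    }

dataset_counts = {"18-34": 700, "35-54": 250, "55+": 50}     # rows per group
population_shares = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

for group, gap in representation_gaps(dataset_counts, population_shares).items():
    status = "over/under-represented" if abs(gap) > 0.10 else "ok"
    print(f"{group}: {gap:+.0%} vs population -> {status}")
```

Here the youngest group supplies 70% of the data despite being 30% of the population, so a model trained on it may perform poorly for everyone else - the essence of representation bias.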

At a fundamental level, bias is inherently present in the world around us and encoded into our society. While we can’t directly solve the bias in the real world, we can take measures to weed out bias from our data, our models, and our human review process.

Key risks from biased AI usage

While algorithms serve to make decisioning smarter and more relevant for humans and organisations, their usage also carries significant risks, chief among them developing an erroneous understanding of individuals. The risks that stand out are:

  • Errors which unfairly disadvantage people by reference to their sex, race, age or other such characteristics, and which could be construed as propagating systemic bias.
  • Unfair results that may, in some situations, meet the technical legal definition of unlawful discrimination. Regardless of the strict legal position, this bias could disproportionately affect people who already experience disadvantage in society.
  • Harms whose seriousness depends on the context in which they arise: the consequences of unfair outcomes are more serious when equal access to an essential right or service is at stake.
  • Opaque consumer targeting and price discrimination, which the Australian Competition and Consumer Commission (ACCC) describes as 'risks from increased consumer profiling' and 'discrimination and exclusion'.
  • Reduced consumer trust and engagement, owing to limited consumer choice and control driven by unfair algorithms, and the perception that organisations employ such algorithms.

Tenets to live by for Responsible AI Systems: REAL

Every organisation journeying on the path of building its machine learning and AI muscle must not only resolve to build responsible AI systems that follow the four tenets of good governance - Reproducibility, Explainability, Accountability and Learnability - but also create forums to keep teams and stakeholders accountable for the outcomes of these algorithms.

Reproducible – An algorithm's input data and process must be repeatable for any point in time, and be built on a standardised, enterprise-wide architecture for operationalisation.

Explainable – Outcomes of the algorithm must be explainable to both technical and non-technical users and hold up to logical and legal scrutiny.

Accountable – Algorithms must be built with appropriate governing metrics for bias and fairness, and a 'human-in-the-loop' must exist to validate the outcomes.

Learnable – Algorithms must be able to learn and relearn via a secure feedback loop, while also constantly monitoring for drift/change in expected model behaviour.
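The drift monitoring called for by the Learnable tenet can be sketched with the Population Stability Index (PSI), a common way to compare a model input's distribution in production against its distribution at training time. This is one possible approach, not the author's prescribed tooling; the bucketed distributions are invented, and the 0.2 alert level is a widely used rule of thumb rather than a fixed standard.

```python
# Illustrative drift monitor for the 'Learnable' tenet, using the
# Population Stability Index (PSI). Distributions are hypothetical;
# 0.2 is a common rule-of-thumb alert level, not a standard.
import math

def psi(expected, actual):
    """PSI between two bucketed distributions (lists of proportions)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0  # skip empty buckets to avoid log(0)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at training time
current = [0.10, 0.20, 0.30, 0.40]   # score distribution in production

drift = psi(baseline, current)
print(f"PSI = {drift:.3f}")
if drift > 0.2:  # rule-of-thumb alert level
    print("significant drift: review and retrain the model")
```

A PSI near zero means the production population still resembles the training population; values above roughly 0.2 are typically treated as a signal that the model's expected behaviour may have changed.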

What Now?

Bias in our algorithms poses a significant threat not only to the customers and other people about whom decisions are made, but also creates legal risks and liabilities for organisations. Ensuring fairness in our AI systems will be an ongoing process, requiring a combination of technology, process and people changes to succeed.

In the next article, we'll look at the people, technology and process driven methods to mitigate this bias in our algorithms, along with tangible examples that everyone can implement at their organisations to MYOD - Make Your Organisation Data-Driven, in the truest sense of the phrase.

Tags: digital marketing, data-driven marketing, machine learning, data analytics, artificial intelligence (AI)
