AI Ethics Part 2: Mitigating bias in our algorithms

Kshira Saagar

  • Chief data officer, Latitude Financial Services
Kshira Saagar (the K is silent like in a Knight) is currently the Chief Data Officer of Latitude Financial Services and has spent 22.3% of his life helping key decision-makers and CxOs make smarter decisions using data, and strongly believes that every organisation can become truly data-driven. Outside work, Kshira spends a lot of time on advancing data literacy initiatives for high school and undergrad students.

As we start to rely more on AI to make critical business decisions, it’s essential to ensure they are ethically made and are free from unjust biases.

In the first part of this article series, we explored the various forms of AI bias and ways to understand and identify them. This second part covers tangible measures that can be taken to control, mitigate or remove these biases.

It is worth noting that while all these measures are effective, not every organisation needs to employ all of them at once. Depending on an organisation’s AI maturity and adoption status, each of these mitigation measures can be deployed as needed.

Controlling for AI bias through tangible measures

A quick recap of the three kinds of biases that exist in our algorithms: dataset bias, modelling bias and human confirmation bias. To mitigate these three types of algorithmic bias, the following measures are a good place to start.

Dataset bias mitigation: Design standardised fairness metrics for all input datasets to assess for equal opportunity, equalised odds, and fairness through unawareness – coupled with data quality assessment.

  • Disparate impact ratio: The ratio of the rate of a favourable outcome for the unprivileged group to that of the privileged group.
  • Statistical parity difference: The difference between the rate of favourable outcomes received by the unprivileged group and that of the privileged group.
  • Attribute classification: Defining an exhaustive enterprise list of ‘protected attributes’ that can never be used in an algorithm, e.g. age, gender, ethnicity and other potentially discriminatory variables.
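The two fairness metrics above can be computed directly from group-level outcomes. A minimal sketch follows; the function names and the sample loan-approval data are illustrative assumptions, not part of the original article:

```python
def favourable_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(unprivileged, privileged):
    # Ratio of favourable-outcome rates: unprivileged / privileged.
    # A value well below 1.0 suggests the unprivileged group is disadvantaged.
    return favourable_rate(unprivileged) / favourable_rate(privileged)

def statistical_parity_difference(unprivileged, privileged):
    # Difference in favourable-outcome rates: unprivileged - privileged.
    # A value near 0.0 indicates parity between the groups.
    return favourable_rate(unprivileged) - favourable_rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved, 0 = declined).
unpriv = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% approved
priv = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]     # 70% approved

print(round(disparate_impact_ratio(unpriv, priv), 3))       # 0.429
print(round(statistical_parity_difference(unpriv, priv), 3))  # -0.4
```

Libraries such as IBM’s AI Fairness 360 ship production-grade versions of these metrics; the point of the sketch is that both reduce to simple comparisons of per-group favourable-outcome rates, which makes them easy to standardise across input datasets.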

Modelling bias mitigation: Conduct regular automated model reviews, coupled with sporadic ‘human-in-the-loop’ detailed manual reviews of high impact algorithms and their techniques.

  • Never use an algorithm that cannot be explained, such as a black-box algorithm.
  • When a new algorithm is built, ensure it passes a customised model fairness test.
  • Build automated controls testing and flag for attention when a model drifts off course.

Human confirmation bias mitigation: Implement an AI Fairness policy that details how algorithms are employed in critical scenarios, along with delineated responsibilities for humans and AI in each scenario.

  • Create an AI Ethics Charter with consultation from the wider enterprise to ensure distinct accountabilities and responsibilities are set for humans and AI, in applying an algorithm’s output.
  • Learn from global regulatory policies and standards, such as those of the Institute of Electrical and Electronics Engineers (IEEE), EU regulations on AI, Chinese regulations on AI, Australian standards on responsible AI, Human Rights Commission studies and more.
  • Using the data literacy program, increase awareness of the risks associated with using AI and algorithms, along with the mitigation strategies and governance processes around them.

Solving for AI Bias is not just a technical challenge

The first rule of mitigating AI bias is to understand that it is not only about the data, technology or algorithms. Even with the best dataset bias mitigation tools and the fairest algorithms, most ethics failures happen at the last mile – the implementation. When humans are involved in the final decision-making, that is always the hardest step of the process to control or mitigate. So any AI ethics conversation should start with the humans in the process and work backwards to the technology solutions.

Assuming sufficient awareness is created, and humans all sign up to do the right thing and swear by the new AI Hippocratic Oath, the first port of call is not ‘explainable AI’, as many experts would have you believe. It is understanding the bias in the data we use. Even with the most explainable and fairest of AI algorithms, if the input data is biased and unfair, the eventual outcome will be unfair.

The first question all data and business teams need to ask after committing to do the right thing is: How clean/fair is my data?
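One concrete way to start answering that question is to screen candidate model features against the enterprise list of protected attributes described earlier. A minimal sketch follows; the attribute list and function name are illustrative assumptions (a real screen would also need to consider proxy variables, such as postcode standing in for ethnicity):

```python
# Hypothetical protected-attribute register maintained by the enterprise.
PROTECTED = {"age", "gender", "ethnicity", "postcode"}

def screen_features(feature_names, protected=PROTECTED):
    """Return any candidate model features that appear on the
    protected-attribute list and must be excluded before training."""
    return sorted(set(name.lower() for name in feature_names) & protected)

violations = screen_features(["income", "Gender", "tenure", "age"])
print(violations)  # ['age', 'gender']
```

Run as a gate in the data discovery or cataloguing process, a check like this makes ‘fairness through unawareness’ auditable rather than a matter of individual discipline.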

Bringing fairness to life via powerful forums

While the mitigation measures above control for the technical part of the algorithms, it is still imperative for organisations to have real-life forums that discuss, debate and decide which algorithms can eventually be used. Critically, organisations need to stand by their ethical choices, just as with any other ethical decision-making process involving finances, people or technology. Some examples:

AI oversight forums

In this age of pervasive machine learning and AI applications across organisations, there is an imperative need for ‘AI oversight forums’ to account for AI bias and fairness in their key algorithmic decisions. While the accountability of building fair AI systems will rest with data and machine learning teams, oversight and guidance on fairness measures and bias mitigation will need to be driven by these oversight groups, composed of a cross-section of the organisation.

It is worth remembering that bringing stereotypes to the fore and asking tough questions of the algorithm and its outputs is a fundamental expectation of these oversight forums. If there is nothing to hide, the data and algorithms should be able either to explain their outputs or to call out their opaqueness.

For example, when a marketing algorithm identifies a cohort of people most likely to be targeted for their next luxury purchase and analysis shows almost all of them are women, someone should ask: “Are we unfairly targeting women, or is this a genuine need for them? What does the data tell us about women’s needs that justifies this?” It is then important to log these decisions for everyone to know and follow. These forums are even more critical than the technical and data measures.

AI ethics charters

Charters work because nothing trumps the clarity of the written word and the mandates it lays down for a particular domain. AI fairness and ethics is one such domain, and it would be good to have a dedicated charter on AI systems that covers a right to reasonable inferences, transparency and explainability in critical decision-making, and market and personal impact assessments of algorithms used by critical decisioning systems. This can be driven by the oversight forums.

For example, the Australian Government has done some pioneering work in defining solid starting principles in Australia’s Artificial Intelligence Ethics Framework, along with real-world use cases from several leading Australian organisations.


Bias in our algorithms poses a significant threat not only to customers and others for whom decisions are made, but also creates legal risks and liabilities for organisations. Ensuring fairness in our AI systems will be an ongoing process, with a combination of technology, process and people changes needed to make it successful. It is recommended that organisations:

  • Build fairness and bias metrics to account for dataset bias mitigation – and implement them as part of the current data discovery and cataloguing process.
  • Define and run a Model Governance process to account for new-age algorithms and implement the ‘human-in-the-loop’ aspect of evaluating key AI algorithms.
  • Create an AI Ethics Framework with inputs from the best regulations in the world on responsible AI and contribute to the upcoming Australian regulations on Responsible AI.
  • Integrate AI governance as part of Audit oversight functions and enable the organisational leadership to monitor AI fairness measures in key decisioning systems.
