AI Ethics Part 2: Mitigating bias in our algorithms

Kshira Saagar

  • Chief data officer, Latitude Financial Services
Kshira Saagar (the K is silent, as in 'knight') is currently the Chief Data Officer of Latitude Financial Services and has spent 22.3% of his life helping key decision-makers and CxOs make smarter decisions using data. He strongly believes that every organisation can become truly data-driven. Outside work, Kshira spends a lot of time advancing data literacy initiatives for high school and undergraduate students.

As we start to rely more on AI to make critical business decisions, it’s essential to ensure they are ethically made and are free from unjust biases.

In the first part of this article series, we explored the various forms of AI bias and ways to understand and identify them. This second part covers tangible measures that can be taken to control, mitigate or remove those biases.

It is worth noting that while all of these measures are effective, not every organisation needs to employ all of them in one go. Depending on the organisation's AI maturity and adoption status, each mitigation measure can be deployed as needed.

Controlling for AI bias through tangible measures

A quick recap of the three kinds of bias that exist in our algorithms: dataset bias, modelling bias and human confirmation bias. To mitigate these three types of algorithmic bias, the following measures are a good place to start.

Dataset bias mitigation: Design standardised fairness metrics for all input datasets to assess for equal opportunity, equalised odds, and fairness through unawareness – coupled with data quality assessment.

  • Disparate impact ratio: The ratio of the rate of a favourable outcome for the unprivileged group to that of the privileged group.
  • Statistical parity difference: The difference in the rate of favourable outcomes received by the unprivileged group to the privileged group.
  • Attribute classification: Defining an exhaustive enterprise list of ‘protected attributes’ that can never be utilised in an algorithm viz. age, gender, ethnicity and other discriminating biased variables.
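The first two metrics above can be sketched in a few lines of plain Python. The column names ("gender", "approved") and the loan-approval records below are purely illustrative, not from any real dataset:

```python
# Minimal sketch of two standard dataset fairness metrics, assuming a
# binary favourable outcome (1 = favourable) and a binary protected
# attribute. All names and data here are illustrative only.

def favourable_rate(rows, attr, value, outcome):
    """Share of favourable outcomes within the group where rows[attr] == value."""
    group = [r[outcome] for r in rows if r[attr] == value]
    return sum(group) / len(group)

def disparate_impact_ratio(rows, attr, unprivileged, privileged, outcome):
    # Ratio of favourable-outcome rates: unprivileged / privileged.
    # A common rule of thumb flags ratios below 0.8 for review.
    return (favourable_rate(rows, attr, unprivileged, outcome) /
            favourable_rate(rows, attr, privileged, outcome))

def statistical_parity_difference(rows, attr, unprivileged, privileged, outcome):
    # Difference of favourable-outcome rates: unprivileged - privileged.
    # Zero means parity; negative values disadvantage the unprivileged group.
    return (favourable_rate(rows, attr, unprivileged, outcome) -
            favourable_rate(rows, attr, privileged, outcome))

# Illustrative loan-approval records (made-up data).
rows = [
    {"gender": "F", "approved": 1}, {"gender": "F", "approved": 0},
    {"gender": "F", "approved": 0}, {"gender": "F", "approved": 1},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 0},
]
print(round(disparate_impact_ratio(rows, "gender", "F", "M", "approved"), 3))        # 0.667
print(round(statistical_parity_difference(rows, "gender", "F", "M", "approved"), 3))  # -0.25
```

In practice these checks would run as part of the standardised assessment of every input dataset, with agreed thresholds deciding when a dataset is escalated for review.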

Modelling bias mitigation: Conduct regular automated model reviews, coupled with sporadic ‘human-in-the-loop’ detailed manual reviews of high impact algorithms and their techniques.

  • Never use an algorithm that cannot be explained, such as black-box algorithms.
  • When a new algorithm is built, ensure it passes a customised model fairness test.
  • Build automated controls testing and flag for attention when a model drifts off course.
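The drift flag in the last point can be sketched with a population stability index (PSI) check on a model's score distribution. The 0.2 threshold and ten bins below are common conventions, not fixed standards, and the score samples are made up for illustration:

```python
# Hedged sketch of an automated drift check using the population
# stability index (PSI). Compares a baseline score distribution
# (captured at deployment) against current production scores.
import math

def psi(expected, actual, bins=10):
    """Population stability index between a baseline and a current sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def share(sample, b):
        # Share of the sample falling in bin b; the top bin includes hi.
        count = sum(1 for x in sample
                    if lo + b * width <= x < lo + (b + 1) * width
                    or (b == bins - 1 and x == hi))
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    return sum((share(actual, b) - share(expected, b)) *
               math.log(share(actual, b) / share(expected, b))
               for b in range(bins))

baseline = [0.1 * i for i in range(100)]        # scores at deployment time
current = [0.1 * i + 2.0 for i in range(100)]   # shifted scores today

# Common rule of thumb: PSI above 0.2 warrants investigation.
if psi(baseline, current) > 0.2:
    print("Model drift detected - flag for human review")
```

A scheduled job running a check like this against every production model is one concrete way to make the "automated controls testing" above real, with the human-in-the-loop review triggered only when the flag fires.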

Human confirmation bias mitigation: Implement an AI Fairness policy that details how algorithms are employed in critical scenarios, along with delineated responsibilities for humans and AI in each scenario.

  • Create an AI Ethics Charter with consultation from the wider enterprise to ensure distinct accountabilities and responsibilities are set for humans and AI, in applying an algorithm’s output.
  • Learn from global regulatory and standards work, such as the IEEE's AI ethics standards, EU regulations on AI, Chinese regulations on AI, Australian standards on responsible AI, Human Rights Commission studies and more.
  • Using the data literacy program, increase awareness of the risks associated with using AI and algorithms, along with the mitigation strategies and governance processes around them.

Solving for AI Bias is not just a technical challenge

The first rule of mitigating AI bias is to understand that it is not only about the data, tech or algorithms. Even with the best dataset bias mitigation tools and the fairest algorithms, most ethics failures happen at the last mile: implementation. When humans are involved in the final decision-making, that is always the hardest step of the process to control or mitigate. So any AI ethics conversation should start with the humans in the process and work backwards to the technology solutions.

Assuming enough awareness is created, and humans all sign up to do the right thing and swear by the new AI Hippocratic Oath, the first port of call is not 'explainable AI', as all the experts would have you believe. It's understanding the bias in the data we use. Even with the most explainable and fairest of AI algorithms, if the input data is biased and unfair, the eventual outcome will be unfair.

The first question all data and business teams need to ask after committing to do the right thing is: How clean/fair is my data?

Bringing fairness to life via powerful forums

While the mitigation measures above control for the technical part of the algorithms, it is still imperative for organisations to have real-life forums that discuss, debate and decide which algorithms can eventually be used. And critically, organisations need to stand by their ethical choices, just as with any other ethical decision-making process involving finances, people or technology. Some examples:

AI oversight forums

In this age of pervasive machine learning and AI applications across organisations, there is an imperative need for ‘AI oversight forums’ to account for AI bias and fairness in their key algorithmic decisions. While the accountability of building fair AI systems will rest with data and machine learning teams, oversight and guidance on fairness measures and bias mitigation will need to be driven by these oversight groups, composed of a cross-section of the organisation.

It is worth remembering that bringing stereotypes to the fore and asking tough questions of an algorithm and its outputs is a fundamental expectation of these oversight forums. If there is nothing to hide, then the data and algorithms should be able to either explain their outputs or call out their opaqueness.

For example, when a marketing algorithm identifies a cohort of people most likely to be targeted for their next luxury item purchase and analysis shows almost all of them are women, someone should ask: "Are we unfairly targeting women, or is this a genuine need for them? What does the data tell us about women's needs that justifies this?" It is then important to log these decisions for everyone to know and follow. These forums are even more critical than the tech and data measures.

AI ethics charters

Charters work because nothing trumps the clarity of the written word and the mandates it lays down for a particular domain. AI fairness and ethics is one such domain, and it is worth having a dedicated charter on AI systems that covers the right to reasonable inferences, transparency and explainability in critical decision-making, and market and personal impact assessments of the algorithms used by critical decisioning systems. This can be driven by the oversight forums.

For example, the Australian Government has done pioneering work in defining solid starting principles in Australia's Artificial Intelligence Ethics Framework, along with real-world use cases from several leading Australian organisations.


Bias in our algorithms poses a significant threat not only to customers and others for whom decisions are made, but also creates legal risks and liabilities for organisations. Ensuring fairness in our AI systems will be an ongoing process, requiring a combination of technology, process and people changes to succeed. It is recommended that organisations:

  • Build fairness and bias metrics to account for dataset bias mitigation – and implement them as part of the current data discovery and cataloguing process.
  • Define and run a model governance process to account for new-age algorithms, and implement the 'human-in-the-loop' aspect of evaluating key AI algorithms.
  • Create an AI Ethics Framework with inputs from the best regulations in the world on responsible AI and contribute to the upcoming Australian regulations on Responsible AI.
  • Integrate AI governance as part of Audit oversight functions and enable the organisational leadership to monitor AI fairness measures in key decisioning systems.
