AI Ethics Part 2: Mitigating bias in our algorithms

Kshira Saagar

  • Chief data officer, Latitude Financial Services
Kshira Saagar (the ‘K’ is silent, as in ‘knight’) is currently the Chief Data Officer of Latitude Financial Services. He has spent 22.3% of his life helping key decision-makers and CxOs make smarter decisions using data, and strongly believes that every organisation can become truly data-driven. Outside work, Kshira spends a lot of time advancing data literacy initiatives for high school and undergraduate students.


As we rely more on AI to make critical business decisions, it is essential to ensure those decisions are made ethically and are free from unjust bias.

In the first part of this article series, we explored the various forms of AI bias and ways to understand and identify them. This second part covers tangible measures that can be taken to control, mitigate or remove these biases.

It is worth noting that while all these measures are effective, not every organisation needs to employ all of them at once. Depending on an organisation’s AI maturity and adoption status, each mitigation measure can be deployed as needed.

Controlling for AI bias through tangible measures

A quick recap of the three kinds of bias that exist in our algorithms: dataset bias, modelling bias and human confirmation bias. The following measures are a good place to start in mitigating each of them.

Dataset bias mitigation: Design standardised fairness metrics for all input datasets to assess for equal opportunity, equalised odds, and fairness through unawareness – coupled with data quality assessment.

  • Disparate impact ratio: The ratio of the rate of a favourable outcome for the unprivileged group to that of the privileged group.
  • Statistical parity difference: The difference between the rate of favourable outcomes received by the unprivileged group and that of the privileged group.
  • Attribute classification: Defining an exhaustive, enterprise-wide list of ‘protected attributes’ that can never be used in an algorithm – for example age, gender, ethnicity and other potentially discriminating variables.
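The two statistical metrics above map to direct formulas. As a minimal sketch in pure Python – assuming binary predictions where 1 is the favourable outcome, and a binary group indicator where 1 marks the unprivileged group (the helper names are illustrative, not a standard API):

```python
def favourable_rate(y_pred, protected, group):
    """Share of favourable outcomes (1s) received by one group."""
    outcomes = [y for y, p in zip(y_pred, protected) if p == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(y_pred, protected):
    """Unprivileged rate divided by privileged rate.
    A common rule of thumb flags ratios below 0.8 (the '80% rule')."""
    return favourable_rate(y_pred, protected, 1) / favourable_rate(y_pred, protected, 0)

def statistical_parity_difference(y_pred, protected):
    """Unprivileged rate minus privileged rate; 0 means parity."""
    return favourable_rate(y_pred, protected, 1) - favourable_rate(y_pred, protected, 0)

# Toy loan-approval outcomes: 1 = approved; protected: 1 = unprivileged group
y_pred    = [1, 1, 0, 1, 0, 1, 0, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact_ratio(y_pred, protected))        # 0.25 / 0.75 ≈ 0.33
print(statistical_parity_difference(y_pred, protected)) # 0.25 - 0.75 = -0.5
```

In a real pipeline, checks like these would run alongside the data quality assessment mentioned above, as part of the standardised fairness metrics applied to every input dataset.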

Modelling bias mitigation: Conduct regular automated model reviews, coupled with sporadic ‘human-in-the-loop’ detailed manual reviews of high impact algorithms and their techniques.

  • Never use an algorithm that cannot be explained, such as black-box algorithms.
  • When a new algorithm is built, ensure it passes a customised model fairness test.
  • Build automated controls testing and flag models for attention when they drift off course.
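The automated controls in the last bullet can be as simple as comparing a live fairness metric against the value recorded at deployment. A sketch of one such check – the function name, tolerance and 0.8 floor are illustrative assumptions, not prescriptions from this article:

```python
def check_fairness_drift(baseline_ratio, current_ratio, tolerance=0.1, floor=0.8):
    """Flag a model for 'human-in-the-loop' review when its disparate impact
    ratio drifts beyond the agreed tolerance, or falls below a hard floor."""
    drift = abs(current_ratio - baseline_ratio)
    return {
        "drift": round(drift, 4),
        "needs_review": drift > tolerance or current_ratio < floor,
    }

# Baseline ratio at deployment was 0.92; this week's scoring batch shows 0.78
alert = check_fairness_drift(0.92, 0.78)
print(alert)  # drift of 0.14 breaches both the tolerance and the 0.8 floor
```

A check like this would run on every scoring batch, with flagged models routed to the manual ‘human-in-the-loop’ review described above rather than being silently retrained.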

Human confirmation bias mitigation: Implement an AI Fairness policy that details how algorithms are employed in critical scenarios, along with delineated responsibilities for humans and AI in each scenario.

  • Create an AI Ethics Charter with consultation from the wider enterprise to ensure distinct accountabilities and responsibilities are set for humans and AI, in applying an algorithm’s output.
  • Learn from global regulatory policies like the Institute of Electrical and Electronics Engineers, EU Regulations on AI, Chinese Regulations on AI, Australian Standards on Responsible AI, Human Rights Commission studies and more.
  • Using the data literacy program, increase awareness on the risks associated with using AI and algorithms, along with mitigation strategies and governance process around them.

Solving for AI Bias is not just a technical challenge

The first rule of mitigating AI bias is to understand that it is not purely about the data, the technology or the algorithms. Even with the best dataset bias mitigation tools and the fairest algorithms, most ethics failures happen at the last mile – the implementation. When humans are involved in the final decision, that step is always the hardest to control or mitigate. So any AI ethics conversation should start with the humans in the process and work backwards to the technology solutions.

Assuming sufficient awareness is created, and humans all sign up to do the right thing and swear by the new AI Hippocratic Oath, the first port of call is not ‘explainable AI’, as many experts would have you believe. It is understanding the bias in the data we use. Even with the most explainable and fairest of AI algorithms, if the input data is biased and unfair, the eventual outcome will be unfair.

The first question all data and business teams need to ask after committing to do the right thing is: How clean/fair is my data?

Bringing fairness to life via powerful forums

While the measures above control for the technical part of the algorithms, it is still imperative for organisations to have real-life forums that discuss, debate and decide which algorithms can eventually be used. Critically, organisations need to stand by their ethical choices, just as they do in any other ethical decision-making process involving finances, people or technology. Some examples:

AI oversight forums

In this age of pervasive machine learning and AI applications, organisations have a pressing need for ‘AI oversight forums’ to account for AI bias and fairness in key algorithmic decisions. While accountability for building fair AI systems rests with the data and machine learning teams, oversight and guidance on fairness measures and bias mitigation should be driven by these forums, composed of a cross-section of the organisation.

It is worth remembering that bringing stereotypes to the fore and asking tough questions of the algorithm and its outputs is a fundamental expectation of these oversight forums. If there is nothing to hide, the data and algorithms should be able to either explain their outputs or call out their opaqueness.

For example, when a marketing algorithm identifies a cohort of people most likely to buy a luxury item next, and analysis shows almost all of them are women, someone should ask: “Are we unfairly targeting women, or is this a genuine need for them? What does the data tell us about women’s needs that justifies this?” It is then important to log these decisions for everyone to know and follow. These forums are even more critical than the technical and data measures.

AI ethics charters

Charters work because nothing trumps the clarity of the written word and the mandates it lays down for a particular domain. AI fairness and ethics is one such domain, and it is worth having a dedicated charter on AI systems that covers the right to reasonable inferences, transparency and explainability in critical decision making, and market and personal impact assessments of the algorithms used in critical decisioning systems. This can be driven by the oversight forums.

For example, the Australian Government has done some pioneering work in defining solid starting principles in Australia’s Artificial Intelligence Ethics Framework, along with real-world use cases from several leading Australian organisations.

Summary

Bias in our algorithms poses a significant threat not only to customers and others about whom decisions are made, but also creates legal risks and liabilities for organisations. Ensuring fairness in our AI systems will be an ongoing process, requiring a combination of technology, process and people changes to succeed. It is recommended that organisations:

  • Build fairness and bias metrics to account for dataset bias mitigation – and implement them as part of the current data discovery and cataloguing process.
  • Define and run a model governance process to account for new-age algorithms, and implement the ‘human-in-the-loop’ aspect of evaluating key AI algorithms.
  • Create an AI Ethics Framework with inputs from the best regulations in the world on responsible AI and contribute to the upcoming Australian regulations on Responsible AI.
  • Integrate AI governance as part of Audit oversight functions and enable the organisational leadership to monitor AI fairness measures in key decisioning systems.
