
South by Southwest: The inconvenient truth in AI

Ogilvy's Jason Davey reports from this year's SXSW event on the changing dialogue around artificial intelligence


This time last year, I wrote that advances in technology would impact society, not just marketing. This year, that theme runs strongly through many of the topics being presented at SXSW.

Artificial Intelligence (AI) is being discussed in every track of SXSW, across filmmaking, interactive and music. But this year the tone is different. The penny has dropped that without careful consideration, and a diversity of input, AI could fast become the worst version of humanity itself - racist, bigoted, gender-biased and exclusive.

The driver is the unconscious bias embedded in the vast history of documents and data that AI learns from. Personalised search algorithms built on AI often filter out the diversity of human ideas. And unfortunately, humans have an evolutionary predisposition towards bad news; it has helped us survive for thousands of years, and it is exactly what engagement-driven algorithms learn to amplify.
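To make that concrete, here is a minimal, invented sketch in Python of how a system that simply learns frequencies from a biased historical record ends up recommending the same bias going forward. The data, roles and function names are hypothetical illustrations, not a description of any real system.

```python
# Illustrative sketch only: a toy "model" that learns purely from the
# frequencies in a biased historical record, then reuses those frequencies
# to make predictions. All data here is invented to show the mechanism.
from collections import Counter

# Hypothetical historical hiring records: (role, candidate gender, hired?)
history = [
    ("engineer", "male", True), ("engineer", "male", True),
    ("engineer", "female", False), ("engineer", "male", True),
    ("engineer", "female", False), ("engineer", "male", True),
]

# "Training": count how often each (role, gender) pair led to a hire.
hires = Counter()
seen = Counter()
for role, gender, hired in history:
    seen[(role, gender)] += 1
    hires[(role, gender)] += int(hired)

def predicted_hire_probability(role, gender):
    """Predict purely from past frequencies - so past bias becomes future policy."""
    key = (role, gender)
    return hires[key] / seen[key] if seen[key] else 0.0

print(predicted_hire_probability("engineer", "male"))    # 1.0
print(predicted_hire_probability("engineer", "female"))  # 0.0 - the bias is reproduced
```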

This is now having a profound impact on global society, driven by fake news, hate speech, trolling and unfiltered direct messaging.

From the Mayor of London to Google's Chief Scientist of Machine Learning, many leading figures have challenged the major technology developers to take responsibility for their impact on culture.

As purveyors of global communications platforms, major players such as Facebook and Google are now recognised as having a significant impact on humanity, and it's not all good. Their AI algorithms are not yet designed to reflect the diversity and interests of a global society; instead, they curate an experience based on your 'interests', no matter how extreme those may be. The result is a narrow-minded culture of convention, driven by the lack of diversity in your news 'feed'. It's not a feed, it's tunnel vision.
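As a rough illustration of that tunnel-vision effect, here is a deliberately simplified sketch, assuming a feed ranked purely by overlap with topics a user has already clicked. The stories, topics and function names are invented for the example and do not describe any platform's actual algorithm.

```python
# Illustrative sketch only: a feed ranked solely by overlap with topics the
# user has already engaged with keeps surfacing more of the same.
stories = [
    {"title": "Local election results", "topics": {"politics"}},
    {"title": "New climate report", "topics": {"science", "climate"}},
    {"title": "Conspiracy theory goes viral", "topics": {"politics", "outrage"}},
    {"title": "Community arts festival", "topics": {"culture"}},
]

clicked_topics = {"politics", "outrage"}  # the user's past engagement

def rank_feed(stories, clicked_topics):
    """Score each story only by overlap with past interests, highest first."""
    return sorted(
        stories,
        key=lambda story: len(story["topics"] & clicked_topics),
        reverse=True,
    )

for story in rank_feed(stories, clicked_topics):
    print(story["title"])
# The outrage and politics items float to the top; science and culture sink,
# so the next round of clicks narrows the 'interests' even further.
```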

This is exactly the opposite of the original intent and promise of the Web: openness, equality and global connectedness.

Almost limitless access to information means the world is more transparent than ever before. So brands need to clearly define and execute a purpose that is socially and culturally responsible, and to select AI tools that reflect it.

Without that, AI-assisted decision-making will not be forward-thinking, inclusive and diverse (and therefore better); it will ruthlessly reflect the bias of the past. Society will not move forward, but slip back into a worse version of itself.

AI will have a profound impact on our ethics, culture, economic welfare and, of course, our brands. We must all take responsibility for holding AI developers accountable for their designs, ensuring diversity and inclusion in our AI-driven future.

It starts with understanding that this technology is not divorced from humanity; it is a reflection of it. We must become better at identifying bias and removing it - from our workplaces, our politics and our advertising.