AI tech founder urges business leaders, innovators to consider ethical responsibility

Flamingo AI founder and executive director, Dr Catriona Wallace, shares insight into the positive and negative human implications of the world's next disruptive societal force

It is incumbent upon business leaders and Australian organisations to put diversity and the ethical implications of artificial intelligence (AI) at the heart of innovation if we’re to ensure the world’s third major disruptive force is harnessed for human good.

That was the big call-out made by Dr Catriona Wallace, founder and executive director of the ASX-listed machine learning tech innovator, Flamingo AI, during this week’s CeBIT conference in Sydney. Speaking on the rise of AI and the relationship between humans and machines, the entrepreneur highlighted several facts and figures on the extent of AI impact and innovation over the short and longer-term horizon, as well as the positive and negative potential human consequences that come with it.

As outlined by Dr Wallace, disruptive technologies such as AI are predicted to be the third of three major problems facing the world that could detrimentally affect humanity. The other two are climate change and nuclear war. To emphasise the extent to which AI is proliferating, she pointed out US$38 billion has been invested in AI in the last 12 months, a figure due to grow 12-fold in the next five years.

AI is classified into five main categories: Deep learning, or neural network-based algorithms; natural language processing; computer vision or face recognition; big data; and robots and sensors. It’s also typically put into three pots: Narrow AI, which is highly sophisticated; general AI, which sits at about 90 per cent accuracy today; and super AI, which is expected in the next 30 years and represents robots and machines becoming faster and more capable of doing work and programming than humans.

In terms of AI for good, there are plenty of positive applications in market today, and Dr Wallace pointed to a range, including telematics devices capable of predicting when an individual is having a heart attack before they realise it; disability assistance tools like Seeing AI for those with vision impairments; and facial recognition-based tools to find missing children.

All illustrate there’s “extraordinary capability for human good” that comes with the application of AI, Dr Wallace said.

Yet it’s also a disruptive force. Within the next five years, for instance, 40 per cent of service and administration jobs in industries such as financial services, utilities and insurance will be automated. In addition, 30 per cent of customer interactions in the next three years will be automated or conducted by robots and machines, and 6.2 billion hours of work and productivity will be saved by robots or machines doing the work.

Dr Wallace also pointed out Gartner research suggests AI will eliminate 1.8 million jobs by 2020, while creating 2.3 million new ones. Many job losses will be those held by women and minority groups. It’s also expected many of the 1.8 million will lack the skills or retraining opportunities to take up the new jobs created.

Ethical conundrum

But it’s the ethical implications that arguably represent a more profound risk. According to Gartner predictions, in the next three years, 85 per cent of all AI projects will have an erroneous outcome through mistakes, errors, bias and things that go wrong.

“This is because we are moving so fast and there is such little regulation and guidance around this,” Dr Wallace said. “Entrepreneurs are five years ahead of government – by the time they catch up, we will have invented the next big thing. It is a human-created problem.”

One of the big challenges is diversity and bias. As noted by Dr Wallace, 90 per cent of all coding of emerging technology today is done by men.

“This means we’re very challenged by the notion of diversity in terms of code, data and the outcomes of what these machines running our lives will do,” she said.

The Babylon heart attack app is a telling example: Having been trained on data sets that detected heart attack symptoms in men and classified them as life threatening, the app reported the same symptoms in women as a panic attack.

“This is indicative of the inherent notion of bias in data that can have the most detrimental effect,” Dr Wallace said.  

Even in a Google search today you’ll find bias, and Dr Wallace encouraged the audience to survey a few of the resulting images for searches such as unprofessional hairstyles and the best CEOs in the world.

“We are using data sets that are inherently biased to represent the world today. These are loaded into machines, we have men predominantly coding it, and those are the results,” she continued. “Children are using these apps to determine how they behave and role model. It’s a major problem.”

A step forward, and one which Dr Wallace is involved in, is the Australian Government and Human Rights Commission’s work to define the country’s first AI ethics and human rights framework. Announced earlier this year, the $29 million project has identified eight principles for helping AI innovation become more humane.

The eight principles are: Generate net benefits, do not harm, regulatory and legal compliance, privacy protection, fairness, transparency and explainability, contestability, and accountability.

“The most important thing we can think about going forward is how do we – as the government and industry leaders try to figure this out – address this [bias question]. It’s incumbent upon us as business leaders to start to lead this, to think about diversity, and how we can build ethical AI so this third major disruptive force upon the world is one we’re actively doing something about,” Dr Wallace said. “Be curious, listen as much as you can and then experiment – trial AI in your homes, apps, businesses.”

Dr Wallace also called for more innovation in team building. “We need to think more creatively about the teams and individuals leading projects and have minority groups, gender balance on teams,” she said. “These individuals are not necessarily doing the coding, but could be an ‘AI ethicist’ on the team.

“Then I implore you to help lead humanity through this potentially difficult time with great, ethical leadership.”  
