AI tech founder urges business leaders, innovators to consider ethical responsibility
- 30 October, 2019 12:43
It is incumbent upon business leaders and Australian organisations to put diversity and the ethical implications of artificial intelligence (AI) at the heart of innovation if we’re to ensure the world’s third major disruptive force is harnessed for human good.
That was the big call-out made by Dr Catriona Wallace, founder and executive director of the ASX-listed machine learning tech innovator, Flamingo AI, during this week’s CeBIT conference in Sydney. Speaking on the rise of AI and the relationship between humans and machines, the entrepreneur highlighted several facts and figures on the extent of AI impact and innovation over the short and longer-term horizon, as well as the good and negative potential human consequences that come with it.
As outlined by Dr Wallace, disruptive technologies such as AI are predicted to be the third of three major problems the world is facing that could detrimentally affect humanity; the other two are climate change and nuclear war. To emphasise the extent to which AI is proliferating, she pointed out US$38bn has been invested in AI in the last 12 months, a figure due to grow 12-fold in the next five years.
AI is classified into five main categories: Deep learning, or neural network-based algorithms; natural language processing; computer vision or face recognition; big data; and robots and sensors. It’s also typically put into three pots: Narrow AI, which is highly sophisticated; general AI, which sits at about 90 per cent accuracy today; and super AI, which is expected in the next 30 years and represents robots and machines becoming faster and more capable of doing work and programming than humans.
In terms of AI for good, there are plenty of positive applications in market today, and Dr Wallace pointed to a range, including telematics devices capable of predicting when an individual is having a heart attack before they realise it; disability assistance tools such as Seeing AI for those with vision impairment; and facial recognition-based tools to find missing children.
All illustrate there’s “extraordinary capability for human good” that comes with the application of AI, Dr Wallace said.
Yet it’s also a disruptive force. Within the next five years, for instance, 40 per cent of service and administration jobs in industries such as financial services, utilities and insurance will be automated. In addition, 30 per cent of customer interactions in the next three years will be automated or conducted by robots and machines, and 6.2 billion hours of work and productivity will be saved by robots or machines doing the work.
Dr Wallace also pointed out Gartner research suggests AI will eliminate 1.8 million jobs by 2020, while creating 2.3 million new ones. Many of the jobs lost will be those held by women and minority groups. It’s also expected many of those 1.8 million people will lack the skills or retraining opportunities to take up the new jobs created.
But it’s the ethical implications that arguably represent a more profound risk. According to Gartner predictions, in the next three years 85 per cent of all AI projects will deliver an erroneous outcome through mistakes, error or bias.
“This is because we are moving so fast and there is so little regulation and guidance around this,” Dr Wallace said. “Entrepreneurs are five years ahead of government – by the time they catch up, we will have invented the next big thing. It is a human-created problem.”
One of the big challenges is diversity and bias. As noted by Dr Wallace, 90 per cent of all coding of emerging technology today is done by men.
“This means we’re very challenged by the notion of diversity in terms of code, data and the outcomes of what these machines running our lives will do,” she said.
The Babylon heart attack app is a telling example: Because it was trained on data sets that detected heart attack symptoms in men and classified them as life-threatening, the app reported the same symptoms in women as a panic attack.
“This is indicative of the inherent notion of bias in data that can have the most detrimental effect,” Dr Wallace said.
Even in a Google search today you’ll find bias, and Dr Wallace encouraged the audience to search for images of ‘unprofessional hairstyles’ and ‘the best CEOs in the world’ and survey the results.
“We are using data sets that are inherently biased to represent the world today. These are loaded into machines, we have men predominantly coding it, and those are the results,” she continued. “Children are using these apps to determine how they behave and who they model themselves on. It’s a major problem.”
A step forward, and one which Dr Wallace is involved in, is the Australian Government and Human Rights Commission’s work to define the country’s first AI ethics and human rights framework. Announced earlier this year, the $29 million project has identified eight principles for helping AI innovation become more humane.
The eight principles are: Generate net benefits, do not harm, regulatory and legal compliance, privacy protection, fairness, transparency and explainability, contestability, and accountability.
“The most important thing we can think about going forward is how do we – as the government and industry leaders try to figure this out – address this [bias question]. It’s incumbent upon us as business leaders to start to lead this, to think about diversity, and how we can build ethical AI so this third major disruptive force upon the world is one we’re actively doing something about,” Dr Wallace said. “Be curious, listen as much as you can and then experiment – trial AI in your homes, apps, businesses.”
Dr Wallace also called for more innovation in team building. “We need to think more creatively about the teams and individuals leading projects, and have minority groups and gender balance on teams,” she said. “These individuals are not necessarily doing the coding, but could be an ‘AI ethicist’ on the team.
“Then I implore you to help lead humanity through this potentially difficult time with great, ethical leadership.”