CMO

Spotlight on facial recognition after IBM, Amazon and Microsoft bans

As some big tech players stop selling facial recognition to law enforcement over racial bias, where does it leave the wider use of this controversial technology?

IBM was the first big tech outfit to announce it was shutting down its facial recognition business amid the Black Lives Matter racial discrimination protests in the US and around the world. Since then, Amazon and Microsoft have followed suit, with all three declining to continue providing the technology for policing in the US.

As influential players in the technology landscape, these decisions to step away from facial recognition have turned the spotlight on the use of this controversial technology in policing and wider society. Facial recognition technology has come under fire for racial bias due to inaccurate facial matches, particularly when it comes to the faces of people of colour. 

Gartner research director, Nick Ingelbrecht, told CMO the IBM decision alone does not make much difference because it was not a significant player in the facial recognition market nor the physical security market. Nevertheless, it's part of a larger shift away from the intrusive, problematic technology.

“Large tech companies like Google, AWS, Microsoft, IBM and others have come to recognise the reputational risks associated with the use of their technology in controversial circumstances and are increasingly steering clear of facial recognition,” he said.

In turn, this raises significant questions about the potential applications of facial recognition, along with other pervasive technologies and data sets that may carry gender, racial or other biases, in future consumer use cases.

Questioning the IBM, Amazon moves

IBM CEO, Arvind Krishna, in a letter to the US Congress on the issue of racial justice reform, said the company opposes technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose not consistent with its values of trust and transparency. 

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” Krishna wrote.

“Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

“Finally, national policy also should encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.”

For its part, Amazon announced a self-imposed one-year “moratorium” on the use by policing outfits of its Rekognition facial recognition platform, although it will continue to provide the technology to other organisations. Perhaps wanting to highlight how the face matching smarts can be used for good, Amazon said Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics will still be able to use Rekognition “to help rescue human trafficking victims and reunite missing children with their families”.

In its brief blog post, Amazon said it had “advocated governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge.

“We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested,” the company stated.

Michael Connor, executive director of Open MIC, a non-profit that works with shareholders to foster corporate accountability in the tech sector, welcomed the news after years of shareholders’ organising to push Amazon to end what was described as "sales of harmful, unregulated technology to police”.

“But it’s only a temporary moratorium, and it doesn’t address deeper concerns that shareholders have regarding Amazon’s role in a rapidly-developing surveillance economy,” he said.

Open MIC said that just two weeks ago, 40 per cent of the company’s independent shareholders voted in favour of two separate resolutions raising serious concerns about the civil and human rights violations posed by Rekognition, as well as by Amazon’s Ring doorbell technology and other surveillance products.

Connor also pointed to influential work by researchers at MIT, which found Amazon's Rekognition demonstrates gender and racial bias, and is far more likely to misidentify women and people with dark skin than white men. 

“It’s difficult to reconcile the company’s support for regulation - and acknowledgement of the potential negative, discriminatory impacts of the technology - with its willingness to risk these impacts by selling Rekognition in an unregulated environment," Connor told the company’s senior management and board members.

Algorithmic Justice League founder and MIT researcher, Joy Buolamwini, commended IBM’s decision, but called for systemic change.

“It is welcome recognition that facial recognition technology, especially as deployed by police, has been used to undermine human rights, and to harm Black people specifically, as well as Indigenous people and other People of Color (BIPOC),” she wrote last week.

The Algorithmic Justice League has also called on companies that continue to provide facial recognition technologies to sign the Safe Face Pledge, a mechanism it developed for organisations to mitigate the abuse of facial recognition and analysis technology. The pledge prohibits lethal and lawless police use of the technology, and requires transparency in any government use.

Meanwhile, Microsoft said it does not and will not sell its facial recognition technology to law enforcement until there is a nationwide law in the US to regulate how it’s used. Microsoft president, Brad Smith, told The Washington Post, “we need Congress to act, not just tech companies alone, that is the only way that we will guarantee that we will protect the lives of people".

In Australia, NEC’s facial recognition technology is used in both government and commercial sectors for building access, personal verification and detection of licence and documentation fraud, rather than law enforcement. Worldwide, it has more than 1,000 biometric systems in 70 countries and regions, and a spokesperson told CMO it has gone to great lengths to ensure its facial recognition algorithms are accurate across racial and other demographic groups.

The company said it has core values, including AI and human rights principles, and is committed to helping to end racial injustice in our society. “A brighter world will never exist while systemic racism and other forms of social injustice continue to oppress the Black and other marginalised communities.”

“NEC remains committed to the responsible use of our technologies in support of public safety and social justice. We believe that public safety agencies should be able to use advanced facial recognition and other innovative technologies to help correct inherent biases, protect privacy and civil liberties, and fairly and effectively conduct investigations for social justice.”

But it's not just the big tech players feeling the heat on facial recognition. Only recently, Clearview AI was compelled to stop offering its technology to non-police and private organisations after it was revealed its database includes images scraped from social media platforms against their terms of service. However, Clearview AI continues to sell to law enforcement organisations, and The New York Times reports more than 600 US law enforcement agencies are using its technology. Clearview AI allegedly continues to ignore requests from a US senator for independent testing of its platform, including bias and accuracy evaluations.

At the same time, the European Data Protection Board has warned Clearview AI’s technology is likely to be illegal in Europe and Google, Facebook and Twitter have sent cease and desist letters to Clearview about its image scraping via their digital platforms.

Another Web-based facial recognition outfit, PimEyes, has also come under fire for the way its technology is open to abuse. The platform enables users to upload a facial image and search the internet for other images of the same person.

The UK’s Big Brother Watch told the BBC the “powerful surveillance tech marketed to individuals is chilling”. PimEyes said it only crawls websites which allow public scraping in their terms of service.

Should facial recognition be retired for good?

In the wake of the focus on privacy and surveillance issues with facial recognition technology, some experts have said rules and guidelines won’t offer real protection from this invasive technology. Monash University research fellow in the emerging technologies research lab, Dr Jathan Sadowski, said that, while moratoriums and stronger regulation are a step in the right direction, policies banning facial recognition altogether are needed.

“It's much easier and more ethical to ban facial recognition than it is to try to create ‘best practices’ for ‘ethical use’ of technology as dangerous as facial recognition,” Sadowski said.

“Amazon does, and still will, provide facial recognition services, as well as other surveillance infrastructure, to a wide range of non-policing organisations. So this moratorium is only a temporary pause for one of its major institutional users.

“For justice activists and regulatory advocates, I see this not so much as winning a battle, but as an opportunity to go on the offensive and actually extract real and lasting victories from the policing-industrial complex.”
