Facial recognition use found in breach of Australian privacy law
- 03 November, 2021 16:47
Facial recognition companies and brands endeavouring to use the technology have been put on notice by Australia’s privacy commissioner after two rulings determined such data collection to be a breach of consumer privacy.
The Office of the Australian Information Commissioner (OAIC) today found Clearview AI in breach of privacy legislation for collecting images and biometric information from individuals in Australia without their consent and for the purposes of commercial gain.
The joint investigation into the US-based company, launched by the OAIC and the UK Information Commissioner’s Office (ICO) in March 2020, found the facial recognition technology company to be in breach of privacy legislation for collecting Australians’ sensitive information without their consent.
US-based Clearview has created a facial recognition tool that’s been used to build a database of more than 3 billion images globally. To do this, the company takes data from social media platforms and publicly available websites, then links to where the photos appeared for identification purposes.
In its determination, the OAIC found Clearview collected personal information by unfair means, did not take reasonable steps to notify individuals that such personal information was collected, and did not take reasonable steps to ensure the information it disclosed was accurate. Clearview was also found to have breached the Privacy Act by not taking reasonable steps to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.
The company had collected data between October 2019 and March 2020 as part of a trial of its facial recognition tool with several Australian police forces, which then used the images to conduct searches. The users included the Queensland, Victorian, South Australian and Australian Federal Police forces.
The OAIC is also finalising an investigation into the Australian Federal Police’s trial use of the technology and whether it complied with requirements under the Australian Government Agencies Privacy Code to assess and mitigate privacy risks.
Clearview has been ordered to cease collecting facial images and biometric templates from individuals in Australia, and to destroy the Australian biometric information it has already collected within 90 days.
In its determination, the OAIC noted the lack of transparency around Clearview AI’s collection practices, the monetisation of individuals’ data for a purpose entirely outside reasonable expectations, and the risk of adversity to people whose images are included in its database. Commissioner Angelene Falk also found the privacy impacts of the facial recognition system were not necessary, legitimate and proportionate when weighed against public interest benefits.
“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk stated. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.
“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification.
“The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”
In its defence, Clearview AI had argued the information it handled was not personal information and that, as a company based in the US, it fell outside the Privacy Act’s jurisdiction. Clearview also claimed it stopped offering its services to Australian law enforcement shortly after the OAIC’s investigation began.
However, the OAIC clearly disagreed. In the determination, it also noted Clearview had temporarily provided an opt-out process for Australian residents in January 2020, which required individuals to supply an email address and image verification. This form was removed during the course of the OAIC’s investigation.
“Consent may not be implied if an individual’s intent is ambiguous or there is reasonable doubt about the individual’s intention,” Falk stated in the determination. “I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes. In fact, this expectation is actively discouraged by many social media companies’ public-facing policies, which generally prohibit third parties from scraping their users’ data.
“Moreover, consent could certainly not be inferred where an individual’s image is uploaded by another individual [including individuals depicted in the background of a scraped image] or where an individual inadvertently posts content on a social media website without changing the public default settings.”
Falk said Clearview AI’s activities in Australia involve the automated and repetitious collection of sensitive biometric information from Australians on a large scale, for profit.
“The company’s patent application also demonstrates the capability of the technology to be used for other purposes such as dating, retail, dispensing social benefits and granting or denying access to a facility, venue or device,” she said.
“This case reinforces the need to strengthen protections through the current review of the Privacy Act, including restricting or prohibiting practices such as data scraping personal information from online platforms.
“It also raises questions about whether online platforms are doing enough to prevent and detect scraping of personal information.”
The OAIC said the ICO is still considering next steps and potential formal regulatory action under UK data protection laws. The joint investigation was conducted under the Global Privacy Assembly's Global Cross Border Enforcement Cooperation Arrangement and the MOU between the OAIC and ICO.
OAIC also finds against 7-Eleven for facial recognition use
The OAIC’s decision on Clearview AI comes just a few weeks after it also ruled 7-Eleven had interfered with consumer privacy by employing facial recognition while surveying customers about their in-store experiences.
The surveys were completed between June 2020 and August 2021 on tablets with built-in cameras installed in 700 stores. Customers completed 1.6 million surveys in the first 10 months.
The determination found individuals did not give either express or implied consent to the collection of their facial images or faceprints. The OAIC said 7-Eleven also failed to take reasonable steps to notify individuals of the collection of personal information.
The OAIC said large-scale collection of sensitive biometric information through 7-Eleven’s customer feedback mechanism was not considered reasonably necessary for the purpose of understanding and improving customers’ in-store experience.
In her determination, Falk classified facial images and faceprints as sensitive information covered by additional protections under the Privacy Act 1988 because they are ‘biometric information that was used for the purpose of automated biometric identification’.
“Biometric information is unique to an individual and cannot normally be changed,” Falk said. “Entities must carefully consider whether they need to collect this sensitive personal information, and whether the privacy impacts are proportional to achieving the entity’s legitimate functions or activities.
“While I accept that implementing systems to understand and improve customers’ experience is a legitimate function for 7-Eleven’s business, any benefits to the business in collecting this biometric information were not proportional to the impact on privacy.”
7-Eleven has since ceased collecting facial images and faceprints as part of the customer feedback mechanism. It has also destroyed the facial images it collected, as required by the commissioner.