Clearview AI ordered to delete all facial recognition data belonging to Australians

The company breached Australian privacy law

Controversial facial recognition firm Clearview AI has been ordered to destroy all images and facial templates belonging to individuals living in Australia by the country’s national privacy regulator.

Clearview, which claims to have scraped 10 billion images of people from social media sites in order to identify them in other photos, sells its technology to law enforcement agencies. It was trialled by the Australian Federal Police (AFP) between October 2019 and March 2020.

Now, following an investigation, Australia’s privacy regulator, the Office of the Australian Information Commissioner (OAIC), has found that the company breached citizens’ privacy. “The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” said OAIC privacy commissioner Angelene Falk in a press statement. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

Said Falk: “When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes. The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”

The investigation into Clearview’s practices by the OAIC was carried out in conjunction with the UK’s Information Commissioner’s Office (ICO). However, the ICO has yet to make a decision about the legality of Clearview’s work in the UK. The agency says it is “considering its next steps and any formal regulatory action that may be appropriate under the UK data protection laws.”

As reported by The Guardian, Clearview itself intends to appeal the decision. “Clearview AI operates legitimately according to the laws of its places of business,” Mark Love, a lawyer for the firm BAL Lawyers representing Clearview, told the publication. “Not only has the commissioner’s decision missed the mark on the manner of Clearview AI’s operation, the commissioner lacks jurisdiction.”

Clearview argues that the images it collected were publicly available, so no breach of privacy occurred, and that they were published in the US, so Australian law does not apply.

Around the world, though, there is growing discontent with the spread of facial recognition systems, which threaten to eliminate anonymity in public spaces. Yesterday, Facebook parent company Meta announced it was shutting down the social platform’s facial recognition feature and deleting the facial templates it created for the system. The company cited “growing concerns about the use of this technology as a whole.” Meta also recently paid a $650 million settlement after the technology was found to have breached privacy law in the US state of Illinois.