
Controversial facial recognition firm Clearview AI facing legal claims after damning NYT report

Clearview has reportedly been overstating the effectiveness of its product


A series of wireframe faces. Illustration by Alex Castro

Clearview AI, an artificial intelligence firm providing facial recognition technology to US law enforcement, may be overstating how effective its services are in catching terrorist suspects and preventing attacks, according to a report from BuzzFeed News.

The company, which gained widespread recognition from a New York Times story published earlier this month, claims it was instrumental in identifying, from video footage, a New York suspect who had placed three rice cookers disguised as explosive devices around New York City last August, creating panic and setting off a citywide manhunt. BuzzFeed News found via a public records request that Clearview AI has been claiming in promotional material that law enforcement linked the suspect to an online profile in only five seconds using its database. But city police now say this is simply false.

Clearview falsely claimed it helped the NYPD catch a terrorism suspect last year

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” an NYPD spokesperson told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

The NYPD now says it has no formal relationship with Clearview, despite the company’s claims otherwise both in the promotional material it’s using to pitch its technology around the country and even publicly on its website. Clearview CEO Hoan Ton-That now says the NYPD is using its technology “on a demo basis,” BuzzFeed reports.

In a blog post published on Thursday responding to criticism, Clearview says it has rejected the idea of producing a public, consumer-facing facial recognition app that could be accessed by anyone.

“Clearview’s app is not available to the public. While many people have advised us that a public version would be more profitable, we have rejected the idea,” the post reads. “Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only.”

Clearview has built out its database in part by scraping social media profiles

Clearview has quickly risen to the forefront of the national conversation around facial recognition technology — in particular, growing concern among activists and politicians over how it may be used to violate civil rights and whether it’s being adopted too quickly based on false or misleading claims about its effectiveness. Amazon, which makes a cloud-based facial recognition product called Rekognition, has faced similar criticism for selling its technology to law enforcement despite repeated concerns from academics and activists who say it is flawed when used to try to identify darker-skinned and female individuals.

Clearview is also facing challenges from platforms in the wake of the NYT report. Twitter has sent Clearview a cease-and-desist letter demanding that the company stop scraping its platform for photos to include in its database. Twitter also demanded the company delete any existing data it may have obtained from the platform because using it to fill out a third-party database without user consent is against Twitter’s policies. Clearview has acknowledged publicly that it built out its database in part by scraping social media profiles.

Additionally, the New Jersey Office of the Attorney General has barred the state’s police departments from using Clearview, and sent a cease-and-desist to Clearview on Friday after the Department of Law and Public Safety discovered that a photo of New Jersey AG Gurbir S. Grewal was being used on Clearview’s website to falsely promote its product as having been used in a 2019 child predator sting.

Members of Congress are also expressing concerns over the product. Sen. Ed Markey (D-MA), a vocal critic of Silicon Valley privacy practices and overreach, also sent a letter to Ton-That earlier this month demanding the company provide crucial information about its practices and technology. The list of questions includes information on which law enforcement agencies Clearview is working with, results of internal bias and accuracy tests if there are any, whether the company plans to market its technology to individuals or third-party companies beyond law enforcement, and its child privacy protections, among other info.

“The ways in which this technology could be weaponized are vast and disturbing. Using Clearview’s technology, a criminal could easily find out where someone walking down the street lives or works. A foreign adversary could quickly gather information about targeted individuals for blackmail purposes,” reads Markey’s letter. “Clearview’s product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified.”

In one particularly dystopian twist, The New York Times reported that Clearview had identified and reached out to police officers who may have been talking with journalists by checking logs of which officers uploaded photos of those journalists into Clearview’s app. “It’s extremely troubling that this company may have monitored usage specifically to tamp down on questions from journalists about the legality of their app,” Sen. Ron Wyden (D-OR) tweeted last Sunday.

Update January 25th, 2:30PM ET: Added new information regarding a cease-and-desist from the New Jersey Attorney General’s Office, and that New Jersey police have been barred from using the app.