Go read this NYT expose on a creepy new facial recognition database used by US police

Hundreds of agencies are using a system which scraped billions of photos from the internet

Illustration by James Bareham / The Verge

Hundreds of law enforcement agencies across the US have started using a new facial recognition system from Clearview AI, an investigation by The New York Times has revealed. The database is made up of billions of images scraped from millions of sites, including Facebook, YouTube, and Venmo. The Times says that Clearview AI’s work could “end privacy as we know it,” and the piece is well worth a read in its entirety.

The use of facial recognition systems by police is already a growing concern, but the scale of Clearview AI’s database, not to mention the methods used to assemble it, is particularly troubling. The Clearview system is built upon a database of over three billion images scraped from the internet, a process that may have violated those sites’ terms of service. Law enforcement agencies can upload a photo of a person of interest from a case, and the system returns matching pictures from the internet, along with links to where those images are hosted, such as social media profiles.
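
The Times doesn’t describe Clearview’s internals, but search tools of this kind are typically built on face embeddings compared via nearest-neighbor lookup. The sketch below is a minimal, hypothetical illustration of that idea in Python: it uses random vectors in place of a real face-embedding model, and none of the names, sizes, or URLs come from Clearview.

```python
import numpy as np

# Hypothetical illustration only: a real system would replace these random vectors
# with embeddings produced by a face-recognition model run over scraped photos.
rng = np.random.default_rng(0)

# "Index" of scraped photos: each embedding is paired with the URL it was scraped from.
index_embeddings = rng.normal(size=(1000, 128))  # assumed 128-d face embeddings
index_urls = [f"https://example.com/photo/{i}" for i in range(1000)]

def search(probe_embedding, top_k=5):
    """Return the top_k most similar indexed photos by cosine similarity."""
    a = index_embeddings / np.linalg.norm(index_embeddings, axis=1, keepdims=True)
    b = probe_embedding / np.linalg.norm(probe_embedding)
    scores = a @ b                      # cosine similarity against every indexed photo
    best = np.argsort(scores)[::-1][:top_k]
    return [(index_urls[i], float(scores[i])) for i in best]

# An uploaded "person of interest" photo would be embedded the same way,
# then matched against the index to return source links.
probe = rng.normal(size=128)
for url, score in search(probe):
    print(url, round(score, 3))
```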

The NYT says the company’s work could “end privacy as we know it”

The NYT says the system has already helped police solve crimes including shoplifting, identity theft, credit card fraud, murder, and child sexual exploitation. In one instance, Indiana State Police were able to solve a case within 20 minutes by using the app.

The use of facial recognition algorithms by police carries risks. False positives can incriminate the wrong people, and privacy advocates fear their use could help create a police surveillance state. Police departments have reportedly used doctored images that could lead to wrongful arrests, and a federal study has uncovered “empirical evidence” of bias in facial recognition systems.

Using the system involves uploading photos to Clearview AI’s servers, and it’s unclear how secure these are. Although Clearview AI says its customer-support employees will not look at the photos that are uploaded, the company appeared to be aware that Kashmir Hill (the Times journalist behind the piece) was having police search for her face as part of her reporting:

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

The Times reports that the system appears to have gone viral with police departments, with over 600 already signed up. Although there’s been no independent verification of its accuracy, Hill says the system was able to identify photos of her even when she covered the lower half of her face, and that it managed to find photographs of her that she’d never seen before.

One expert quoted by The Times said that the amount of money involved in these systems means they need to be banned before their abuse becomes more widespread. “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” said Woodrow Hartzog, a professor of law and computer science at Northeastern University. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”