Use of Clearview AI facial recognition tech spiked as law enforcement seeks to identify Capitol mob

The company’s CEO said use of its tech was up 26 percent the day after the January 6th attack

Photo by SAUL LOEB/AFP via Getty Images

Clearview AI’s CEO says that use of his company’s facial recognition technology among law enforcement spiked 26 percent the day after a mob of pro-Trump rioters attacked the US Capitol. The figure was first reported by The New York Times; CEO Hoan Ton-That confirmed to The Verge that Clearview saw a sharp increase in use on January 7th, compared to its usual weekday search volume.

The January 6th attack was broadcast live on cable news, and captured in hundreds of images and live streams that showed the faces of rioters breaching the Capitol building. The FBI and other agencies have asked for the public’s help to identify participants. According to the Times, the Miami Police Department is using Clearview to identify some of the rioters, sending possible matches to the FBI Joint Terrorism Task Force. And The Wall Street Journal reported that an Alabama police department was also using Clearview to identify faces in images from the riot and sending information to the FBI.

Unlike other facial recognition systems used by authorities, which rely on images such as driver’s license and mug shot photos, Clearview’s database of some 3 billion images was scraped from social media and other websites, as revealed in a Times investigation last year. In addition to raising serious privacy concerns, the practice of taking images from social media violated the platforms’ rules, and tech companies sent numerous cease and desist orders to Clearview in the wake of the investigation.

Nathan Freed Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, said in an email to The Verge that while facial recognition tech is not regulated by federal law, its “potential for mass surveillance of communities of color” has “rightly led state and local governments across the country to ban its use by law enforcement.” Wessler argued that if use of the technology by police departments is normalized, “we know who it will be used against most: members of Black and Brown communities who already suffer under a racist criminal enforcement system.”

Clearview AI said in May it would stop selling its technology to private companies and instead provide it for use by law enforcement only. Some 2,400 law enforcement agencies across the US use Clearview’s software, according to the company.

Update January 10th, 12:49PM ET: Added comment from the ACLU