AI researchers tell Amazon to stop selling ‘flawed’ facial recognition to the police

Studies show that facial recognition technology frequently has higher error rates for minorities

Illustration by Alex Castro / The Verge

AI researchers from Google, Facebook, Microsoft, and a number of top universities have called on Amazon to stop selling its facial recognition technology to law enforcement.

In an open letter published today, researchers say studies have repeatedly shown that Amazon’s algorithms are flawed, with higher error rates for darker-skinned and female faces. The researchers say that if such technology is adopted by the police, it has the potential to amplify racial discrimination, create cases of mistaken identity, and encourage intrusive surveillance of marginalized groups.

“Flawed facial analysis technologies are reinforcing human biases.”

“Flawed facial analysis technologies are reinforcing human biases,” Morgan Klaus Scheuerman, a PhD student at the University of Colorado Boulder and one of 26 signatories of the letter, tells The Verge over email. Scheuerman says that such technologies “can be appropriated for malicious intent ... in ways that the companies supplying them aren’t aware of.”

Other signatories include Timnit Gebru, a Google researcher whose work has highlighted flaws in facial recognition algorithms; Yoshua Bengio, an AI researcher who was recently awarded the Turing Award; and Anima Anandkumar, a Caltech professor and former principal scientist at Amazon’s AWS subsidiary.

Anandkumar tells The Verge over email that she hopes the letter will open up a “public dialogue on how we can evaluate face recognition,” adding that technical frameworks are needed to vet this technology. “Government regulation can only come about once we have laid out technical frameworks to evaluate these systems,” Anandkumar says.

As one of the leading vendors of facial recognition technology, Amazon has had its algorithms repeatedly scrutinized in this way. A study published earlier this year showed that the company’s software has a harder time identifying the gender of darker-skinned men and women, while a test conducted in 2018 by the ACLU found that Amazon’s Rekognition software incorrectly matched photos of 28 members of Congress to police mugshots.

In China, police officers wear sunglasses with built-in facial recognition to spot criminals in public places.

Amazon has defended its technology, and much of the letter published today offers a point-by-point rebuttal of the company’s criticisms. The authors note, for example, that although Amazon says it’s received no reports of law enforcement misusing its facial recognition, that’s not a meaningful statement since there are no laws in place to audit its applications.

In response to the open letter, Amazon reiterated its position that critics’ evaluations were “misleading” and that subsequent updates to the technology offered “improvements in virtually every area of the service.”

As studies uncover more flaws in this technology, pushback from researchers, shareholders, and tech employees is becoming more frequent. Google has declined to sell facial recognition software because of its potential for abuse, while Microsoft has called for government regulation. Exactly how this technology should be overseen, however, remains a difficult question.

“These technologies need to be developed in a way that does not harm human beings, or perpetuate harm to already historically marginalized groups,” Scheuerman says. To get there, he stresses, there needs to be more dialogue between the people developing facial recognition technology and those critiquing it. “This is an interdisciplinary problem that requires an interdisciplinary approach.”

Update Tuesday April 9th, 12:00PM ET: Updated with Amazon’s response.