Gender and racial bias found in Amazon’s facial recognition technology (again)

Research shows that Amazon’s tech has a harder time identifying gender in darker-skinned and female faces

Illustration by Alex Castro / The Verge

As facial recognition systems become more common, Amazon has emerged as a frontrunner in the field, courting customers around the US, including police departments and Immigration and Customs Enforcement (ICE). But experts say the company is not doing enough to allay fears about bias in its algorithms, particularly when it comes to performance on faces with darker skin.

The latest cause for concern is a study published this week by the MIT Media Lab, which found that Rekognition performed worse when identifying an individual’s gender if they were female or darker-skinned. In tests led by MIT’s Joy Buolamwini, Rekognition made no mistakes when identifying the gender of lighter-skinned men, but it mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time.

The study follows research Buolamwini conducted last February, which identified similar racial and gender biases in facial analysis software built by Microsoft, IBM, and Chinese firm Megvii. Shortly after Buolamwini shared her results, Microsoft and IBM both said they would improve their software. And, as this latest study found, they did just that.

Since last February, a number of tech companies have voiced concern about the problems with facial recognition. As bias in algorithms is often the result of biased training data, IBM published a curated dataset it said would boost accuracy. Microsoft has gone even further, calling for regulation of the technology to ensure higher standards so that the market does not become a “race to the bottom.”

Amazon, by comparison, has done little to engage with this debate. The company also denied that this recent research suggested anything about the accuracy of its technology. It noted that the researchers had not tested the latest version of Rekognition, and the gender identification test was facial analysis (which spots expressions and characteristics like facial hair), not facial identification (which matches scanned faces to mugshots).

The two are separate software packages, Amazon says. “It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” Matt Wood, general manager of deep learning and AI at Amazon Web Services, said in a press statement.
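
For readers curious what that distinction looks like in practice, the following is a minimal sketch of the two operations using the boto3 SDK for Python. The image file and the mugshot collection name are hypothetical placeholders, and the snippet assumes AWS credentials for Rekognition are already configured; it is an illustration of the API surface, not of how the MIT study or any law enforcement customer actually used the service.

```python
# Sketch only: shows Rekognition's facial analysis vs. facial identification calls.
# Assumes boto3 is installed and AWS credentials are configured; "face.jpg" and the
# collection ID below are hypothetical placeholders.
import boto3

client = boto3.client("rekognition")

with open("face.jpg", "rb") as f:
    image_bytes = f.read()

# Facial analysis: estimates attributes (gender, facial hair, expressions) for each
# face found in the image. This is the capability the MIT Media Lab study measured.
analysis = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
for face in analysis["FaceDetails"]:
    print("Gender:", face["Gender"]["Value"], "confidence:", face["Gender"]["Confidence"])

# Facial identification: searches a previously built face collection (e.g. mugshots)
# for matches to the face in the image — the use case at issue in law enforcement.
matches = client.search_faces_by_image(
    CollectionId="example-mugshot-collection",  # hypothetical collection name
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=90,
)
for match in matches["FaceMatches"]:
    print("Match:", match["Face"]["FaceId"], "similarity:", match["Similarity"])
```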

Nevertheless, earlier research has found similar problems in Amazon’s facial identification software. In a test conducted last year, the ACLU used Rekognition to scan pictures of members of Congress and found that it falsely matched 28 of them with police mugshots. Amazon blamed the results on poor calibration of the algorithm.

Although bias in facial recognition systems has become a rallying point for experts and researchers who are worried about algorithmic fairness, many warn that it shouldn’t overshadow broader issues. As Buolamwini and co-author Inioluwa Deborah Raji note in their recent paper, just because a facial recognition system performs equally well on different skin colors, that doesn’t stop it from being a tool of injustice or suppression.

The pair writes: “The potential for weaponization and abuse of facial analysis technologies cannot be ignored nor the threats to privacy or breaches of civil liberties diminished even as accuracy disparities decrease.”