IBM will no longer offer, develop, or research facial recognition technology

IBM’s CEO says we should reevaluate selling the technology to law enforcement

Illustration by Alex Castro / The Verge

IBM will no longer offer general-purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY).

“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Facial recognition software has come under scrutiny for issues with racial bias and privacy concerns

Facial recognition software has improved greatly over the last decade thanks to advances in artificial intelligence. At the same time, the technology — because it is often provided by private companies with little regulation or federal oversight — has been shown to suffer from bias along lines of age, race, and ethnicity, which can make the tools unreliable for law enforcement and security and ripe for potential civil rights abuses.

In 2018, research by Joy Buolamwini and Timnit Gebru revealed for the first time the extent to which many commercial facial recognition systems (including IBM’s) were biased. This work and the pair’s subsequent studies led to mainstream criticism of these algorithms and ongoing attempts to rectify bias.

A December 2019 National Institute of Standards and Technology study found “empirical evidence for the existence of a wide range of accuracy across demographic differences in the majority of the current face recognition algorithms that were evaluated,” for example. The technology has also come under fire for its role in privacy violations.

Notably, NIST’s study did not include technology from Amazon, which is one of the few major tech companies to sell facial recognition software to law enforcement. But Amazon’s program, Rekognition, has also been criticized for its accuracy. In 2018, the American Civil Liberties Union found that Rekognition incorrectly matched 28 members of Congress to faces picked from 25,000 public mugshots.

Another company, Clearview AI, came under heavy scrutiny earlier this year when it was discovered that its facial recognition tool, built with more than 3 billion images compiled in part by scraping social media sites, was being widely used by private companies and law enforcement agencies. Clearview has since been issued numerous cease-and-desist orders and is at the center of a number of privacy lawsuits. Facebook also agreed in January to pay $550 million to settle a class-action lawsuit over its unlawful use of facial recognition technology.

IBM has tried to address the issue of bias in facial recognition, releasing a public data set in 2018 designed to help reduce bias in the training data for facial recognition models. But in January 2019, IBM was found to be sharing a separate training data set of nearly one million photos taken from Flickr without the consent of the subjects, though the photos were shared under a Creative Commons license. IBM told The Verge in a statement at the time that the data set would only be accessed by verified researchers and only included images that were publicly available. The company also said that individuals can opt out of the data set.

In his letter, Krishna also advocated for police reform, arguing that more police misconduct cases should fall under the purview of federal courts and that Congress should make changes to the qualified immunity doctrine, among other measures. In addition, Krishna said that “we need to create more open and equitable pathways for all Americans to acquire marketable skills and training,” and he suggested that Congress consider scaling the P-TECH school model nationally and expanding eligibility for Pell Grants.

Update, June 9th, 2:45AM ET: This story has been updated to reference the work of AI researchers Joy Buolamwini and Timnit Gebru, whose 2018 Gender Shades project provided the first comprehensive empirical data on bias in facial recognition systems.