
London police chief ‘completely comfortable’ using facial recognition with 98 percent error rate


The technology is being trialled in the UK but has so far led to zero arrests


Credit: Jonathan McIntosh / Creative Commons

The head of London’s Metropolitan Police force has defended the organization’s ongoing trials of automated facial recognition systems, despite legal challenges and criticisms that the technology is “almost entirely inaccurate.”

According to a report from The Register, UK Metropolitan Police commissioner Cressida Dick said on Wednesday that she did not expect the technology to lead to “lots of arrests,” but argued that the public “expect[s]” law enforcement to test such cutting-edge systems.

Facial recognition is used to scan the faces of crowds at public events

The Met’s use of automated facial recognition (AFR) technology is controversial. The London force is one of several in the UK trialling the technology, which is deployed at public events like concerts, festivals, and soccer matches. Mobile CCTV cameras scan the crowds, and the system tries to match the faces it captures to mugshots of wanted individuals.

But while facial recognition systems perform well in controlled environments (like photos taken at borders), they struggle to identify faces in the wild. According to data released under the UK’s Freedom of Information laws, 98 percent of the “matches” made by the Met’s AFR system are mistakes. (A previous version of this article referred to this as the “false positive rate,” but this was incorrect. A false positive rate is the probability that a test result known to be a negative is returned as a positive.)
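For a rough sense of the difference, consider a worked example with illustrative round numbers. The alert total and crowd size below are assumptions for the sake of the arithmetic, not figures reported by the Met; only the 98 percent figure and the two correct matches come from the data above.

\[
\frac{98\ \text{bad alerts}}{100\ \text{total alerts}} = 98\%
\quad\text{vs.}\quad
\frac{98\ \text{bad alerts}}{50{,}000\ \text{innocent faces scanned}} \approx 0.2\%
\]

The first ratio says that almost every alert the system raises is wrong; the second, much smaller ratio is what a “false positive rate” actually measures, which is why the two terms cannot be used interchangeably.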

The two correct matches the Met’s technology has made to date have led to zero arrests. One match was for an individual on an out-of-date watch list; the other was for a person with mental health issues who frequently contacts public figures but is not a criminal and is not wanted for arrest. The Met says that its AFR systems are constantly monitored by police officers, and that no individuals have been arrested because of a false match.

In China, police have even started using facial recognition-enabled sunglasses.
Credit: AFP/Getty Images

Despite this, Big Brother Watch, the organization that requested the UK data, warns that facial recognition technology is being deployed without proper scrutiny or public debate. The nonprofit says automated facial recognition risks turning public spaces into biometric checkpoints, and that the technology could have a chilling effect on free society, with individuals scared to join protests for fear of being misidentified and arrested.

Similar fears are being voiced in the US, where easy-to-use facial recognition tech like Amazon’s Rekognition system is being marketed and sold to law enforcement agencies around the country. A recent report on the topic from the advocacy group the Electronic Frontier Foundation (EFF) said that “face recognition is poised to become one of the most pervasive surveillance technologies.”

In the UK, there are two legal challenges underway questioning whether facial recognition technology undermines human rights to privacy and free expression. As The Register reports, when commissioner Dick was asked about this at a hearing this week, she replied that she was “completely comfortable” with the technology’s use, and that the Met’s lawyers were “all over it and have been from the beginning.”

Update July 9th, 09:00AM ET: This article and its headline have been corrected to remove the term “false positive rate.”