London police chief ‘completely comfortable’ using facial recognition with 98 percent error rate

The head of London’s Metropolitan Police force has defended the organization’s ongoing trials of automated facial recognition systems, despite legal challenges and criticisms that the technology is “almost entirely inaccurate.”

According to a report from The Register, UK Metropolitan Police commissioner Cressida Dick said on Wednesday that she did not expect the technology to lead to “lots of arrests,” but argued that the public “expect[s]” law enforcement to test such cutting-edge systems.

The Met’s use of automated facial recognition technology (AFR) is controversial. The London force is one of several in the UK trialling the technology, which is deployed at public events like concerts, festivals, and soccer matches. Mobile CCTV cameras scan crowds, and the system tries to match the faces it captures against mugshots of wanted individuals.

But while facial recognition systems perform well in controlled environments (like photos taken at borders), they struggle to identify faces in the wild. According to data released under the UK’s Freedom of Information laws, 98 percent of the “matches” made by the Metropolitan Police’s AFR system are mistakes. (A previous version of this article referred to this as the “false positive rate,” but this was incorrect. A “false positive rate” is the probability that a test result known to be a negative is returned as a positive.)
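
To see why those two measures differ, here is a minimal sketch with entirely invented counts (the figures below are assumptions for illustration, not the Met’s published data). The point: when genuinely wanted faces are rare in a crowd, even a system with a tiny false positive rate can produce alerts that are almost all mistakes.

```python
# Illustrative only: these counts are invented to show the arithmetic,
# not taken from the Met's trials.
faces_scanned = 100_000   # hypothetical crowd scanned at an event
true_matches = 2          # alerts that really were on the watch list
false_matches = 98        # alerts that turned out to be mistakes
innocent_faces = faces_scanned - true_matches

# The 98 percent figure in this article: the share of alerts that are
# wrong, sometimes called the false discovery rate.
false_discovery_rate = false_matches / (false_matches + true_matches)

# The false positive rate: the share of innocent faces wrongly flagged.
false_positive_rate = false_matches / innocent_faces

print(f"Wrong alerts as a share of all alerts: {false_discovery_rate:.0%}")  # 98%
print(f"Innocent faces wrongly flagged: {false_positive_rate:.3%}")          # 0.098%
```

On these assumed numbers, the system mislabels roughly one face in a thousand, yet 98 percent of its alerts are still wrong, which is why the two rates must not be conflated.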

The Met’s technology has made just two correct matches to date, and neither led to an arrest. One match was of an individual on an out-of-date watch list; the other was of a person with mental health issues who frequently contacts public figures, but who is not a criminal and is not wanted for arrest. The Met says that AFR systems are constantly monitored by police officers, and that no individuals have been arrested because of a false match.

In China, police have even started using facial recognition-enabled sunglasses. (Credit: AFP/Getty Images)

Despite this, Big Brother Watch, the organization that requested the UK data, warns that facial recognition technology is being deployed without proper scrutiny or public debate. The non-profit says automated facial recognition risks turning all public spaces into biometric checkpoints, and that the technology could have a chilling effect on free society, with individuals scared to join protests for fear of being misidentified and arrested.

Similar fears are being voiced in the US, where easy-to-use facial recognition tech like Amazon’s Rekognition system is being marketed and sold to law enforcement agencies around the country. A recent report on the topic from the advocacy group EFF said “face recognition is poised to become one of the most pervasive surveillance technologies.”

In the UK, there are two legal challenges underway questioning whether facial recognition technology undermines human rights to privacy and free expression. As The Register reports, when commissioner Dick was asked about this at a hearing this week, she replied that she was “completely comfortable” with the technology’s use, and that the Met’s lawyers were “all over it and have been from the beginning.”

Update July 9th, 09:00AM ET: This article and its headline have been corrected to remove the term “false positive rate.”

Comments

Dick said on Wednesday that she did not expect the technology to lead to "lots of arrests," but argued that the public "expect[s]" law enforcement to test such cutting-edge systems.

Nope, we expect that you should assess such systems – and if they look like a waste of time, you should not waste our money testing them. What a dick…..

How do you know they "look" like a waste of time without proper testing?

Because some biased organization releases their stats?

Testing doesn’t cost nearly as much money as manpower and equipment supplied to thousands of agents and officers.

Stick it out your window and have it try and identify your own employees. Obviously, if it fails, it sucks.

Testing doesn’t cost nearly as much money as manpower and equipment supplied to thousands of agents and officers.

You have to buy and deploy something before you can test it. That’s exceptionally expensive. Generally you only buy something when you have a good level of confidence it’s a product actually fit for purpose, which is why products are evaluated beforehand.

What appears to be happening here is an ideological push to spend money on unproven and unreliable technology in order to drive its development. Product development is not the job of the Met. Their job is to police London.

They’ll have another Jean Charles de Menezes soon.

Which had nothing to do with facial recognition and everything to do with poor command and control.

That is the whole point of a trial period: to see if and how it can be used. This technology won’t get better nearly as quickly if there aren’t some attempts to use it in the real world.

If used properly, I am perfectly fine with the police using facial recognition software. Day-to-day usage wouldn’t accomplish much, at least here in the States; we do not have a large enough monitoring system, i.e. cameras, set up to make it worth it. But having the software available so that feeds can be run through it, such as when there has been an incident and you are trying to identify the culprit, would be great.

Or even at large events, to make sure unwanted, dangerous people do not get in. It would not keep track of who was there, just whether or not certain people entered.

I can understand the worries, but in many of these cases it is no different than a police officer sitting in front of the screens looking for certain faces.

The problem with this is that it normalizes the idea that people should be monitored by the government at all times, which is obviously insane.

If someone is brought into a station because it’s believed (with good evidence) that they may have committed a crime, then sure – they’re now directly under investigation, and a scan to see if they’re already on a wanted list somewhere is fine.

Aside from that, fuck no. Nobody should be okay with that.

We already all have the expectation of being monitored by the government, by corporations, and by advertisers. Do you want your monitoring to be less accurate or more accurate? That is the only relevant question right now. I do not state this lightly, nor do I have a strong opinion either way, because there are just as many good reasons to make the monitoring less accurate as there are to make it more accurate.

"We already all have the expectation of being monitored by the government, by corporations and by advertisers."

Who’s "we"? I certainly have no such expectation.

Just "nope".

… despite legal challenges and criticisms that the technology is "almost entirely inaccurate."

That assumes that London’s Metropolitan Police force actually wants to arrest criminals. Its hostility toward citizens who report crimes like forced entry into homes, and its inability to do anything about the city’s skyrocketing violence, suggest it does not.

Judged by their behavior, London’s police officials have a different priority. They want to disarm (and dis-knife) ordinary citizens and monitor their every action in order to render them frightened, sheep-like, and easily manipulated. Omnipresent cameras, however ineffective at capturing actual criminals, do that.

As others have pointed out, many in the British government seem to regard Orwell’s 1984 as a "how to" manual.
