San Francisco says it will use AI to reduce bias when charging people with crimes

The district attorney’s office is calling it a ‘first-in-the-nation’ use

Photo by Justin Sullivan/Getty Images

San Francisco is announcing a “bias mitigation tool” that uses basic AI techniques to automatically redact information from police reports that could identify a suspect’s race. It’s designed to keep prosecutors from being influenced by racial bias when deciding whether someone gets charged with a crime. The tool is scheduled to be implemented on July 1st.

The tool will strip out not only descriptions of race but also descriptors like eye color and hair color, according to the SF district attorney’s office. It also removes the names of people, locations, and neighborhoods that might consciously or unconsciously tip off a prosecutor that a suspect is of a certain racial background.

“When you look at the people incarcerated in this country, they’re going to be disproportionately men and women of color,” SF District Attorney George Gascón said in a media briefing today. He pointed out that seeing a name like Hernandez can immediately tell prosecutors that a person is of Latino descent, potentially biasing the outcome.

A DA spokesperson tells The Verge that the tool will remove details about police officers, too, including their badge number, in case the prosecutor happens to know them and might be biased toward or against their report.

Currently, San Francisco uses a much more limited manual process to keep prosecutors from seeing these details: the city removes only the first two pages of the document, and prosecutors see the rest of the report in full. “We had to create machine learning around this process,” Gascón said. The district attorney’s office is calling this a “first-in-the-nation” use of the technology, saying it’s unaware of any agency having used AI for this purpose before.

The tool was built by Alex Chohlas-Wood and a team at the Stanford Computational Policy Lab, which also helped develop the NYPD’s Patternizr system for automatically searching case files to find patterns of crime. Wood says the new tool is essentially a lightweight web app that runs several algorithms over a police report, recognizing words with computer vision and replacing them with generic versions like Location, Officer #1, and so on.

Wood says the tool is in the final stages, was developed at no cost to SF, and will be open-sourced in a matter of weeks for others to adopt. He says it uses a specific technique called named-entity recognition, among other components, to identify what to redact.
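
Neither the DA’s office nor the Stanford team has released the code yet, but named-entity recognition makes the basic idea easy to sketch. The example below is a minimal illustration using Python and the spaCy library, which is purely an assumption for illustration; the article doesn’t say which libraries, models, or entity categories the real tool uses.

```python
# Illustrative NER-based redaction sketch, not the DA's actual tool.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

# Entity labels treated as potentially identifying (an assumption; the
# real tool's categories and models are not public yet).
REDACT_LABELS = {
    "PERSON": "Person",
    "GPE": "Location",   # cities, neighborhoods, states
    "LOC": "Location",
    "FAC": "Location",   # buildings, streets
    "NORP": "Group",     # nationalities, religious/political groups
}

nlp = spacy.load("en_core_web_sm")

def redact(report_text: str) -> str:
    """Replace named entities with generic placeholders like '[Person #1]'."""
    doc = nlp(report_text)
    counters, seen, out, last = {}, {}, [], 0
    for ent in doc.ents:
        label = REDACT_LABELS.get(ent.label_)
        if label is None:
            continue
        # Reuse the same placeholder for repeated mentions of one entity.
        key = (label, ent.text.lower())
        if key not in seen:
            counters[label] = counters.get(label, 0) + 1
            seen[key] = f"[{label} #{counters[label]}]"
        out.append(report_text[last:ent.start_char])
        out.append(seen[key])
        last = ent.end_char
    out.append(report_text[last:])
    return "".join(out)

print(redact("Officer Smith detained Mr. Hernandez near the Mission District."))
# Output depends on the model, roughly:
# "Officer [Person #1] detained Mr. [Person #2] near [Location #1]."
```

In practice, entity tagging alone would miss details the DA’s office says it wants removed, such as badge numbers and hair or eye color, so a real tool would likely layer pattern matching and custom word lists on top of this kind of approach.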

Without seeing the system work on real police reports (for legal reasons, the DA’s office said it had to show us a mock-up), it’s unclear how well it might work. When a journalist asked whether it would redact other descriptions, such as cross-dressing, Gascón could only say that today is a starting point and that the tool will evolve. The tool is also used only for the first charging decision in a given arrest; prosecutors’ final decisions will be based on the full, unredacted report. And if the initial charging decision is based on video evidence, that footage may obviously reveal a suspect’s race as well.

The decision to charge someone with a crime is just one relatively minor place where police bias comes up. An officer’s decision to arrest a suspect, or worse, happens long before this process. And one journalist in the audience pointed out that a 2017 study found “people of color receive more serious charges at the initial booking stage,” which can happen many hours before the district attorney’s office steps in with its decision.

It’ll be interesting to see whether the new tool helps. Currently, AI is better known for introducing biases than for removing them, particularly in a controversial practice known as “predictive policing.”