Facebook has partnered with law enforcement in the UK to obtain footage to train its automated content moderation tools. Starting in October, the UK’s Metropolitan Police Service will provide bodycam footage taken during its firearms training exercises, which Facebook will use to train its video recognition AI. The aim is to automatically identify footage of an attack, remove it, and notify the police. Facebook is currently exploring similar partnerships with law enforcement agencies in the US, according to the Financial Times.
The new initiative comes in the wake of Facebook’s inability to prevent a mass shooting from being live-streamed on its platform. Facebook said that the Christchurch shooting was viewed 200 times during its live broadcast, and 4,000 times in total before it was removed. In the 24 hours following the incident, Facebook says it removed 1.5 million videos of the attack from its platform. Of these, 1.2 million were blocked “at upload,” meaning 300,000 of them slipped through Facebook’s automated systems.
“The video of the attack in Christchurch did not prompt our automatic detection systems because we did not have enough content depicting first-person footage of violent events to effectively train our machine learning technology,” Facebook said in a press release. Getting more footage from law enforcement should improve these detection systems, Facebook says, and cut down on footage from video games or movies being incorrectly flagged.
The footage from the Metropolitan Police will include training drills of terrorist incidents and hostage situations on land, on public transport, and at water-based locations. The footage will also improve Facebook’s detection systems on Instagram, The Guardian reports. As well as sharing the footage with Facebook, the Metropolitan Police will pass it on to the UK’s Home Office to share with other technology firms. The Financial Times reports that Facebook will not pay the police for this footage, but will provide the cameras free of charge.
This isn’t the only change Facebook has made in the wake of the shootings. Back in May, the company imposed new restrictions on live-streaming with a “one strike” policy that bars users from its live-streaming service for a set period of time after a single violation of the platform’s community standards. The company says it is also using automated techniques to remove terrorist and hate organizations from its platform.