UK creates machine learning algorithm for small video sites to detect ISIS propaganda

The software will be offered to web platforms that can’t afford to develop their own tools

Iraqi forces pose for a photo in Hawija in October 2017 after retaking the city from ISIS. (Photo: AHMAD AL-RUBAYE/AFP/Getty Images)

The UK government has funded the creation of a machine learning algorithm that can be used to detect ISIS propaganda videos online.

It’s the latest move by the government to combat the distribution of extremist material on the internet. The tool was created by London-based startup ASI Data Science and cost £600,000 ($830,000) to develop. It will be offered to smaller video platforms and cloud storage sites like Vimeo and pCloud in order to vet their content. It won’t, however, be used by the biggest tech companies, including YouTube and Facebook, which are developing their own algorithms to detect extremist content.

According to ASI, the algorithm can detect 94 percent of ISIS propaganda with 99.99 percent accuracy, incorrectly identifying around 0.005 percent of the videos it scans. This means that on a site with 5 million videos uploaded each day, it would incorrectly flag 250 for review by human moderators. ASI isn’t publicly sharing the factors the software weighs in its decisions, but according to BBC News, the algorithm “draws on characteristics typical of IS and its online activity.” This might include visual cues, like logos, but also metadata, like where the video was uploaded from.
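The false-positive figure follows directly from the numbers quoted above. A minimal sketch of the arithmetic (the 0.005 percent rate and 5 million daily uploads come from the article; the variable names are illustrative):

```python
# Reported false-positive rate: 0.005 percent of scanned videos
false_positive_rate = 0.005 / 100

# Hypothetical site with 5 million uploads per day
daily_uploads = 5_000_000

# Videos incorrectly flagged for human review each day
flagged_for_review = daily_uploads * false_positive_rate
print(int(flagged_for_review))  # → 250
```

Note that this is a rate over *all* scanned videos, so the number of human reviews scales linearly with upload volume, not with the amount of actual propaganda on the site.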

“The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society,” said UK home secretary Amber Rudd. “We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images.”

Tech companies have come under increasing pressure in both the US and the UK to police the content uploaded to their sites. Last December, YouTube said it had removed more than 150,000 videos promoting violent extremism, and that its algorithms flagged 98 percent of suspect videos. Facebook went one better, saying its own system removes 99 percent of ISIS and al-Qaeda terror-related content.

However, experts caution that algorithmic approaches like this will never be a perfect solution for finding and removing this content. Instead, they create a cat-and-mouse game, with propagandists looking for new ways to evade automatic moderators and tech companies adapting in response.