Google releases free AI tool to help companies identify child sexual abuse material

The software assists human moderators by sorting flagged content so the highest-priority cases are reviewed first

Illustration by Alex Castro / The Verge

Stamping out the spread of child sexual abuse material (CSAM) is a priority for big internet companies. But it’s also a difficult and harrowing job for those on the frontline — human moderators who have to identify and remove abusive content. That’s why Google is today releasing free AI software designed to help these individuals.

Most tech solutions in this domain work by checking images and videos against a catalog of previously identified abusive material. (See, for example, PhotoDNA, a tool developed by Microsoft and deployed by companies like Facebook and Twitter.) This sort of hash-matching software is an effective way to stop people from sharing known, previously identified CSAM, but it can’t catch material that hasn’t already been marked as illegal. For that, human moderators have to step in and review content themselves.
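As a rough illustration of how that catalog matching works, here is a minimal sketch that checks an image’s fingerprint against a hypothetical set of known hashes. Real systems such as PhotoDNA use robust perceptual hashes that tolerate resizing and re-encoding rather than the exact cryptographic hash used here, and the catalog name is made up for the example.

```python
# Minimal sketch of catalog matching against previously identified material.
# Real deployments (e.g. PhotoDNA) use perceptual hashes; the exact SHA-256
# below only illustrates the lookup flow, not the actual fingerprinting.
import hashlib

# Hypothetical catalog of fingerprints of previously identified abusive images.
KNOWN_FINGERPRINTS: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an image (exact hash here, perceptual in practice)."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_previously_identified(image_bytes: bytes) -> bool:
    """True if the image matches the catalog; anything new still needs human review."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS
```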

Google’s AI tool triages flagged material, helping moderators work faster

This is where Google’s new AI tool will help. Using the company’s expertise in machine vision, it assists moderators by sorting flagged images and videos and “prioritizing the most likely CSAM content for review.” This should allow for a much quicker reviewing process. In one trial, says Google, the AI tool helped a moderator “take action on 700 percent more CSAM content over the same time period.”
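In practice, that triage step amounts to ranking the review queue by the model’s confidence. The sketch below assumes a hypothetical score between 0 and 1 for each flagged item, standing in for the output of Google’s classifier (which has not been published), and simply orders items so moderators see the most likely matches first.

```python
# Sketch of the triage step: order flagged items so the content the model rates
# most likely to be CSAM is reviewed first. The scores are assumed to come from
# an image classifier; Google has not published its model or scoring details.

def build_review_queue(flagged: dict[str, float]) -> list[str]:
    """Given {item_id: model score in [0, 1]}, return IDs in review-priority order."""
    return sorted(flagged, key=flagged.get, reverse=True)

# Example: a moderator would open item "b" first, then "c", then "a".
print(build_review_queue({"a": 0.12, "b": 0.97, "c": 0.55}))
```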

Speaking to The Verge, Fred Langford, deputy CEO of the Internet Watch Foundation (IWF), said the software would “help teams like our own deploy our limited resources much more effectively.” “At the moment we just use purely humans to go through content and say, ‘yes,’ ‘no,’” says Langford. “This will help with triaging.”

The IWF is one of the largest organizations dedicated to stopping the spread of CSAM online. It’s based in the UK but funded by contributions from big international tech companies, including Google. It employs teams of human moderators to identify abuse imagery, and operates tip lines in more than a dozen countries for internet users to report suspect material. It also carries out its own investigative operations, identifying sites where CSAM is shared and working with law enforcement to shut them down.

Langford says that because of the “fantastical claims made about AI,” the IWF will test Google’s new tool thoroughly to see how it performs and how it fits into moderators’ workflow. He added that tools like this are a step towards fully automated systems that can identify previously unseen material with no human interaction at all. “That sort of classifier is a bit like the Holy Grail in our arena.”

But, he added, such tools should only be trusted with “clear cut” cases to avoid letting abusive material slip through the net. “A few years ago I would have said that sort of classifier was five, six years away,” says Langford. “But now I think we’re only one or two years away from creating something that is fully automated in some cases.”
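One way to picture the “clear cut” restriction Langford describes is a high confidence threshold: only items the model scores above it are actioned automatically, and everything else still goes to a human. The cutoff value and routing labels below are illustrative assumptions, not figures published by Google or the IWF.

```python
# Illustrative thresholding only: the 0.99 cutoff is an assumption for this sketch,
# not a published figure from Google or the IWF.
AUTO_ACTION_THRESHOLD = 0.99

def route(item_id: str, score: float) -> str:
    """Auto-action only the clearest cases; send everything else to human review."""
    if score >= AUTO_ACTION_THRESHOLD:
        return f"{item_id}: auto-action (clear-cut case)"
    return f"{item_id}: human review"

print(route("x1", 0.995))  # auto-action
print(route("x2", 0.80))   # human review
```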