
Why AI can’t fix content moderation


Behind the Screen: Content Moderation in the Shadows of Social Media


Illustration by Alex Castro / The Verge

Content moderation is a long-standing challenge for big tech companies. Many of the issues surrounding content moderation have been reported on extensively by The Verge, and they’re now the focus of UCLA professor Sarah T. Roberts’ new book Behind the Screen: Content Moderation in the Shadows of Social Media.

Below is a lightly edited excerpt of Roberts and Verge editor-in-chief Nilay Patel’s discussion about why artificial intelligence is not the solution to the content moderation problem.

You can hear this and more in the latest episode of The Vergecast.

My colleague and Silicon Valley editor Casey Newton says to me, “If you live in a world where your dream is to replace human beings with math, then of course you’re going to treat the human beings poorly.” AI is designed to take the place of people. That’s why these content moderators are contractors. Have you encountered this sort of AI vision of content moderation? Have you seen attempts to build it? Do you think it works? 

First off, it’s a fundamental cultural and political orientation toward work. There is an inherent belief that those systems are somehow less biased, that they can scale better, and that they’re just somehow preferable. I would argue that a lot goes unsaid in that attitude. Here are some things that algorithms don’t do: they don’t form a union, they don’t agitate for better working conditions, and they don’t leak stories to journalists and academics. So we have to be very critical of that notion.

But yes, since 2010, as I looked at the work life and behavior of moderators on the job and what they were being asked to do, it was very clear to me that the processes they undertook were binary decision trees: if this is present, then do this; if this is not present in an adequate amount, then leave it and go to line 20. And that is an algorithmic kind of thinking that is not only endemic to the culture but would also lend itself easily to building a computational tool that could replicate it.
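As a purely illustrative aside: the kind of if/then rule chain Roberts describes could be sketched in a few lines of code. The rule names, thresholds, and actions below are hypothetical and invented for illustration; they are not drawn from any real platform’s policy.

```python
# Minimal, hypothetical sketch of the "binary decision tree" style of
# moderation work described above. All field names, rules, and thresholds
# are invented for illustration.

def moderate(post: dict) -> str:
    """Walk a simple if/then rule chain and return an action."""
    if post.get("contains_gore"):           # rule 1: explicit violence
        return "remove"
    if post.get("contains_nudity"):         # rule 2: nudity
        if post.get("is_newsworthy"):       #   exception branch
            return "escalate_to_senior_reviewer"
        return "remove"
    if post.get("report_count", 0) >= 3:    # rule 3: heavily reported
        return "queue_for_human_review"
    return "leave_up"                       # default: no rule matched

print(moderate({"contains_nudity": True, "is_newsworthy": False}))  # -> remove
```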

One of the trends I’ve been seeing more recently is that entirely new pockets of what I consider to be commercial content moderation work have opened up, often under another name. Now we see a bunch of humans whose full-time job, rather than dealing with live content on a platform, is to label datasets for machine learning tools, so that their decisions on a particular piece of content or a set of prescreened images are captured and fed back into a computational system in the hope of replicating, and then ultimately replacing, the humans.
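Again as an illustrative aside, the labeling pipeline described here, in which human reviewers’ decisions are captured as training examples for a machine learning tool, might look something like the following minimal sketch. The field names and file format are assumptions, not any company’s actual system.

```python
# Hypothetical sketch: store each human moderator's decision as a labeled
# example, then read the accumulated examples back for model training.
import json


def record_decision(log_path: str, content_id: str, text: str, decision: str) -> None:
    """Append one moderator decision as a labeled training example (JSON lines)."""
    example = {"content_id": content_id, "text": text, "label": decision}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(example) + "\n")


def load_training_data(log_path: str) -> list:
    """Read the accumulated (text, label) pairs for training a classifier."""
    pairs = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            ex = json.loads(line)
            pairs.append((ex["text"], ex["label"]))
    return pairs
```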

That said, if you talk to actual industry insiders who will speak candidly and who are actually working directly in this area, they will tell you that there is no time that they can envision taking humans entirely out of this loop. And I believe that to be true. If for no other reason than what I just described, we need human intelligence to train the machines right. 

And people are always going to try to defeat the algorithm. 

They’re going to try to defeat it. They’re going to try to game it. We can’t possibly imagine all the scenarios that will come online. And of course, those decisions need to be vetted at various points along the decision-making chain. At best, what we’ll have, and what we’ll continue to have, is a hybrid. But over the past few years, all I’ve seen is an increase in hiring, not a decrease.
