
A new lawsuit may force YouTube to own up to the mental health consequences of content moderation



Facebook agreed to pay out $52 million to moderators suffering from PTSD and other conditions — and now YouTube is being asked to do the same



Illustration by Alex Castro / The Verge

For big tech platforms, one of the more urgent questions to arise during the pandemic’s early months was how the forced closure of offices would change their approach to content moderation. Facebook, YouTube, and Twitter all rely on huge numbers of third-party contract workers to police their networks, and traditionally those workers have worked side by side in big offices. When tech companies shuttered their offices, they closed down most of their content moderation facilities as well.

Happily, they continued to pay their moderators — even those who could no longer work, because their jobs required them to use secure facilities. But with usage of social networks surging and an election on the horizon, the need for moderation had never been greater. And so Silicon Valley largely shifted moderation duties to automated systems.

The question was whether it would work — and this week, we began to get some details. Both Facebook and YouTube had warned that automated systems would make more mistakes than human beings. And they were right. Here’s James Vincent in The Verge:

Around 11 million videos were removed from YouTube between April and June, says the FT, or about double the usual rate. Around 320,000 of these takedowns were appealed, and half of the appealed videos were reinstated. Again, the FT says that’s roughly double the usual figure: a sign that the AI systems were over-zealous in their attempts to spot harmful content.

As YouTube’s chief product officer, Neal Mohan, told the FT: “One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”

It turns out that automated systems didn’t take down a slightly higher number of videos — they took down double the number of videos. This is worth thinking about for all of us, but especially those who complain that technology companies censor too much content. For a lot of reasons — some of which I’ll get to in a minute — companies like YouTube are under increasing pressure to both remove more bad posts and to do so automatically. Those systems will surely improve over time, but the past few months have shown us the limits of that approach. They’ve also shown that when you pressure tech companies to remove more harmful posts — for good reasons — the tradeoff is an uptick in censorship.
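The scale of the problem is easier to see as arithmetic. A rough sketch using only the FT figures quoted above (the "usual" baselines are inferred from the FT's claim that both removals and successful appeals roughly doubled):

```python
# Back-of-the-envelope math on the FT figures quoted above.
removed = 11_000_000         # videos removed from YouTube, April-June 2020
appealed = 320_000           # takedowns that were appealed
reinstated = appealed // 2   # half of the appealed videos were reinstated

appeal_rate = appealed / removed   # share of removals that were appealed
win_rate = reinstated / appealed   # share of appeals that succeeded

# Per the FT, both removals and the appeal success rate ran at
# roughly double the usual figure.
usual_removed = removed // 2
usual_win_rate = win_rate / 2

print(f"{reinstated:,} videos reinstated on appeal")
print(f"{appeal_rate:.1%} of removals were appealed")
print(f"{win_rate:.0%} of appeals succeeded (vs ~{usual_win_rate:.0%} usually)")
```

In other words, roughly 160,000 videos were wrongly removed and then restored in a single quarter, and those are only the errors users bothered to contest.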

We almost never talk about those two pressures in tandem, and yet it’s essential for crafting solutions that we can all live with.

There’s another, more urgent tradeoff in content moderation: the use of automated systems that are error-prone but invulnerable, versus the use of human beings who are much more skilled but vulnerable to the effects of the job.

Last year, I traveled to Austin and to Washington, DC to profile current and former moderators for YouTube and Google. I spent most of my time with people who work on YouTube’s terror queue — the ones who examine videos of violent extremism each day to remove them from the company’s services. It was part of a year-long series I did about content moderators that attempted to document the long-term consequences of doing this work. And at YouTube, just as at Facebook, many of the moderators I spoke to suffer from post-traumatic stress disorder.

One of those moderators, whom I called Peter in the story, described his daily life to me this way:

Since he began working in the violent extremism queue, Peter noted, he has lost hair and gained weight. His temper is shorter. When he drives by the building where he works, even on his off days, a vein begins to throb in his chest.

“Every day you watch someone beheading someone, or someone shooting his girlfriend,” Peter tells me. “After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

I thought of Peter this week while reading about a proposed new lawsuit filed on behalf of workers like him. Here’s Queenie Wong at CNET:

A former content moderator is suing Google-owned YouTube after she allegedly developed depression and symptoms associated with post-traumatic stress disorder from repeatedly watching videos of beheadings, child abuse and other disturbing content.

“She has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind,” says the lawsuit, which was filed in a California superior court on Monday. The former moderator also can’t be in crowded places because she’s afraid of mass shootings, suffers from panic attacks and has lost friends because of her anxiety. She also has trouble being around kids and is now frightened to have children, according to the lawsuit. 

The law firm involved in the suit was also part of a similar suit against Facebook, Wong reported. That’s significant in large part because of what Facebook did in that case: agree to settle it, for $52 million. That settlement, which still requires final approval from a judge, applies only to Facebook’s US moderators. And with similar suits pending around the world, the final cost to Facebook will likely be much higher.

After talking to more than 100 content moderators at services of all sizes, I’ve come away convinced that the work can take a similar toll no matter where a moderator is employed. Only a fraction of employees may develop full-blown PTSD from viewing disturbing content daily, but others will develop other serious mental health conditions. And because tech companies have largely outsourced this work to vendors, that cost has largely been hidden from them.

I asked YouTube what it made of the new lawsuit.

“We cannot comment on pending litigation, but we rely on a combination of humans and technology to remove content that violates our Community Guidelines, and we are committed to supporting the people who do this vital and necessary work,” a spokesman said. “We choose the companies we partner with carefully and work with them to provide comprehensive resources to support moderators’ well-being and mental health, including by limiting the time spent each day reviewing content.”

Facebook told me all the same things, before agreeing to pay out $52 million.

Anyway, I write about these stories in tandem today to highlight just how hard the tradeoffs are here. Rely too much on machines and they’ll remove lots of good speech. Rely too much on human beings and they’ll wind up with debilitating mental health conditions. So far, no global-scale technology company has managed to get this balance right. In fact, we still have no real agreement on what getting it “right” would even look like.

We do know, however, that employers are responsible for protecting their moderators’ health. It took a lawsuit from contractors to get Facebook to acknowledge the harms of moderating extremist content. And when this new lawsuit is ultimately resolved, I’d be surprised if YouTube weren’t forced to acknowledge that, too.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: Facebook removed networks of accounts run from China that promoted and criticized both President Trump and Joe Biden. It’s the company’s first takedown of Chinese accounts aimed at US politics. (Craig Timberg / The Washington Post)

🔼 Trending up: Twitter rolled out its biggest push yet to encourage Americans to vote. The company is prompting every person in the United States to register to vote Tuesday, which was National Voter Registration Day. (Jessica Guynn / USA Today)


Russian President Vladimir Putin is “probably directing” a foreign influence operation to interfere in the 2020 presidential election against Joe Biden, according to a CIA assessment. The assessment describes how Ukrainian lawmaker Andriy Derkach is disseminating negative information about Biden in the US. Josh Rogin at The Washington Post has the story:

The CIA assessment described Derkach’s efforts in detail and said that his activities have included working through lobbyists, members of Congress and U.S. media organizations to disseminate and amplify his anti-Biden information. Though it refers to Derkach’s interactions with a “prominent” person connected to the Trump campaign, the analysis does not identify the person. Giuliani, who has been working with Derkach publicly for several months, is not named in the assessment. [...]

On Sept. 10, following calls from Democratic lawmakers, the Treasury Department sanctioned Derkach, alleging that he “has been an active Russian agent for over a decade, maintaining close connections with the Russian Intelligence Services.” Treasury Secretary Steven Mnuchin said in a Sept. 10 statement that “Derkach and other Russian agents employ manipulation and deceit to attempt to influence elections in the United States and elsewhere around the world.” The Treasury Department stated Derkach “waged a covert influence campaign centered on cultivating false and unsubstantiated narratives concerning U.S. officials in the upcoming 2020 Presidential Election,” which he did by releasing edited audio tapes and other unsupported information that were then pushed in Western media.

The Justice Department is expected to brief state attorneys general this week about its plans to file an antitrust lawsuit against Google. The probe initially focused on the company’s advertising business but has since come to encompass its dominance in online search. Here’s Tony Romm at The Washington Post:

The department had been eyeing a September lawsuit against Google. U.S. Attorney General William P. Barr this summer sought to speed up the agency’s work, overruling dozens of federal agents who said they needed additional time before they could file a case against Google, The Washington Post previously reported.

A conservative advocacy group is running misleading ads on Google about voter fraud in Florida. While the headline reads “Florida Election Officials Busted For Massive Voter Fraud,” the body of the ad talks about education initiatives. (Election Integrity Partnership)

Facebook said it will “restrict the circulation of content” on its platform if the US presidential election descends into chaos or violent civic unrest. (Reuters)

Big Tech companies could do more to protect the election, this piece argues. They could make Election Day a company holiday, pay for personal protective equipment for poll workers, and give discounted rides to the polls, to name a few. (Charlie Warzel / The New York Times)

The family that organized the pro-Trump rally in Portland that resulted in a fatal shooting had almost no political profile prior to the event. They rallied Trump supporters using online accounts, including on Facebook, that didn’t reveal their full names. (Isaac Stanley-Becker, Joshua Partlow and Carissa Wolf / The Washington Post)

TikTok removed 104.5 million videos in the first half of this year for violating its community guidelines or terms of service, according to the latest transparency report. The company also received nearly 1,800 legal requests and 10,600 copyright takedown notices. (Ingrid Lunden / TechCrunch)

YouTube is rolling out AI-powered tools to catch more videos that may require age restrictions. The move will likely mean more viewers will be asked to sign into their accounts to verify their age before watching. (Julia Alexander / The Verge)

Prop. 24 was supposed to patch the holes in the California Consumer Privacy Act. But privacy advocates disagree about whether it would expand consumer privacy or restrict it. (Gilad Edelman / Wired)

The Los Angeles Police Department has used facial recognition software nearly 30,000 times since 2009, despite repeatedly denying that it uses the software at all. Civil liberties advocates say the denials are part of a pattern of deception from the department. (Kevin Rector and Richard Winton / Los Angeles Times)

A court in Australia is ordering a conspiracy theorist to pay nearly a million dollars to a politician she defamed on Facebook. The conspiracy theorist said, with no evidence, the politician was part of a pedophile network, an idea aligned with QAnon. (Cam Wilson / Gizmodo)


Instagram co-founders Kevin Systrom and Mike Krieger built a COVID-19 tracker to help people understand how the virus spreads. It’s become a crucial way to understand how relaxing shelter-in-place restrictions impacts the pandemic. Here’s a bit from their interview with Wired’s Steven Levy:

Levy: The tracker is interesting because at times it doesn’t seem in sync with the caseload numbers you see reported other places. A few months ago, for instance, New York was over 4 on your chart and obviously in trouble. Now everyone says it’s one of the safest places. Yet I checked today and the R was over 1.07. That’s bad! Does that mean the numbers are going to go up?

Systrom: Yes, and in fact they are going up. Think of it as a forest fire — what we’re dealing with in California right now. If you have an enormous block of land on fire and it’s growing quickly, that is really bad. If you have a small plot of land on fire that’s growing at the same rate as that big one, that is also really bad, but it’s less bad because you are starting from a smaller base. So four is really, really bad — it squares with what you felt in March, with New York being a place that was reporting at peak somewhere over 10,000 positive tests per day. But then shelter started, and very quickly you saw infections start to drop. So R went below 1.0, which is good. It means the virus is under control. What you are experiencing in New York right now [with a 1.07 R] is that it’s a small fire growing, but not nearly at the rate that it was back in March. Remember, I said 1.0 is the smoldering level.
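Systrom’s fire analogy maps onto simple exponential arithmetic: R is the number of new infections each case produces, so each generation of infections is R times the size of the last. A minimal sketch of the difference between R = 4 and R = 1.07 (the starting caseload is an illustrative assumption, not a figure from the interview):

```python
def project_cases(daily_cases: float, r: float, generations: int) -> float:
    """Project daily case counts forward by whole generations of
    infection, each generation R times the size of the last."""
    return daily_cases * r ** generations

# A big fire (New York in March, R over 4 per the interview) versus
# a small smoldering one (R = 1.07), both starting from the same
# illustrative baseline of 1,000 daily cases.
march = project_cases(1_000, 4.0, 3)   # three generations at R = 4
now = project_cases(1_000, 1.07, 3)    # three generations at R = 1.07

print(f"R=4.00 -> {march:,.0f} daily cases after 3 generations")
print(f"R=1.07 -> {now:,.0f} daily cases after 3 generations")
```

At R = 4 the same starting point explodes to 64,000 daily cases in three generations, while at R = 1.07 it creeps up by about 23 percent — growing, as Systrom says, but nothing like March.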

Morgan Beller, who led Facebook’s Libra strategy, is joining the venture capital firm NFX as its fourth general partner. Beller is known as the person who convinced Facebook to launch its own currency. The fact that she’s leaving before it launches would not seem to bode well. (Melia Russell / Business Insider)

Companies are capitalizing on the “link in bio” trend to allow creators to design simple websites. The need to link out to other areas of the web has grown during the pandemic, when people are promoting resources and side hustles. (Ashley Carman / The Verge)

Ruth Porat, chief financial officer of Alphabet and Google, is leading the organizations’ $800 million small business and COVID-19 relief effort. “It’s so clear that small businesses are the lifeblood of the American economy,” she says. (Maneet Ahuja / Forbes)

And finally...

Talk to us

Send us tips, comments, questions, and non-violating YouTube videos.