Undercover Facebook moderator was instructed not to remove fringe groups or hate speech

A new documentary details how third-party Facebook moderators ignore the company’s rules

Illustration by William Joel / The Verge

An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact.

The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, which just aired on the UK’s Channel 4. The investigation outlines questionable practices at CPL Resources, a third-party content moderation firm based in Dublin that Facebook has worked with since 2010.

The journalist went undercover at third-party moderation firm CPL Resources

Those questionable practices primarily involve a hands-off approach to flagged and reported content like graphic violence, hate speech, and racist and other bigoted rhetoric from far-right groups. The undercover reporter says he was also instructed to ignore users who appeared to be under 13 years of age, the minimum age to sign up for Facebook under the Children’s Online Privacy Protection Act, a 1998 US privacy law that restricts how online services collect data from young children. The documentary insinuates that Facebook takes a hands-off approach to such content, including blatantly false stories parading as truth, because it keeps users engaged for longer and drives up advertising revenue.

Earlier today, Facebook attempted to preempt negative reactions to the documentary by publishing a blog post and writing a lengthier, more detailed letter in the same vein to the Scotland-based production company Firecrest Films, which produced the documentary in partnership with Channel 4. Facebook says it will be updating its training material for all content moderators, reviewing training practices across all teams and not just CPL, and reviewing the staff at CPL to “ensure that anyone who behaves in ways that are inconsistent with Facebook’s values no longer works to review content on our platform.”

“We take these mistakes incredibly seriously and are grateful to the journalists who brought them to our attention. We have been investigating exactly what happened so we can prevent these issues from happening again,” wrote Monika Bickert, Facebook’s vice president of global policy management, in the blog post. “For example, we immediately required all trainers in Dublin to do a re-training session — and are preparing to do the same globally. We also reviewed the policy questions and enforcement actions that the reporter raised and fixed the mistakes we found.”

Facebook refuses to classify fake news as a violation of its rules

Notably, the documentary is airing just after a congressional hearing about the banning of hyperpartisan accounts on social media. The hearing involved a bipartisan group of lawmakers on the House Judiciary Committee who grilled members of Facebook and other tech companies over why organizations like Alex Jones’ Infowars are allowed to repeatedly publish false information on their platforms. Infowars has pushed dangerous conspiracy theories like Pizzagate, which resulted in a man discharging a firearm in a Washington, DC pizza parlor, and has terrorized the parents of Sandy Hook Elementary School shooting victims by claiming the shooting was staged. The Infowars Facebook page has nearly 1 million likes, while Jones republishes many of its stories on his personal page, which has more than 1.5 million followers.

“If they [InfoWars] posted sufficient content that it violated our threshold, the page would come down,” Bickert, who attended the hearing today, told Rep. Ted Deutch (D-FL). “That threshold varies depending on the severity of different types of violations.” With regard to Infowars, Bickert says it has “not reached the threshold.” Facebook’s approach to fake news, it seems, is not to consider its publication a violation of its terms of service or its content policies.

And as the Channel 4 documentary makes clear, that threshold appears to be an ever-shifting metric, applied inconsistently across partisan lines and between legitimate media organizations and outlets that peddle fake news, propaganda, and conspiracy theories. It’s also unclear how Facebook is able to enforce its policy with third-party moderators all around the world, especially when they may be influenced by any number of performance metrics and personal biases.

Facebook leadership is terrified of being seen as biased against conservatives

Bickert, in her blog post, attempts to argue against the notion that these pages are good for Facebook’s bottom line. “It has been suggested that turning a blind eye to bad content is in our commercial interests. This is not true. Creating a safe environment where people from all over the world can share and connect is core to Facebook’s long-term success,” she writes. “If our services aren’t safe, people won’t share and over time would stop using them. Nor do advertisers want their brands associated with disturbing or problematic content.”

Bickert’s comments are reasonable, but the more likely explanation for Facebook’s lax moderation of far-right pages is that its leadership is terrified of being seen as biased against conservatives, following the 2016 episode in which members of the company’s in-house news team said they kept conservative viewpoints out of Facebook’s Trending Topics list. (Facebook removed the Trending Topics feature last month after two years of constant tinkering and controversy.)

Meanwhile, Facebook is ramping up efforts in its artificial intelligence division, with the hope that one day algorithms can solve these pressing moderation problems without any human input. Earlier today, the company said it would be accelerating its AI research efforts to include more researchers and engineers, as well as new academic partnerships and expansions of its AI research labs in eight locations around the world. Earlier this month, Facebook acquired a London-based AI startup to help it with natural language processing, and the company also poached a top Google engineer to help it design its own chips, potentially for AI software purposes.

“AI has become so central to the operations of companies like ours, that what our leadership has been telling us is: ‘Go faster. You’re not going fast enough,’” Yann LeCun, Facebook’s chief AI scientist, told The Washington Post regarding the AI acceleration at Facebook. The long-term goal of the company’s AI division is to create “machines that have some level of common sense” and that learn “how the world works by observation, like young children do in the first few months of life.”

Correction: An earlier version of this article stated that the investigative journalist went undercover in the UK. That is incorrect; the third-party Facebook moderation firm is based in Dublin, which is not part of the UK. We regret the error.