Go read about how Facebook bends its rules for world leaders

‘How is Facebook enforcing its rules, and who is set up to benefit?’

Illustration by Alex Castro / The Verge

Facebook is an essentially unprecedented piece of technology. Every month, a single platform serves over 2.5 billion people a specially curated selection of advertisements, status updates from friends and family, and automated suggestions for making new connections. Critics often focus — fairly — on that combination of scale and automation, arguing that Facebook’s algorithms promote false news and extremist content.

But a feature story from The New Yorker paints a slightly different picture of Facebook’s problems — one that’s not simply rooted in its design or size. Building on extensive earlier reporting on Facebook’s moderation efforts, author Andrew Marantz talks to former employees about why hate speech and misinformation can spread on the platform. Over and over, a straightforward answer emerges: Facebook has twisted and weakened its rules to give powerful users a pass.

Marantz certainly points to issues with Facebook’s automation and its failure to craft clear standards for content moderators. But the story’s most vivid anecdotes involve Facebook clearly acknowledging that a problem exists, then evading it to avoid provoking political and commercial blowback.

Kicking off powerful users could create political blowback

In 2017, as US President Donald Trump was pushing to ban Muslims from entering the country, Facebook reportedly added a loophole to its rules banning hateful speech against a religious group. When Brazilian President Jair Bolsonaro said indigenous citizens were still “evolving and becoming” human beings, Facebook rejected a plea to remove the content as dehumanizing speech. If moderators flagged threatening posts from the far-right group Britain First, they risked getting their “quality score” docked when an auditor reversed the ban — because the group had a special “shielded” status.

Facebook doesn’t necessarily sympathize with the far right. But figures like Bolsonaro are some of Facebook’s biggest power users and success stories. As one former employee puts it, “It’s awkward for a business to go from a posture of ‘Please, sir, use our product,’ to ‘Actually, sir, you’re now in trouble for using our product.’” In this light, Facebook’s actions look less like a principled libertarian stance and more like Twitch soft-pedaling a ban on a popular streamer.

Many of the incidents in the article were first reported elsewhere. But Marantz’s story puts them together at a pivotal time, just weeks before the US presidential election. And a couple of new studies back up its underlying implications. Researchers at Harvard examined patterns of misinformation around voting and concluded that much of the spread was “elite-driven” — not pushed by anonymous trolls or scammers. Similarly, a Cornell University study concluded that Trump himself was a key source of COVID-19 misinformation. (Facebook has occasionally removed particularly controversial false statements from Trump.)

“The right question isn’t ‘Should Facebook do more or less?’”

Facebook CEO Mark Zuckerberg has raised real concerns about a powerful corporation policing public discourse. But Color of Change president Rashad Robinson points out that Facebook has willingly written plenty of rules about political speech. “The right question isn’t ‘Should Facebook do more or less?’ but ‘How is Facebook enforcing its rules, and who is set up to benefit from that?’” Robinson says. Over the summer, Color of Change used this message to help lead an advertiser boycott of Facebook — getting companies including Unilever and Coca-Cola to suspend their campaigns on the platform.

Likewise, it’s legitimately doubtful that Facebook could perfectly enforce its rules with any amount of human or AI effort. (Techdirt founder Mike Masnick has referred to this as “Masnick’s Impossibility Theorem.”) Even the most effective system might do something like ban an iconic war photo because it features nudity. But as a chaotic election approaches, talking about banning big, badly behaved accounts is a potentially more productive and massively simpler option — albeit one that might damage Facebook’s bottom line.