Why Elon Musk should read Facebook’s latest transparency report

If he’s going to own a social network, he needs to understand content moderation

Illustration by William Joel / The Verge

Today, let’s talk about Facebook’s latest effort to make the platform more comprehensible to outsiders — and how its findings inform our current, seemingly endless debate over whether you can have a social network and free speech, too.

Start with an observation: last week, Pew Research reported that large majorities of Americans — Elon Musk, for example! — believe that social networks are censoring posts based on the political viewpoints they express. Here’s Emily A. Vogels:

Rising slightly from previous years, roughly three-quarters of Americans (77%) now think it is very or somewhat likely that social media sites intentionally censor political viewpoints they find objectionable, including 41% who say this is very likely.

Majorities across political parties and ideologies believe these sites engage in political censorship, but this view is especially widespread among Republicans. Around nine-in-ten Republicans (92%), including GOP leaners, say social media sites intentionally censor political viewpoints that they find objectionable, with 68% saying this is very likely the case. Among conservative Republicans, this view is nearly ubiquitous, with 95% saying these sites likely censor certain political views and 76% saying this is very likely occurring.

One reason I find these numbers interesting is that of course social networks are removing posts based on the viewpoints they express. American social networks all agree, for example, that Nazis are bad and that you shouldn’t be allowed to post on their sites saying otherwise. This is a political view, and to say so should not be controversial.

Of course, that’s not the core complaint of most people who complain about censorship on social networks. Republicans say constantly that social networks are run by liberals, have liberal policies, and censor conservative viewpoints to advance their larger political agenda. (Never mind the evidence that social networks have generally been a huge boon to the conservative movement.)

And so when you ask people, as Pew did, whether social networks are censoring posts based on politics, they’re not answering the question you actually asked. Instead, they’re answering the question: for the most part, do the people running these companies seem to share your politics? And that, I think, more or less explains 100 percent of the difference in how Republicans and Democrats responded.

But whether on Twitter or in the halls of Congress, this conversation almost always takes place at the most abstract level. People will complain about individual posts that get removed, sure, but only rarely does anyone drill down into the details: which categories of posts are removed, in what numbers, and what the companies themselves have to say about the mistakes they make.

That brings us to a document that has a boring name, but is full of delight for those of us who are nosy and enjoy reading about the failures of artificial-intelligence systems: Facebook’s quarterly community standards enforcement report, the latest of which the company released today as part of a larger “transparency report” for the latter half of 2021.

An important thing to focus on, whether you’re an average user worried about censorship or someone who recently bought a social network promising to allow almost all legal speech, is what kind of speech Facebook removes. Very little of it is “political,” at least in the sense of “commentary about current events.” Instead, it’s posts related to drugs, guns, self-harm, sex and nudity, spam and fake accounts, and bullying and harassment.

To be sure, some of these categories are deeply enmeshed in politics — terrorism and “dangerous organizations,” for example, or what qualifies as hate speech. But for the most part, this report chronicles stuff that Facebook removes because it’s good for business. Over and over again, social products find that their usage shrinks when even a small percentage of the material they host includes spam, nudity, gore, or people harassing each other.

Usually social companies talk about their rules in terms of what they’re doing “to keep the community safe.” But the more existential purpose is to keep the community returning to the site at all. This is what makes Texas’ new social media law, which I wrote about yesterday, potentially so dangerous to platforms: it seemingly requires them to host material that will drive away their users.

At the same time, it’s clear that removing too many posts also drives people away. In 2020, I reported that Mark Zuckerberg told employees that censorship was the No. 1 complaint of Facebook’s user base.

A more sane approach to regulating platforms would begin with the assumption that private companies should be allowed to establish and enforce community guidelines, if only because their businesses likely would not be viable without them. From there, we can require platforms to tell us how they are moderating, under the idea that sunlight is the best disinfectant. And the more we understand about the decisions platforms make, the smarter the conversation we can have about what mistakes we’re willing to tolerate.

As the content moderation scholar Evelyn Douek has written: “Content moderation will always involve error, and so the pertinent questions are what error rates are reasonable and which kinds of errors should be preferred.”

Facebook’s report today highlights two major kinds of errors: ones made by human beings, and ones made by artificial intelligence systems.

Start with the humans. For reasons the report does not disclose, Facebook’s human moderators suffered “a temporary decrease in the accuracy of enforcement” on posts related to drugs between the last quarter of 2021 and the first quarter of this one. As a result, the number of people requesting appeals rose from 80,000 to 104,000, and Facebook ultimately restored 149,000 posts that had been wrongfully removed.

Humans arguably had a better quarter than Facebook’s automated systems, though. Among the issues with AI this time around:

  • Facebook restored 345,600 posts that had been wrongfully removed for violating policies related to self-harm, up from 95,300 the previous quarter, due to “an issue which caused our media-matching technology to action non-violating content.”
  • The company restored 414,000 posts that had been wrongfully removed for violating policies related to terrorism, and 232,000 related to organized hate groups, apparently due to the same issue.
  • The number of posts it wrongfully removed for violating policies related to violent and graphic content last quarter more than doubled, to 12,800, because automated systems incorrectly took down photos and videos of Russia’s invasion of Ukraine.

Of course, there was also good evidence that automated systems are improving. Most notably, Facebook took action on 21.7 million posts that violated policies related to violence and incitement, up from 12.4 million the previous quarter, “due to the improvement and expansion of our proactive detection technology.” That raises, uh, more than a few questions about what escaped detection in earlier quarters.

Still, Facebook shares much more about its mistakes than other platforms do; YouTube, for example, shares some information about videos that were taken down in error, but not broken down by category, and without any detail about the mistakes that were made.

And yet still there’s so much more we would benefit from knowing — from Facebook, YouTube, and all the rest. How about seeing all of this data broken down by country, for example? How about seeing information about more explicitly “political” categories, such as posts removed for violating policies related to health misinformation? And how about seeing it all monthly, rather than quarterly?

Truthfully, I don’t know that any of that would do much to shift the current debate about free expression. Partisans simply have too much to gain politically by endlessly crying “censorship” whenever any decision related to content moderation goes against them.

But I do wish that lawmakers would at least spend an afternoon enmeshing themselves in the details of a report like Facebook’s, which lays out both the business and technical challenges of hosting so many people’s opinions. It underscores the inevitability of mistakes, some of them quite consequential. And it raises questions that lawmakers could answer via regulations that might actually withstand First Amendment scrutiny, such as what rights to appeal a person should have if their post or account is removed in error.

There’s also, I think, an important lesson for Facebook in all that data. Every three months, according to its own data, millions of its users are seeing their posts removed in error. It’s no wonder that, over time, this has become the top complaint among the user base. And while mistakes are inevitable, it’s also easy to imagine Facebook treating these customers better: explaining the error in detail, apologizing for it, inviting users to submit feedback about the appeals process. And then improving that process.

The status quo, in which those users might see a short automated response that answers none of their questions, is a world in which support for the social network — and for content moderation in general — continues to decline. If only to preserve their businesses, the time has come for platforms to stand up for it.