New rules challenge Google and Facebook to change the way they moderate users

The Santa Clara Principles set a new standard for platform moderation, but will Google and Facebook take notice?

Illustration by William Joel / The Verge

Over the past several years, content moderation has reached a breaking point. We’ve seen all manner of ugliness thrive on platforms like Facebook, Twitter, and YouTube, whether it’s coordinated harassment, impostor accounts, foreign political influence, or bizarre algorithmic chum. At the same time, inconsistent and sometimes heavy-handed moderation has become an increasingly partisan issue, with conservative celebrities appearing before Congress to make murky claims about censorship. In both cases, the loss of trust is palpable, fueled by an underlying lack of transparency. A huge proportion of the world’s speech happens on closed platforms like Facebook and YouTube, and users still have little control or awareness of the rules governing that speech.

Today, a coalition of nonprofit groups tried to address that gap with a list of basic moderation standards called the Santa Clara Principles on Transparency and Accountability in Content Moderation, designed as a set of minimum standards for how to treat user content online. The final product draws on work from the American Civil Liberties Union, Electronic Frontier Foundation, Center for Democracy & Technology, and New America’s Open Technology Institute, as well as a number of independent experts. Together, they call for more thorough notice when a post is taken down, a stronger appeals process, and new transparency reporting around the total number of posts and accounts suspended.

They’re simple measures, but they give users far more information and recourse than they currently get on Facebook, YouTube, and other platforms. The result is a new road map for platform moderation — and an open challenge to any company moderating content online.

Under the Santa Clara rules, any time an account or other content is taken down, the user would get a specific explanation of how and why the content was flagged, with a reference to the specific guideline they had violated. The user could also challenge the decision, presenting new evidence to a separate human moderator on appeal. Companies would also present a regular moderation report modeled after current reports on government data requests, listing the total number of accounts flagged and the justification for each flag.

“What we’re talking about is basically the internal law of these platforms,” says Open Technology Institute director Kevin Bankston, who worked on the document. “Our goal is to make sure that it’s as open a process as it can possibly be.”

So far, companies have been silent on the new guidelines. Google and Twitter declined to comment on the new rules; Facebook did not respond to multiple requests.

But while companies have yet to weigh in on the Santa Clara rules, some are inching toward similar measures on their own. Facebook published its full moderation guidelines for the first time last month, laying out the specific rules on violence and nudity that had guided its decisions for years. The company also created its first formal appeals process for users who believe they’ve been suspended in error.

YouTube is closer to complying with the rules, although it still falls short on transparency. The platform already has a notice and appeals process, and its guidelines have been public from the beginning. YouTube released its first quarterly moderation report in April, detailing the 8.2 million videos removed during the last quarter of 2017. But while the report breaks down human-flagged removals by the policy involved, it doesn’t give the same detail for content flagged by the automated systems responsible for the bulk of removals on YouTube.

The Santa Clara document is limited to process issues, sidestepping many of the thorniest questions around moderation. The rules don’t speak to what content should be removed or when a given post can justly be considered a threat to user safety. Nor do they deal with political speech or carve-outs for newsworthiness, like Twitter’s controversial world leaders policy.

But many of the experts involved say the rules are more of a minimum set of standards than a final list of demands. “I’ve been very critical of some specific policies — from nudity to terrorism,” says Jillian C. York, who worked on the rules for EFF. “Ultimately, though, I don’t believe content moderation is going away anytime soon, and so mediating it through transparency and due process is a great start.”