
Leaked moderation guidelines reveal how Facebook approaches handling graphic content


As Facebook grows, it will need to continually scale up its efforts to address graphic content.


Photo by Vjeran Pavic / The Verge

Earlier today, The Guardian published a series of reports about Facebook’s internal rules on moderating graphic content, providing new insight into how the company determines what its users can post. The series paints a picture of a social media network that is drowning in content, using rules that are at times seemingly contradictory.

The Guardian’s series, Facebook Files, reveals some of the site’s internal manuals concerning credible threats of violence, non-sexual child abuse, graphic violence, and cruelty to animals. The newspaper explains that it has reviewed more than 100 internal “training manuals, spreadsheets and flowcharts” that guide the site’s moderators when content is reported.

Facebook’s documents reveal a line the company is trying to navigate: providing a platform for free speech while also avoiding real-world harm. The site uses some automated systems to proactively remove content related to child sexual abuse or terrorism, but what’s left falls to teams of moderators. In the Credible Violence documents, the manual notes that “people commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.” It also provides examples of statements that are acceptable to keep on the site (“I’m going to kill you John!”) and ones that should be removed (“I’m going to kill you John, I have the perfect knife to do it!”).

Moderators have to determine whether a person is blowing off steam or making a serious threat

The guidelines ask moderators to determine the difference between someone blowing off steam and a serious threat, pointing to instances where posts detailing specific threats, timing, and methods are given priority over more general ones. The site also outlines specific categories of vulnerable individuals (such as heads of state or specific police officers) and groups (the homeless or Zionists), threats against which are automatically deleted or escalated.

There are significant gray areas, however. The site’s guidelines specifically state that “photos and videos documenting animal abuse” are permitted in order to raise awareness. While the site does “not action” images of child abuse, videos of such abuse are marked as disturbing. Footage of users attempting to harm themselves is also allowed, because the site “doesn’t want to censor or punish people in distress who are attempting suicide.”

The flood of content means that moderators feel overwhelmed, and can make mistakes

According to The Guardian, moderators review millions of reports of potentially violating content, and because of the volume, “moderators often feel overwhelmed by the number of posts they have to review – and they make mistakes, particularly in the complicated area of permissible sexual content.” Facebook’s head of global policy management, Monika Bickert, told The Guardian that the company’s diverse audience means there’s a wide range of what’s considered acceptable behavior, and that some comments might violate Facebook’s policies in some contexts, but not others.

In a statement provided to The Verge, Bickert noted that user safety is of utmost importance:

Keeping people on Facebook safe is the most important thing we do. Mark Zuckerberg recently announced that over the next year, we'll be adding 3,000 people to our community operations team around the world — on top of the 4,500 we have today — to review the millions of reports we get every week, and improve the process for doing it quickly. In addition to investing in more people, we're also building better tools to keep our community safe. We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.

In recent months, high-profile incidents, such as the removal of an image from the Vietnam War and a killing broadcast on Facebook Live, have prompted Facebook to adjust its policies and hire additional moderators to screen content. Such incidents have highlighted the ease with which graphic content can be posted and shared, and how the company’s responses have often been lacking. Earlier this month, Facebook reported that it has just shy of two billion users, which means it will need to continually scale up its protections for those users.