Last week, the Everyday Sexism Project and other activists asked Facebook to address a long-standing issue: the proliferation of pages that celebrate beating, raping, or killing women. In addition to sending an open letter, the group embarked on an ambitious campaign to back up its words, urging advertisers whose ads appeared next to offending pages to boycott the site. The problem, they said, wasn't just that hateful pages were being allowed to stay up. It was that Facebook aggressively moderated things like nudity but turned a blind eye to violence against women, taking action only when protests grew too loud. Now, after Nissan UK and several other companies temporarily withdrew their ads, Facebook has promised to update its policies and moderation.
"In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate," the Facebook Safety team wrote today. "In some cases, content is not being removed as quickly as we want." In response, Facebook will begin rolling out a five-point plan designed to fix the problem. Among other things, it will review its terms of service for potential changes, change moderation teams' training to help them identify harmful content, and reach out to anti-discrimination groups for input. But perhaps the most unusual is a proposal to make people "stand behind the content they create."
A few months ago we began testing a new requirement that the creator of any content containing cruel and insensitive humor include his or her authentic identity for the content to remain on Facebook. As a result, if an individual decides to publicly share cruel and insensitive content, users can hold the author accountable and directly object to the content. We will continue to develop this policy based on the results so far, which indicate that it is helping create a better environment for Facebook users.
"Our systems to identify and remove hate speech have failed to work as effectively as we would like."
All sites with user-generated content walk a fine line between harmfully limiting speech and allowing privileged groups to attack marginalized ones without consequences. As valuable as the internet's vast freedom of expression is, pretty much all sites have some kind of guidelines, and Facebook already prohibits things it defines as harmful content or hate speech against a particular group. But until now, it has operated largely on a squeaky-wheel approach, seemingly applying its guidelines inconsistently or only taking a complaint seriously when it's accompanied by a larger backlash. Because of the site's size, moderation is often driven by user complaints, which means that a dedicated group of trolls can sometimes get an innocuous page taken down while an offensive one is left up.
Facebook will likely still have trouble drawing a line between unprotected "harmful content" and material that falls under the banner of "controversial humor," a category that is allowed on the site. During the boycott, Women, Action, & The Media posted a series of screenshots from pages they thought should be the former, many of them graphic celebrations of violence against women. But the response is a sign that Facebook, like other companies, is willing to act on criticism — even if it has to be backed up by boycotts.