Facebook says politicians don’t have to follow its normal posting guidelines, unless they’re running an ad. The company clarified the rules around politicians’ content today, saying that “it is not our role to intervene when politicians speak.” However, they may still have posts removed if the content could “lead to real world violence and harm.”
The rule echoes a similar policy from Twitter, and it builds on previous Facebook rules about newsworthiness and fact-checking. But especially in the lead-up to the 2020 US presidential election, Facebook is emphasizing that it wants to stay away from political disputes.
Facebook communications head (and former UK deputy prime minister) Nick Clegg explained the policy in a speech at the Atlantic Festival, then expanded on the speech with a blog post. “If someone makes a statement or shares a post which breaks our community standards, we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm,” Clegg writes. “From now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.”
There are two exceptions: paid advertisements must still follow Facebook’s community guidelines, and speech that incites violence is unacceptable in either case.
Clegg elaborated in his blog post:

“It’s not new that politicians say nasty things about each other – that wasn’t invented by Facebook. What is new is that now they can reach people with far greater speed and at a far greater scale. That’s why we draw the line at any speech which can lead to real world violence and harm.

“I know some people will say we should go further. That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse.

“Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves.”
Saying something can “lead to” violence is pretty nebulous, and Clegg didn’t define “politician” in his speech. Based on this description, Facebook could choose to only remove direct threats against a specific person and protect anyone who declared themselves a political candidate at any level of government — or it could interpret the rules more strictly.
Clegg said that at Facebook, “we are champions of free speech and defend it in the face of attempts to restrict it. Censoring or stifling political discourse would be at odds with what we are about.” That’s implicitly a response to US conservative politicians, who have accused Facebook of political bias based on sketchy and anecdotal evidence.
Posts with “previously debunked content” will still get demoted
As Clegg noted, Facebook already stated last year that it won’t send politicians’ claims to fact-checking organizations. But if a politician shares “previously debunked content,” it will be demoted and paired with fact-checker notes — just like it would be on the rest of the platform. This would theoretically stop politicians from spreading obvious, existing viral hoaxes, which are the primary target of Facebook’s fact-checking program.
Facebook is balancing these rules with its efforts to remove misinformation, “inauthentic” political content from outside governments, and dehumanizing speech that encourages violence against minority groups like Myanmar’s Rohingya. Last year, it removed accounts associated with the Myanmar military, which the UN has accused of committing genocide.
Twitter has a similar policy of treating high-profile political tweets as inherently newsworthy. But it only applies to verified accounts with more than 100,000 followers that represent a government official, elected politician, or political candidate.
Clegg’s speech also denounces the prospect of breaking up Facebook, which is currently facing state, federal, and congressional antitrust investigations in the US. “Pulling apart globally successful American businesses won’t actually do anything to solve the big issues we are all grappling with — privacy, the use of data, harmful content and the integrity of our elections,” he said.
This is probably correct to an extent, but pulling Facebook apart could stop a single company from having so much control over speech that it had to build its own “Supreme Court” (an independent content oversight board) to handle that power responsibly. If Facebook didn’t run a huge chunk of the internet, we wouldn’t care nearly as much about how it handled politicians’ posts.