
The most important part of Facebook's disinformation strategy is what it leaves out


How much should a social network do to fight espionage?


After months of hand-wringing over Facebook’s role in the 2016 election, the company has finally laid out its response. Over 13 pages, a report released today details a comprehensive plan for dealing with what Facebook calls “information operations” — any sustained attempt by an organized force to distort public discourse. The report frames those campaigns as one more undesirable activity on Facebook, akin to spam, malware, or harassment.

It’s a canny and interesting document, short enough that it’s worth reading in full. But while the report lays out a number of new measures, the most striking thing is what it leaves out: a strategy for combating the creation of false and malicious material at its source, and a sense of Facebook’s responsibility when genuine users share those links. As described in the report, almost all the important elements of disinformation campaigns are outside of Facebook’s control. When the campaigns do venture onto Facebook, the associated posts tend to behave the same way any piece of news or content would. And while similar campaigns continue across Europe, today’s report suggests there’s no easy fix for the problem — or at least not from Facebook.

Facebook sees the important elements of disinformation campaigns as outside of its control

The report breaks down information operations into three parts: collecting data, creating content, and amplifying that content. In the past year, collection has generally involved stealing an email archive — most notably those belonging to John Podesta and various members of the DNC. It’s a serious problem, but it usually happens outside of Facebook, so the service’s stepped-up focus on account security isn’t likely to change much. The same is true of creating content, which happened through third-party sites like DCLeaks and WikiLeaks or, even trickier, through direct leaks to journalists. Facebook isn’t equipped to do much about either of those, so today’s report understandably ignores them.

That leaves us with amplification, the area where Facebook plays the largest role. Facebook is perhaps the best amplification machine the world has ever known, bringing a single person’s message to potentially billions of users — so it’s natural that political astroturfers would use it to get the message out. But according to Facebook’s report, the issue isn’t as simple as spotting automated accounts. Most of the information-op accounts are still created by human beings; they’re just human impostors creating multiple accounts for political purposes. Facebook is making real changes to catch those accounts — expanding the grounds on which an account can be banned and training its algorithms to catch astroturfers — but it’s still unclear how much of an impact those measures will have.

It’s hard to tell the difference between a paid agitator and a genuinely angry person

The report’s most prominent example of astroturfing is a message concerning Montenegro’s prime minister, pasted into multiple groups by the same politically motivated user at the same time. While that’s certainly spammy, it’s not all that different from copypastas that have been passed around the web since long before Facebook. After Trump’s early immigration orders, fake checkpoint rumors circulated through similar copypastas — misleading rumors, to be sure, but far from a misinformation campaign.

Beyond that, it’s often hard to tell the difference between a paid agitator and a genuinely angry person, and the report acknowledges that information ops will often intermingle with everyday citizens. “These groups may initially be populated by fake accounts,” one passage reads, “but can become self-sustaining as others become participants.” If Facebook can’t be sure, it runs the risk of shutting down a group or account just because it’s saying bad things about a powerful person.

That uncertainty shows through later in the report, when Facebook tackles the Guccifer case, in which a hacker distributed emails stolen from the Clinton campaign, an action that US agencies later attributed to Russian intelligence. Facebook breaks the case down into five steps: stealing the data, creating sites like DCLeaks to host it, creating fake personas on Facebook to amplify it, and creating another set of personas to amplify secondary news coverage. Once all that had been done, the final step followed, in which “organic proliferation of the messaging and data through authentic peer groups and networks was inevitable.” Crucially, the new measures only affect the third and fourth steps, and they’re the least important part of the Guccifer story. Once the stolen data was available on DCLeaks and WikiLeaks, it was covered by some of the largest news organizations in the country. Once it hit The Washington Post, a few extra Facebook shares were beside the point.

It’s even harder to draw the line in hybrid cases like Pizzagate, a genuine-but-false conspiracy theory spurred by the Podesta hack and carrying an undeniable political slant. The theory itself is false, but every indication is that the people sharing it are genuine. It was carried forward by that inevitable organic proliferation described in the report — something Facebook has tried to avoid policing. If a similar conspiracy arose today, it might be slowed down by Facebook’s other efforts to block false news reports, but the Information Operations rules would largely leave it alone.

In some ways, Facebook faces an impossible task. From the beginning, complaints over “fake news” have included everything from sloppy reporting to simple partisanship, and it would be impossible for any one system to address them all. Even confining the issue to explicit propaganda campaigns forces the company to draw a line between legitimate politics and covert espionage, a line that’s particularly hard to draw as accusations of Russian espionage become commonplace. Even the most heavy-handed measures — like blocking articles that draw on espionage-linked data dumps — wouldn’t prevent data from spreading outside of Facebook. It’s easy to understand why the company is wary of going too far.

But what we’re left with is the real prospect of foreign powers manipulating public discourse, and no clear way to fix it. As with false reporting, Facebook has laid out a plan for more aggressive action against fake accounts, but it’s running up against more serious limits. Even more than false information, disinformation campaigns happen largely outside of Facebook’s control. What should be a reassuring document ends up as an admission of defeat. This is what Facebook can do to fight the problem — and what it can’t do. The bigger message may be that if we want to protect public discourse, we’ll need more than algorithms.