
What we know, and what we don’t, about Facebook’s effort to reduce hoaxes

What is Facebook actually doing about fake news?


Nearly a week after the US presidential election, Facebook continues to be roundly criticized for its role in spreading misinformation. In the wake of a close contest, some have said Facebook’s lax attitude toward hoaxes swung the election to Donald Trump. The controversy has renewed calls for the company to take its fake news problem more seriously.

Today Gizmodo published a much-discussed story reporting that Facebook had built and abandoned a tool to reduce the spread of fake news stories over fears that the tool would disproportionately affect conservative news sites. Facebook acknowledges that it has been reconsidering its approach to combating hoaxes, but says the story’s central allegation is false. So what’s really going on?

Here’s what we know: Facebook did build a tool that identifies fake news stories, and launched it in 2015. At the time, the company mentioned some of the ways it detects fake news. “People often share these hoaxes and later decide to delete their original posts after they realize they have been tricked,” the company wrote. “These types of posts also tend to receive lots of comments from friends letting people know this is a hoax, and comments containing links to hoax-busting websites.”

Facebook built a tool for identifying fake news in 2015

With the update, Facebook introduced a feature to let users report stories as false. The tool is still active today. Unfortunately, it’s ineffective. A BuzzFeed investigation this year found that top right-wing Facebook news outlets published false or misleading stories 38 percent of the time, while top left-wing outlets did so 20 percent of the time.

The tool was arguably not designed to be effective. It came with many caveats: “We are not removing stories people report as false and we are not reviewing content and making a determination on its accuracy,” the company said. The only penalty for publishing a story arguing that Hillary Clinton is, in fact, a West African giraffe would be “reduced distribution.” A highly viral post with “reduced distribution” might still be shared hundreds of thousands of times, according to Facebook’s own guidelines. All of which has made CEO Mark Zuckerberg’s arguments that Facebook could not have influenced the election unsatisfying: 44 percent of Americans get their news from Facebook, and Zuckerberg previously promoted the company’s efforts to spur populist uprisings.

What makes Gizmodo’s story a whoa-if-true moment is its suggestion that Facebook might at last be coming to grips with its role as the nation’s de facto daily newspaper replacement. The company developed “a planned News Feed update that would have identified fake or hoax news stories,” author Michael F. Nuñez writes, but it “disproportionately impacted right-wing news sites by downgrading or removing that content from people’s feeds.”

Is Facebook reconsidering its role in news distribution?

The story does not describe how the update would have worked. Did it simply “identify” that a fake link had been shared? Or did it “identify” fake news stories visually, with a red border or “this is fake” stamp? And what did it do with the fake links that it found? To date, Facebook has been unwilling to do anything with fake news stories beyond reducing their distribution by some unspecified amount. Has the company started to reconsider?

For its part, Facebook says Gizmodo’s source was likely describing an actual update made to the News Feed this summer to reduce “clickbait.” (To Facebook, “clickbait” is a story that is true on some level, but presented in a misleading or unsatisfying way. You’ll never believe why some people are saying Hillary is a giraffe!) As Facebook tells it, it tested two approaches to reducing clickbait. One was the same approach it took with fake news a year ago — asking users to report it. The other was algorithmic — it relied on machine learning to filter out common clickbait keywords and tropes. The latter approach proved better at reducing clickbait in the News Feed, Facebook says, and so the report-based update, the one modeled on the fake news tool, was shelved.
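Facebook hasn’t said what that algorithm looks like under the hood. As a purely illustrative sketch of keyword-and-trope filtering — the trope list, penalty factor, and function names here are assumptions for illustration, not anything Facebook has described — the idea is to down-rank matching posts rather than remove them, echoing the “reduced distribution” penalty applied to reported hoaxes:

```python
# Hypothetical sketch, not Facebook's code: score headlines against a small
# list of common clickbait tropes and reduce a post's ranking accordingly.

CLICKBAIT_TROPES = [
    "you'll never believe",
    "what happened next",
    "this one trick",
    "doctors hate",
    "will blow your mind",
]

def clickbait_score(headline: str) -> int:
    """Count how many known clickbait tropes appear in a headline."""
    text = headline.lower()
    return sum(1 for phrase in CLICKBAIT_TROPES if phrase in text)

def adjusted_rank(base_rank: float, headline: str, penalty: float = 0.5) -> float:
    """Apply 'reduced distribution': shrink the ranking score for each
    matched trope instead of removing the post outright."""
    return base_rank * (penalty ** clickbait_score(headline))

if __name__ == "__main__":
    headline = "You'll never believe why some people are saying Hillary is a giraffe!"
    print(adjusted_rank(1.0, headline))  # 0.5 — down-ranked, but still eligible to spread
```

A real system would presumably learn its features from labeled data rather than a hand-picked phrase list, but the basic trade-off is the same one the article describes: demotion, not removal.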

Gizmodo says Facebook chose the algorithm over user reporting for partisan reasons. “We stand by our reporting, and the issue of whether the update also affected clickbait is not at the crux of our story,” Nuñez told me.

Facebook denies this. “As we said in the statement, we did not build and withhold any News Feed changes based on their potential impact on any one political party,” a spokesperson told me.

So what do we know for sure? A few things seem beyond dispute.

Any move to reduce fake stories on Facebook would disproportionately affect conservatives

One, according to my own conversations with people at the company, Facebook has indeed been holding high-level conversations this year about its role in distributing news. Those went into overdrive after allegations earlier this year that Facebook’s Trending Topics module was biased against conservative news — which led to the firing of human editors and the replacement of an occasionally useful product with an information-free jumble of popular keywords. (It also led to the further spread of fake news!)

Two, any move to reduce the spread of fake stories on Facebook would disproportionately affect conservative news sites, because conservative news sources publish more fake stories.

Three, when an earlier generation of media companies acted as gatekeepers against false and misleading stories, they created a market for alternative media. That led to the rise of conservative talk radio, Fox News, and (most recently) the alt-right. Facebook’s worst nightmare is that conservatives stop seeing it as a neutral platform, and create a fair-and-balanced social network of their own.

This is not an abstract threat: Andrew Torba, the Trump-supporting startup CEO who cried censorship when he was thrown out of Y Combinator last week, is currently building a conservative-friendly network named Gab.AI. So whatever the truth of Gizmodo’s story, it seems fair to say that Facebook has real incentives to keep the network humming with conservative-friendly content. Even if it is occasionally made up out of whole cloth.

Facebook’s foot-dragging is hurting it

Four, Facebook’s foot-dragging on fake news is hurting the company among the broad cross-section of users who rely on it to supply their daily reality. The longer that Facebook remains a right-leaning playground for hoaxes and conspiracy theories, the more opportunity there is for a company to disrupt Facebook from the left. Over the weekend, Zuckerberg shirked responsibility for sorting fact from fiction. “I am confident we can find ways for our community to tell us what content is most meaningful,” he wrote, “but I believe we must be extremely cautious about becoming arbiters of truth ourselves.”

It’s impossible to say with certainty why Facebook rejected a News Feed change that, according to at least one source, prevented the spread of fake news. But it does seem clear that any change could have an outsized, and consequential, effect on conservative users. Arbitrating truth on a platform that serves the entire world will doubtless be difficult, expensive, and frustrating work. But if Facebook intends to provide common ground for users of every political persuasion, it will also be essential.