
Facebook users rarely saw voting misinformation labeled ‘false,’ says study — especially if it came from Trump


A look at Facebook’s complex flagging system


Illustration by Alex Castro / The Verge

Facebook addressed false election information with understated warnings like “missing context” rather than more direct flags, research from The Markup suggests — and the company appeared hesitant to label false or misleading statements from now-banned former President Donald Trump.

The Markup has released new data from its Citizen Browser, which captures snapshots of what 2,200 volunteers see in their Facebook feeds. Its report covers December and January, two contentious months during which then-President Trump falsely claimed victory in the election. Facebook implemented special banners and labeling systems to inform users about election results and fact-check false posts, and the Citizen Browser Project offers a window into that complicated, finely graded process.

According to The Markup, only a minority of participants saw any flagged content: 330 people were served a total of 682 posts labeled “false, devoid of context, or related to an especially controversial issue, like the presidential election.” The vast majority of these labels simply directed people toward voting results and were added to “anything election-related.”

The vast majority of people didn’t see any election-related flags at all

Most of the remaining flags were applied to Trump posts, and some of those posts contained clearly false information, like a claim that it was “statistically impossible” for President Joe Biden to win the election. But The Markup notes that these posts were never labeled “false” or “misleading.” Instead, Facebook added generic warnings about election integrity. Overall, “false” labels appeared on a total of only 12 posts — including ones “linking Bill Gates to a world domination plot, or one that said ‘Biden did not win legally.’” Facebook was far more likely to use a flag for “missing context,” which appeared 38 times.

Facebook told The Markup that “we don’t comment on data that we can’t validate, but we are looking into the examples shared.” The Markup also notes that Facebook claims to reduce the spread of false claims, so users may simply have been prevented from seeing many posts flagged as “false.” However, it also describes “missing context” flags being applied to posts that clearly but indirectly endorsed false conspiracy theories — something that makes it “much less obvious that the post was untrue.”

Overall, the Citizen Browser Project offers only a limited picture of what’s going on inside the platform, although it’s still valuable for filling in gaps left by Facebook’s own analytics tool, CrowdTangle. But the research does suggest that, as The Markup writes, Facebook treated Trump posts with “kid gloves” until it outright suspended him following the January 6th attack on the US Capitol. That decision will be reviewed by the Facebook Oversight Board in the near future — and if it’s reversed, Facebook may end up facing the same moderation questions again.