
People on Twitter like passing on lies better than they like retweeting the truth


And bots aren’t necessarily to blame


Untruthful news is 70 percent more likely to be retweeted on Twitter than true news, according to new research — and bots may not be to blame.

In a paper published today in the journal Science, researchers analyzed the spread of all the stories verified (as either true or false) by six fact-checking organizations from 2006 to 2017. The analysis shows that false political news spreads more quickly than any other kind, like news about natural disasters or terrorism, and predictably, it spikes during events like the 2012 and 2016 US presidential elections. (The researchers deliberately use the term “false news” because “fake news” is too politicized, they write.) Though the Twitter accounts that spread untruthful stories were likely to have fewer followers and tweet less than those sharing real news, false news still spreads quickly because it is seen as novel, the study says.

It’s humans that are responsible.

First, the researchers went to six fact-checking organizations and pulled out all the news stories they had verified as true or false. (The six orgs were Snopes, PolitiFact, FactCheck, Truth or Fiction, Hoax Slayer, and Urban Legends.) Next, the researchers — who had access to the entire Twitter archive — looked for mentions of these stories on the social media site. Each time they found a mention, they would try to determine whether that mention was the original tweet, or if it was replying to or repeating a different tweet. That way, they could trace the origin of the story, and then track the ways that the information spread through Twitter. Ultimately, their dataset included about 126,000 stories tweeted by 3 million people more than 4.5 million times. 
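The tracing step described above can be pictured as follows. This is an illustrative sketch, not the researchers' actual code: the tweet IDs and parent links are hypothetical, and it assumes each tweet record carries a reference to the tweet it retweeted or replied to (or none, if it was an original post). Following those references back to a tweet with no parent identifies the origin of each copy of a story, and grouping tweets by origin yields the spread of each cascade.

```python
# Sketch of tracing story cascades: link each tweet back to the tweet it
# retweeted or replied to, until reaching an original (parentless) tweet.
from collections import defaultdict

# Hypothetical sample: (tweet_id, parent_id), with None marking an original tweet.
tweets = [
    ("t1", None),   # origin tweet of the story
    ("t2", "t1"),   # retweet of t1
    ("t3", "t1"),
    ("t4", "t3"),   # retweet of a retweet
    ("t5", None),   # an independent posting of the same story
    ("t6", "t5"),
]

def find_origin(tweet_id, parents):
    """Follow parent links until reaching a tweet with no parent."""
    while parents[tweet_id] is not None:
        tweet_id = parents[tweet_id]
    return tweet_id

parents = dict(tweets)
cascades = defaultdict(list)
for tid, _ in tweets:
    cascades[find_origin(tid, parents)].append(tid)

# Each cascade's size approximates how far that copy of the story spread.
sizes = {origin: len(members) for origin, members in cascades.items()}
print(sizes)  # {'t1': 4, 't5': 2}
```

In the study's terms, one story can seed several independent cascades (here, `t1` and `t5`), and it is the size and depth of these cascades that the researchers compared between true and false stories.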

Their analysis shows that true news rarely spread to more than 1,000 people, but the top 1 percent of false news could spread to as many as 100,000. This wasn’t because the accounts tweeting false news were particularly influential, but because we’re more likely to share news that seems interesting and new. “Novel information is thought to be more valuable than redundant information,” says study co-author Sinan Aral, a professor of management at the Massachusetts Institute of Technology. “People who spread novel information gain social status because they’re thought to be ‘in the know’ or to have inside information.”

To test this hypothesis, Aral’s team analyzed the emotional content of these stories and people’s responses to them. As expected, the false news stories were seen as more surprising and provoked more disgust. Finally, the scientists applied a bot detection algorithm and found that bots spread false news and true news at the same rate. “So bots could not explain this massive difference in the diffusion of true and false news we’re finding in our data,” says Aral. “It’s humans that are responsible.”

Of course, it can be very difficult to tell if a bot is really a bot, says Joan Donovan, a sociologist who studies media manipulation at the Data & Society Research Institute and who was not involved in the study. It takes a very high standard of proof to really know that there isn’t a human behind it. Still, she adds, the paper makes it clear that we need to be very serious about content moderation. “Even if bots aren’t the problem and it’s people and networks moving information, this paper gives us better leverage and insight into what we need to evaluate,” says Donovan.

Next, Aral and his team want to further understand the spread of false news and look for possible solutions. He suggests labeling news sources based on how factual they are (similar to the ambitions of the startup NewsGuard), or having companies like Twitter and Facebook look more closely at how they can build their platforms or algorithms to discourage the spread of false news. (Facebook is already trying to do this.)

It would also be worthwhile, Donovan says, to learn more about how false news spreads over time. “If you take any individual rumor on a platform, we know that there are windows of misinformation that happen within the first 24 hours,” she says. “So one of the things I would like to know is, over time, which parts of the rumors continue to persist, and which groups are likely to hold onto misinformation that becomes a conspiracy theory?”