
NYU researchers find no evidence of anti-conservative bias on social media


Researchers say claims of social media censoring conservatives are ‘disinformation’


Donald Trump’s Twitter account suspended. Photo by Jakub Porzycki/NurPhoto via Getty Images

A new report finds that claims of anti-conservative bias on social media platforms are not only untrue but serve as a form of disinformation. The report, from NYU’s Stern Center for Business and Human Rights, says there is no empirical evidence that social media companies systematically suppress conservatives, and that even anecdotal reports of bias tend to fall apart under close scrutiny. In an effort to appear unbiased, the platforms actually bend over backward to appease conservative critics.

“The contention that social media as an industry censors conservatives is now, as we speak, becoming part of an even broader disinformation campaign from the right, that conservatives are being silenced all across American society,” the report’s lead researcher Paul Barrett said in an interview with The Verge. “This is the obvious post-Trump theme, we’re seeing it on Fox News, hearing it from Trump lieutenants, and I think it will continue indefinitely. Rather than any of this going away with Trump leaving Washington, it’s only getting more intense.”

The researchers analyzed data from analytics platforms CrowdTangle and NewsWhip and existing reports like the 2020 study from Politico and the Institute for Strategic Dialogue, all of which showed that conservative accounts actually dominated social media. And they drilled down into anecdotes about bias and repeatedly found there was no concrete evidence to support such claims.

Looking at how claims of anti-conservative bias developed over time, Barrett says, it’s not hard to see how the “anti-conservative” rhetoric became a political instrument. “It’s a tool used by everyone from Trump to Jim Jordan to Sean Hannity, but there is no evidence to back it up,” he said.

The report notes that the many lawsuits against social media platforms have “failed to present substantial evidence of ideological favoritism — and they have all been dismissed.”


This is not to suggest that Twitter, Facebook, YouTube, and others have not made mistakes, Barrett added; they have. “They tend to react to crises and adjust their policies in the breach, and that’s led to a herky-jerky cadence of how they apply their policies,” he said.

Twitter in particular has historically been more hands-off with moderation, proud of its image as a protector of free speech. But all that changed in 2020, Barrett said, in response to the pandemic and in anticipation of a bitter election campaign. “Twitter shifted its policies and began much more vigorous policing of content around the pandemic and voting in general,” he said. Among social media companies, “Twitter was taking the lead and setting the example.”

And in the aftermath of the January 6th riots at the Capitol, Barrett says, Twitter and other platforms were well within their policies against inciting violence when they banned former President Trump.

The report has several recommendations for social media platforms going forward. First: better disclosure around content moderation decisions, so the public has a fuller understanding of why certain content or users might be removed. The report’s authors also want platforms to allow users to customize and control their social media feeds.

Hiring more human moderators is another key recommendation, and Barrett acknowledges that the job of content moderator is highly stressful. But having more moderators — hired as employees, not contractors — would allow Facebook and other platforms to spread out moderation of the most challenging content among more people.

The report also recommends that Congress and the White House work with tech companies to dial back some of the hostility between Washington and Silicon Valley and pursue responsible regulation. Barrett doesn’t recommend repealing Section 230, however. Instead, he’d like to see it amended.

“Make it conditional: If companies want to enjoy the benefits of 230, they need to adopt responsible content moderation policies. Let people see how their algorithms work, and why certain people see material others don’t,” he said. “No one expects them to show every last line of code, but people should be able to understand what goes into the decisions being made about what they’re seeing.”