Facebook’s misinformation problem goes deeper than you think

In a new report, researchers at Ranking Digital Rights lay out a prescription for fixing Facebook

In the face of the coronavirus outbreak, Facebook’s misinformation problem has taken on new urgency. On Monday, Facebook joined seven other platforms in announcing a hard line on virus-related misinformation, which they treated as a direct threat to public welfare.

But a report published this morning by Ranking Digital Rights makes the case that Facebook’s current moderation approach may be unable to meaningfully address the problem. According to the researchers, the problem is rooted in Facebook’s business model: data-targeted ads and algorithmically optimized content.

We talked with one of the co-authors, senior policy analyst Nathalie Maréchal, about what she sees as Facebook’s real problem — and what it would take to fix it.


In this report, you’re making the case that the most urgent problem with Facebook isn’t privacy, moderation, or even antitrust, but the basic technology of personalized targeting. Why is it so harmful?

Somehow we’ve ended up with an online media ecosystem that is designed not to educate the public or get accurate, timely, actionable information out there, but to enable advertisers — and not just commercial advertisers, but also political advertisers, propagandists, grifters like Alex Jones — to influence as many people in as frictionless a way as possible. The same ecosystem that is really optimized for influence operations is also what we use to distribute news, distribute public health information, connect with our loved ones, share memes, all sorts of different things. And the system works, to varying degrees, for all of those different purposes. But we can’t forget that what it’s really optimized for is targeted advertising.

What’s the case against targeting specifically?

The main problem is that ad targeting itself allows anyone with the motivation and the money to spend (which is anyone, really) to break the audience apart into finely tuned pieces and send different messages to each piece. And it’s possible to do that because so much data has been collected about each and every one of us in service of getting us to buy more cars, buy more consumer products, sign up for different services, and so on. Mostly, people are using that to sell products, but there’s no mechanism whatsoever to make sure that it’s not being used to target vulnerable people to spread lies about the census.

What our research has shown is that while companies have relatively well-defined content policies for advertising, their targeting policies are extremely vague. You can’t use ad targeting to harass or discriminate against people, but there isn’t any kind of explanation of what that means. And there’s no information at all about how it’s enforced.

At the same time, because all the money comes from targeted advertising, that incentivizes all kinds of other design choices for the platform, targeting your interests and optimizing to keep you online for longer and longer. It’s really a vicious cycle where the entire platform is designed to get you to watch more ads and to keep you there, so that they can track you and see what you’re doing on the platform and use that to further refine the targeting algorithms, and so on and so forth.

So it sounds like your basic goal is to have more transparency over how ads are targeted.

That is absolutely one part of it. Yes.

What’s the other part?

So another part that we talk about in the report is greater transparency and auditability for content recommendation engines: the algorithm that determines what the next video on YouTube is, or what shows up in your News Feed. It’s not a question of showing the exact code, because that would be meaningless to almost everyone. It’s explaining what the logic is, or what it’s optimized for, as a computer scientist would put it.

Is it optimized for quality? Is it optimized for scientific validity? We need to know what it is that the company is trying to do. And then there needs to be a mechanism whereby researchers, different kinds of experts, maybe even an expert government agency further down the line, can verify that the companies are telling the truth about these optimization systems.

You’re describing a pretty high-level change in how Facebook works as a platform — but how does that translate to users seeing less misinformation?

Viral content in general shares certain characteristics that are mathematically determined by the platforms. The algorithms look for whether this content is similar to other content that has gone viral before, among other things — and if the answer is yes, then it will get boosted on the theory that this content will get people engaged. Maybe because it’s scary, maybe it will make people mad, maybe it’s controversial. But it gets boosted in a way that content that is perhaps accurate but not particularly exciting or controversial does not.

So these things have to go hand in hand. The boosting of organic content has the same driving logic behind it as the ad targeting algorithms. One of them makes money by actually having the advertisers pull out their credit cards, and the other makes money because it’s optimized for keeping people online longer.

So you’re saying that if there’s less algorithmic boosting, there will be less misinformation?

I would fine-tune that a little bit and say that if there is less algorithmic boosting that is optimized for the company’s corporate profit margins and bottom line, then yes, misinformation will be less widely distributed. People will still come up with crazy things to put on the internet. But there is a big difference between something that only gets seen by five people and something that gets seen by 50,000 people.

I think the companies recognize that. Over the past couple of years, we’ve seen them downrank content that doesn’t quite violate their community standards but comes right up to the line. And that’s a good thing. But they’re keeping the system as it is and then trying to tweak it at the very edges. It’s very similar to what content moderation does. It’s kind of a “boost first, moderate later” logic, where you boost all the content according to the algorithm, and then the stuff that’s beyond the pale gets moderated away. But it gets moderated away very imperfectly, as we know.

These don’t seem like changes that Facebook will make on its own. So what would it take politically to bring this about? Are we talking about a new law or a new regulator?

We’ve been asking the platforms to be transparent about these kinds of things for more than five years. And they’ve been making progress in disclosing a bit more every year. But there’s a lot more detail that civil society groups would like to see. Our position is that if companies won’t do this voluntarily, then it’s time for the US government, as the government with jurisdiction over the most powerful platforms, to step in and mandate this kind of transparency as a first step toward accountability. Right now, we just don’t know enough about how the different algorithmic systems work to confidently regulate the systems themselves. Once we have this transparency, then we can consider smart, targeted legislation, but we’re not there yet. We don’t... we just don’t know enough.

In the short term, the biggest change Facebook is making is the new oversight board, which will operate independently and supposedly tackle some of the hard decisions that the company has had trouble with. Are you optimistic that the board will address some of this?

I am not, because the oversight board is specifically only focused on user content. Advertising is not within its remit. You know, a few people like Peter Stern have said that might come later down the road. Sure, maybe. But that doesn’t do anything to address the “boost first, moderate later” approach. And it’s only going to consider cases where content was taken down and somebody wants to have it reinstated. That’s certainly a real concern, and I don’t mean to diminish it in the least, but it’s not going to do anything for misinformation or even purposeful disinformation that Facebook isn’t already catching.

Correction: A previous version of this post stated that the report was the work of New America’s Open Technology Institute. While the report was published on the Open Technology Institute website, it is the sole work of Ranking Digital Rights. The Verge regrets the error.