As a privacy and public policy adviser, Dipayan Ghosh once worked to improve Facebook from the inside. From 2015 to 2017, Ghosh helped develop Facebook’s public positions on issues related to privacy, telecommunications, and ethical algorithms.
Ghosh, who previously served as a White House technology adviser under President Barack Obama, was troubled by the results of the 2016 election and the role Facebook played in influencing voters. He quit his job at Facebook early last year and became a fellow at New America, a think tank focused on foreign policy, technology, and the economy. Last week, together with Ben Scott, a senior adviser at New America, Ghosh published “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet.”
In it, Ghosh and Scott argue that Russian interference in the 2016 election was the tip of a very large iceberg — and that things are about to get worse. Here’s their thesis in one sentence: “Political disinformation succeeds because it follows the structural logic, benefits from the products, and perfects the strategies of the broader digital advertising market,” they write. Reducing the spread of misinformation on Facebook and other platforms will require the company — and regulators — to first acknowledge that fact.
On Monday, I spoke with the co-authors about Facebook, misinformation, and what to do about it. (This interview has been edited for length and clarity.)
Casey Newton: What was the origin of this project? What did you hope to learn?
Dipayan Ghosh: Long story short, over the course of 2016, I had personally seen a lot of things happen in the context of social media and politics that were just nuts. Just crazy. And I felt that something needed to be done about it. Along comes Ben with this initiative at New America, and it’s been a dream since then to join the initiative as a fellow and take this issue on.
What were conversations like at Facebook in the aftermath of the election? To what extent did your colleagues view these issues as serious?
I think the company is thinking about it. There’s no one person who can speak for the entire company. Not even Mark Zuckerberg would claim to. And of course, everyone has different opinions on the issues. What was good about [the revelations] was, in my view, that the company was having open discussions with the public, as well as internal deliberations that were very thoughtful around how to tackle these issues. It didn’t want to cut into its profit model, but it really wanted to figure out the right approach. It was an existential issue for the company. It knew that it needed to be thoughtful. So internally I would say that people were seriously thinking about all of these issues, and trying to figure out what was best in their own mind, and in their own work. But also I think the company at large was engaging really broadly in trying to listen to a lot of ideas.
Now, I don’t think those early efforts were enough. But that’s not to say they’re not thinking seriously about all these issues and working hard.
Newton: Did the company state explicitly that it didn’t want to cut into its profit model? Or how did you get that impression?
It’s not something that would be stated. But it makes sense — at the end of the day, it’s a for-profit company, and doesn’t want to cannibalize its own revenue streams. I think what suggested that to me was my understanding of how the company worked over time, which is that in large part it’s the advertising and product management teams that really define the product more than anyone else.
In your report, you write: “They use the same technologies to influence people — reaching a share of the national market with targeted messages in ways that were inconceivable in any prior media form. But if the market continues to align the interests of the attention economy with the purposes of political disinformation, we will struggle to overcome it.” How do we break that alignment?
Ben Scott: Part of the reason we did this research, and wrote that paper, is to show that there’s no easy way to do this. There’s no tweak to the algorithm, there’s no simple change in the public-facing product that’s going to address this. Because it’s deep in the business model and the operating logic of the platform.
We looked at a variety of different ways to work at reducing the harm. I don’t think you’re ever going to stop disinformation from flowing on the internet. But disinformation in the media system is not a new phenomenon. What we’re dealing with now is a distributed media system that has the ability to target [political messages] with great accuracy. That’s what’s different than, say, Cold War messages lobbed over the Iron Curtain. So how do you go about reducing the harm, recognizing that you’re not going to eliminate it altogether?
One is transparency. The more users know about who’s targeting them, how much money is being spent, and how they’re being targeted, the more likely they are to be skeptical of efforts to influence them. And it will have to be designed with the same level of user experience that makes the product convenient and enjoyable to use.
We also looked at, how do you differentiate promoted content designed to manipulate political viewpoints from promoted content designed to manipulate commercial behavior? The way we look at the problem is from the perspective of data collection. If we’re restrictive in the data that is collected about politically sensitive topics and elections in particular, and we’re constraining the ability of the platform to sell access to that data for the purpose of targeted advertising, we’ve made a significant dent in the problem. But so far, that manner of approaching the problem hasn’t really been in question.
One thing Facebook has said with respect to Russian influence on the election is that national security is the government’s job. Are there potential government interventions you see as helpful in reducing or thwarting disinformation campaigns?
Scott: There’s an important distinction to be made between illegal content, and legal content that produces harm. There’s a set of regulatory practices that are not new to the tech industry that have addressed illegal content. Policy debates around the regulation of spam, intellectual property on the internet, and child pornography — those were all cases in which the industry was skeptical that it could execute technologically against the legal requirements. And in the end, they did comply with those rules. I think in the case of illegal content that serves as a disinformation campaign, you’ll see regulation to address it. The more difficult question is, what do you do about legal content that is not a violation of the platform’s terms of service? That’s where transparency and data collection policies come into play. Absent the threat of regulatory intervention, the companies are unlikely to take action.
Your report makes many recommendations. What, in your mind, is the lowest-hanging fruit?
Ghosh: We talk about broad, sweeping changes to our regulatory regime, and I don’t think any of those is necessarily easy to do. But what I think is simpler to do in the nearer term is implementation of some kind of detection system within the industry, maybe even in coordination with government, to build algorithms that can help determine and detect the use of disinformation operations. I think that artificial intelligence can help here. As much as it is a potential threat to political democracy and the integrity of our election system, it can also be used to our advantage.
I think the regulatory changes that need to be addressed are longer-term solutions. But the clearest one, at least to me, is around privacy. Privacy seems to be an obvious one that has clearly been breached in the context of disinformation. And for reasons of political gridlock, we’ve not been able to move on this.
Scott: A third thing is, and a lot of people have talked about this, but I think it is a useful option: give me the option to look at the feeds in my social media accounts as they would appear without any algorithmic intervention. Just the raw data chronologically in my feed.
How would a chronological feed benefit democracy?
What it does is, it shows me what everyone in my social network is saying, and not what the platform thinks I want to see. I think some users would be both entertained and informed by toggling back and forth between those two options and seeing what the differences are.
Which of your recommendations do you think are least likely to be implemented?
Scott: The hardest is figuring out how to address the relationship between data privacy and market power. It is one thing to sell advertisers access to data profiles that help them to target messages at particular groups that are responsive to particular messages. It’s another thing when you have 80-plus percent market share and the scale of hundreds of millions of users in any given market. How do you deal with that kind of control over social information, and its application for both political and commercial purposes? We don’t have competition or antitrust policies that are built for that.
Even the Europeans, who are much more aggressive, have not put forward a theory about how to build policy to respond to the need.
Have you gotten any feedback from Facebook about your recommendations?
Scott: We have. They did not publicly or privately — at least with me — attempt to refute the main arguments in the paper.
Ghosh: What I’ve mostly heard or taken away is that they appreciate that we’re taking a serious and thoughtful approach.
One question hanging over all this is whether social networks are good or bad for democracy. Do you have a view on that?
Ghosh: My personal perspective is absolutely yes. They are a net positive. My feeling about the industry and about social media is that it’s a connector. It brings access to people — not just access to social media, but access to the internet, to people in all corners of the world. I think on the whole it is a positive. And these are fundamental flaws that just need to be addressed.
What’s next for you two?
Ghosh: Now that we’ve done this market analysis, how do we think about the next steps for these regulatory regimes? What needs to be done from a policy angle to make them work better for people, and better for the internet, as new media subsumes traditional media? We have a regulatory system that has been designed over decades and does not address the internet. I think our next step is to think more directly about that, and write about what steps can be taken tactically and politically to make that change happen.