Nick Clegg doesn’t think Facebook is polarizing

Facebook’s VP of global affairs on the platform’s new changes, with Casey Newton

Photo Illustration by Grayson Blackmon / The Verge

There is no shortage of criticisms that get leveled at Facebook: it’s spreading misinformation and hate speech, it’s too polarizing, it’s responsible for fraying the very fabric of society. The list goes on.

This morning, Facebook’s VP of global affairs, Nick Clegg, published a lengthy Medium post addressing some of these criticisms and unveiled some changes the company is making to give users more control over their experience. Specifically, the company is going to allow Facebook users to customize their feeds and how the algorithm presents content from other Facebook users to them. “People should be able to better understand how the ranking algorithms work and why they make particular decisions, and they should have more control over the content that is shown to them,” Clegg writes. “You should be able to talk back to the algorithm and consciously adjust or ignore the predictions it makes—to alter your personal algorithm in the cold light of day, through breathing spaces built into the design of the platform.”

There’s a lot to discuss there. And to help us unpack the post, Clegg sat down with Platformer editor and Verge contributing editor Casey Newton yesterday for a special episode of Decoder.

In particular, Clegg does not think Facebook is designed to reward provocative content, which is a new rebuttal to the company’s critics (and likely a surprise to anyone who’s paid attention to their Facebook feeds). “The reality is, it’s not in Facebook’s interest – financially or reputationally – to continually turn up the temperature and push users towards ever more extreme content,” Clegg writes in his post. “Bear in mind, the vast majority of Facebook’s revenue is from advertising. Advertisers don’t want their brands and products displayed next to extreme or hateful content – a point that many made explicitly last summer during a high-profile boycott by a number of household-name brands.”

Fundamentally, Clegg’s argument is that the Facebook backlash isn’t rooted in fact or science, and that if it gets carried away, we’ll never get to the better version of the internet that a lot of us want. In effect, he’s trying to reset the debate on Facebook’s terms.

We’ll leave it to you to decide how successful he is.

Okay, Casey Newton with Nick Clegg, VP of global affairs at Facebook. Here we go.

This transcript has been lightly edited for clarity.

Casey Newton: Welcome to Decoder.

Nick Clegg: It’s great to be here. Great to be with you.

So you’ve just published a 5,000-word essay that pushes back on a lot of recent criticism of Facebook and makes some news about what the company is doing in response. Tell me about the origin of this piece.

Well, I’m not a technologist or an engineer by background. I’m a sort of refugee from politics, from 20 years in politics, and yet I’ve been at Facebook now for a couple of years. And so as a relative newcomer, I’ve been observing the debate — particularly, but not only in the US — about the role of social media in society, in some of the most complex issues we have in politics, in culture, and obviously the social dynamics going on right now.

And this was certainly not designed to simply act as a receptacle for all the counter-arguments. What I’m trying to do — and some people may believe I’ve succeeded, others no doubt will say that I haven’t — but I’ve tried to engage sincerely and as thoughtfully as I can with many of the criticisms. Precisely because I sometimes worry that the dialogue between Silicon Valley and the critics of Silicon Valley is becoming a bit of a shouting match where people are just yelling alternative views of the same facts and not really listening to each other. It’s a genuine attempt to try and grapple with them.

And the more I thought about this, the more I thought that one of the things underlying a lot of the legitimate concern about social media in particular is this: social media’s early promise was all about empowerment. Empowering individuals to have a voice they didn’t have before, empowering communities to mobilize in a way that they couldn’t before, empowering people to connect with families and friends in a way they hadn’t before, and in Facebook’s case, doing all of that on a huge scale and, because it’s paid for by ads, doing so for free for everybody. So I thought one of the really key issues to look into was: who really is in charge? Is the user in charge, or is the system in charge?

And there’s a lot of legitimate concern these days that somehow people’s choices are not sovereign. They’re not in the driving seat. That somehow in the boiler room, the algorithmic boiler room, all sorts of choices are being made, and some of them might be nefarious choices. They’re not transparent enough. And so that’s why I really homed in, in this piece, on how we can lift the veil a bit, open the bonnet, look into the boiler room, be more transparent about how the systems work, and crucially, push forward with a renewed attempt at giving users meaningful controls. Where, in effect, they can in some instances override the algorithm, override the ranking systems that are made available to them, and make their own choices.

And this gets into sort of what is the news that you are making here. I think to your point, there has been a huge amount of concern about what algorithms are doing in the background.

Most of us, we’re not math majors or computer science majors. And so there is some sort of fear and uncertainty about what’s going on in the background. One of the things that Facebook is now doing is giving us some new ways to change up what we see in the News Feed. So what are some of these new controls?

So some of the controls are old. We’ve had them for a while, but we’re just going to make them a lot more prominent. So for instance, you could always switch to a chronological feed. But candidly, it wasn’t easy for people to find. So we’re now going to have a feed filter bar. When you scroll to the top of your feed, it’ll be there. It’ll always be there, and you can toggle between the feed as it currently exists, a chronologically ordered feed, or, crucially — and this is new — a feed of your own favorites: favorite groups, friends, posts, and so on. And you’ll be able to curate that, if you like, for yourself and toggle between those three — the feed as it is, the chronological feed, and your new favorites feed — in a much, much more effortless way.

It’ll be much more visible. It’ll be visible there when you scroll to the top of your feed. There are other new controls as well, which I’m announcing this week. You’ll be able to curate with much greater granularity than before who can comment on your posts. And that is something which wasn’t available before. And we’re also going to extend something which has existed for ads, for instance, and for connected content. Namely, “Why am I seeing this?” So you can go to the three dots and you can see, “Why am I seeing this ad?” We’re now going to extend that to suggested content. So when something’s suggested to you, that cooking video, you can go on the three dots, and you can see why you’re seeing that.

So I think, collectively, it’s a start. I’m not going to pretend that those changes in and of themselves will lift all the questions that people have about how social media operates and how they interact with Facebook. But I do feel that they are significant steps in a better direction, putting users more in charge, being more open and transparent about things, and we will follow up with a number of additional steps, greater transparency, greater controls in the months to come.

Is that also a suggestion that maybe this is the beginning of this feed filter bar that you’re introducing, that that might have more filters that come to it over time? Is the idea that users will have more and more control over how the stuff they see is ranked?

Yeah. Look, in an ideal world, you just want to push ever more forcefully in the direction where people can personalize their feeds. And if people want to see more or see less of particular forms of content, from particular pages or groups, there is also, conceptually at least, the possibility of exploring whether people can or can’t, if you like, turn the dial up or down on particular classes of content. That’s exactly the kind of work that we want to do. Now, exactly how granular, exactly which dials apply to which kinds of content, all of that still needs to be filled in. But that is very much the direction we’re going in.

So the conventional wisdom about how the feed works now, I think for a lot of folks, and certainly of the folks who are most critical of Facebook, is that it rewards the most polarizing and outrageous content. And this is something that you really take on in this piece and push back against. 

I suspect if there’s one sentence in your piece that most people will take issue with, it’s when you write, “Facebook’s systems are not designed to reward provocative content.” At the same time, when we look at lists of pages that get the most engagement, it does tend to be pages that seem to be pushing really polarizing content. So how do you reconcile this at Facebook?

Well, firstly, I of course accept that we need to provide more and more data and evidence about what specific content is actually popular on News Feed. And although Facebook’s critics often talk about sensational content dominating News Feed, we want to show, as I think we can, that many of the most popular posts on News Feed are lighthearted. They’re feel-good stories.

We want to show people that the overwhelming majority of the posts people see on News Feed are about pets, babies, vacations, and similar. Not incendiary topics. In fact, I think on Monday, one of the most popular posts in the US was of a mother bear with three or four baby cubs crossing a road — I saw it myself. It’s lovely. I strongly recommend that you look at it. And I think we can, and will, do more to substantiate that.

But beyond that, I do think, and I do try to grapple with this as thoroughly as is possible in a 5,000-word piece: Firstly, the signals that are used in the ranking process are far more complex, far more sophisticated, and have far more checks and balances in them than are implied by this cardboard cutout caricature that somehow we’re just spoon-feeding people incendiary, sensational stuff.

And I’m happy to go into the details if you like, but thousands of signals are used, literally from the device that you use to the groups that you’re members of and so on. We use survey evidence. We’re using more and more survey evidence. We’ll be doing more of that in the future as well to ask people what they find most meaningful. There’s been a big shift in recent years anyway to reward content that is more meaningful, your connections with your families and friends, rather than stuff that is just crudely engaging — pages from politicians and personalities and celebrities and sports pages and so on.

So that shift has already been underway. But in terms of incentives, this is the bit that maybe we have not been articulate enough about. Firstly, the people who pay for our lunch don’t want their content next to incendiary, unpleasant material. And if you needed any further proof of that, this last summer a number of major advertisers boycotted Facebook because they felt we weren’t doing enough on hate speech. We’re getting much better at reducing the prevalence of hate speech. The prevalence of hate speech is now down to, what, 0.07, 0.08 percent of content on Facebook. So for every 10,000 pieces of content you see, seven or eight might be bad. I wish it was down to zero. I don’t think we’ll ever get it down to zero. But we have a massive incentive to keep driving it down.

But also, if you think about it, if you’re building a product which you want to survive for the long term, we want people in 10 years, in 15 years, in 20 years to still be using these products. There’s really no incentive for the company to give people the kind of sugar rush of artificially polarizing content, which might keep them on board for 10 or 20 minutes extra. Now, we want to solve for 10 or 20 years, not for 10 or 20 extra minutes. And so I don’t think our incentives are pointed in the direction that many people assume.

That all being said, it is of course true... any sub-editor [copy editor] of a newspaper will tell you, it’s why tabloids have used striking imagery and cage-rattling language on their front pages since time immemorial. Of course there are emotions of fear, of anger, of jealousy, of rage, which provoke emotional responses. They’ve done so in all media, at all times. And so of course emotive content provokes an emotive reaction amongst people. We can’t reprogram human nature, and we don’t want to deny that, which is why our CrowdTangle tool actually elaborates on that and shows how things have been engaged with.

But as you know, there is a world of difference between that which is most engaged with — in other words, where comments and shares are most common — and the content that most people actually see. And that’s quite, quite different. If you look at what most human beings see, at eyeballs rather than comments and shares, you get a quite different picture.

And the final point I would make is, I read, almost on a daily basis, words like “awash” and “drowning in,” all these euphemisms. Let’s just keep this in perspective. If you read a lot of political commentary, you’d think that the only thing people go to Facebook for is politics, not babies, barbecues, and bar mitzvahs. In fact, politics is a minority of the content; it’s about 6 percent of the total content on Facebook. So I guess my plea is: yes, we at Facebook need to be more open about all these different subtleties, but equally, I think people who comment on it should keep some sense of perspective about what normal human beings in the normal world actually do when they go on Facebook.

So there’s a tension in that that I want to ask about. In your essay and in this discussion, you’ve talked about your belief, Facebook’s belief, that people should have a lot of agency over what they see. They should get to choose who they want to hear from, which articles they want to read, which friends they want to hear from. 

At the same time, Facebook has taken steps to reduce the default amount of political content in the feed, to stop recommending political and civic groups, and really sort of exercising what some might call editorial judgment over at least what the defaults are. So are those two ideas at odds? Is that just a tension that you have to manage?

I think it is a tension. And as I say in the piece, waxing lyrical about promoting individual agency, that’s the easy bit. That’s motherhood and apple pie. I actually think identifying harmful content and keeping it off the internet is challenging too, but we’re getting better at it. It’s doable.

The bit that is really, really tough, because it’s just the subject of so many differences of opinion, is what is the collective good? What constitutes the collective good? And who should determine that? And how should it be reflected on social media, which in many ways has become the public square for a lot of these things? And of course, that’s particularly acute, or has been in recent years in the US, where there just isn’t a ready-made, oven-baked consensus on what constitutes good content.

What for one school of thought is bad and unacceptable content is, for another school of thought, the right to free expression. And we’re caught in the middle of that. We have heavy responsibilities to try and draw the lines in the right place. We do that as deliberately and deliberatively as we possibly can with academics and experts. We publish all of that quite openly. We enshrine that in our community standards, but at the end of the day, I do think — it’s something which I know Mark Zuckerberg feels very strongly himself — it’s not great to have private companies basically adjudicating on what are societal, and in the end, quintessentially political judgments. Basically, you’re making judgments about where the collective good lies, and whilst it’s difficult, I do think we need to move gradually. And when I say “we,” I mean both the private sector, and the politicians and the legislators who are democratically accountable to the people.

I do think we all collectively need to move beyond admiring this problem or shouting at this problem, and toward promulgating rules which enjoy wider consensus. Because in the end, I don’t think anyone — and why should they — is ever going to accept the rules as constituted by a private company itself.

So one of the ways that Facebook would talk about this three or four years ago was that there was a range of subjects about which it would say “Well, we don’t want to be arbiters of truth. That is not for Facebook to decide,” for some of the reasons that you just mentioned.

But in recent years, the company has introduced what it calls information centers. It’s offered really high-quality information about elections, COVID-19, I think climate change is the most recent one, really using its editorial discretion as a private company to say we think this information is really good and you should see it. So, I’m curious, what explains that change? Is the company getting more comfortable taking what look to me like editorial stances?

The only distinction I’d make, Casey, is it’s not so much that Facebook is using its own editorial judgment. What we are recognizing is that Facebook just has exceptional reach. It is a means by which you can reach an exceptionally large number of people in a very, very direct way. And so we want to make that “means,” if you like, available to those who do legitimately promulgate, and if you like, editorialize, authoritative information. So whether it’s the CDC or the WHO on COVID, whether it is the UN-backed panel of international scientists on climate science, whether it is the public authorities running our elections.

So it’s not that we are seeking to supplant their role as the source of editorially authoritative information. What we’re making ourselves available for is to serve as the means by which that information can reach a lot of people in a very direct way.

Yeah, I guess what has clearly happened is that the pandemic is the most dramatic example, but I think the US elections were another one, just given quite how polarized the politics leading up to last November were. I just think the company is recognizing that there are these societal issues, which are all-encompassing, and where there are institutions and organizations and experts and authorities who have pretty unimpeachable credibility.

And we would like to connect the two. We’d like to connect people. People can choose to listen to them. As you know, I think over 2 billion people accessed the COVID information hub, and I think over 600 million people double-clicked on it to find more information on COVID. And the voter-information hub appears to have helped over 4 1/2 million Americans register to vote who otherwise wouldn’t have.

So I would push back on the word you’ve used, “editorializing” — we’re not trying to replace the CDC and the WHO. We’re not trying to replace the climate-change scientists, who know their business in a way that Facebook does not.

Right. The thing that I appreciate about these moves is, in all of the discussion around misinformation, I think there is an idea that if social networks would simply remove all of the bad posts, we’d be fine. But I think in reality, you have to show people good posts too. So to me, like those kinds of information centers are a step in that direction.

Yeah. And look, it’s two sides of the same coin. Of course you need to bear down on misinformation, but you’re never going to create a healthy information ecosystem on really difficult issues like public health and the pandemic, simply by playing whack-a-mole on misinformation.

I would argue actually in many respects, what you proactively do to empower people with the right information is as important, if not more important, than that which you remove or demote. And candidly, on misinformation, the pandemic is a classic example. Right now we have governments saying that they do or don’t like a particular vaccine. I just read overnight that Canada has paused the use of the AstraZeneca vaccine for people under the age of 55. And so you go, “wow.” And then people pass comment on the Chinese vaccine and the Sputnik vaccine.

And then people want to of course share on social media their personal experiences. “My arm is sore.” “I got a headache” or whatever. We cannot, nor should we, cleanse the internet of legitimate debate. And in fact, people expressing their opinions about the vaccines is a really important iterative process by which people become comfortable with taking the vaccine themselves. 

You don’t tackle vaccine hesitancy by simply trying to eliminate all debate. You’re quite right that these things go hand in glove. It’s both the information that you don’t allow or you demote, but it’s also, crucially, the information that you promote in a more proactive way so that people have access to authoritative information in hopefully the most credible form that it can be delivered to them.

So let’s turn to another really important subject that you write about in this piece, which is polarization. I think a lot of folks I talk to take for granted the idea that social networks accelerate what researchers call negative affective polarization, which is just basically the degree to which one group dislikes another.

But as you point out in the piece, and as I’ve done some writing on myself, the research here, while limited, is mixed. I do think it strongly suggests that polarization has many causes, and that some of those predate the existence of social networks.

At the same time, I think all of us have had the experience of getting on Facebook and finding ourselves in a fight over politics or observing one among friends and family. And we read the comments and everyone digs in their heels and it ends when they all unfriend each other or block each other. And so what I want to know is do you feel that those moments just aren’t collectively as powerful as they might feel to us, individually? Or is there something else going on that explains Facebook’s case that it is not a polarizing force?

Because as you quite rightly said, Casey, this is now almost a given. I just hear people literally make this throwaway remark: “Well, of course social media is the principal reason for polarization.” It’s just become this settled, sedimented layer in the narrative around social media. So believe you me, I’m trying to tread really carefully here, because when you interrupt people’s narratives, replete with gusto, folk don’t like it.

And that’s why I’ve very deliberately in the piece not cited anything that Facebook itself has generated. Of course, we do research. We commission research. This is all third-party research. And I choose my words very carefully, I say that the results of that research, that independent academic research, is mixed. It really is very mixed.

And to answer your question, I think the reason perhaps why there is this dissonance now between the academic and independent research — which really doesn’t suggest that social media is the primary driver of polarization after all — and the assumptions, I think there’s a number of reasons. It’s partly one of geography. I mean, candidly, a lot of the debate is generated by people using social media amongst the coastal policy and media elites in the US.

But let’s remember, nine out of 10 Facebook users are outside the US. And they have a completely different experience. They live in a completely different world. And the Pew study in 2019, which is really worth looking at for those who are interested, looked at people using social media in a number of countries, not least a number of countries in the developing world.

There they found overwhelming evidence that, for millions, perhaps billions, of people in those countries — people who were not living through this peculiar political time in the US — social media was actually being used to experience people of different communities, different countries, different religions, different viewpoints, on a really significant scale.

So I think there’s geography. I think there’s time. We just need to look at the evidence, of which there is a considerable amount; Stanford research published last year looked at exactly that. They looked at nine countries over 40 years and found that in many of those countries, polarization preceded the advent of social media. And in many of the countries, polarization was flat or actually declined even as the use of social media increased. So I think there’s an issue of geography. There’s an issue of time, where we’re kind of, candidly, losing a little bit of our sense of perspective.

And then I think there is just an issue of perhaps looking at social media in a way that is divorced from the other parts of the media ecosystem. To my mind, at least, the study published last year by the Berkman Klein Center at Harvard University speaks to this. They looked specifically at election-related disinformation to do with mail-in ballots, because there was all this stuff circulating about mail-in ballots where a lot of folk were trying to tarnish the integrity of using mail-in ballots.

And they showed pretty comprehensively that that was primarily driven by elite and mass media, not least cable news, and that social media only played a secondary role. So I think we need perspective on geography, on time, and on outlet. And I think when you do that … I don’t want to swing the other way. I don’t want to somehow pretend that social media does not play a part in all of this. Of course it does, but I do hope I can make a contribution and say, “Look, if we step back a bit.” We’re starting, I think, to reduce almost simplistically some quite complex forces that are driving cultural, socioeconomic, and political polarization in our society to just one form of communication.

Yeah. I mean, I think that that’s fair. I’ll also tell you my fear though, which is that the best data about this subject, or a lot of it, is at Facebook. And I think there’s a good question about whether Facebook even really has the incentive to dig too deeply into this. 

We know that when it has shared data with researchers in the past, it’s caused privacy issues. Cambridge Analytica essentially began that way, and yet this feels really, really important to me. So I’m just wondering, what are the internal discussions about research? Does the company feel like it maybe owes us more on that front, and is that something it’s prepared to take on?

Yes, it does. I personally feel really strongly about this. I mean, look, of course, Cambridge Analytica, it’s often forgotten, was started by an academic legitimately accessing data and then illegitimately flogging that data on to Cambridge Analytica. And so of course that rocked this company right down to its foundations. And so of course, that has led to a slightly rocky path in terms of creating a channel by which Facebook can provide researchers with data. 

But I strongly agree with you, Casey. These are issues which are not only important to Facebook. They’re societally important. I just don’t think we’re going to make progress unless we have more data, more research independently vetted so that we can have a kind of mature and evidence-based debate. Look, I do think we’re getting a lot better. The time I’ve been here, I really believe we’re starting to shift the dial. Last year, we provided funding to, I think, over 40 academic institutions around the world looking at misinformation and polarization.

We’ve helped launch this very significant research project into the use of social media in the run-up to the US elections last year. Hopefully, those researchers will start providing the fruits of their research during, I think, the summer of this year. We’ve made available to them unprecedented amounts of data. There are always going to be pinch points where we feel that researchers are in effect scraping data, where we have to take action. 

Candidly, we are legally and duty-bound, not least under our FTC order, to do so. I hope we can handle those instances in a kind of grown-up way, whilst at the same time continuing to provide data in the way that you describe. I would really hope, Casey, that if you and I were to have this conversation in a year or so, you and I would be able to point to data which emanates from Facebook but has been freely and independently analyzed by academics in a way that has not been the case in recent years.

Let’s do that. I’m going to mark my calendar so that we can do that. Another point in your essay that I think is worth talking about is, you acknowledge something that I also think gets lost in some of the discussions that we have on these subjects, which is that the internet is bigger than four companies. 

And in the context of writing about bad content elsewhere, you write, “Consider ... the presence of bad and polarizing content on private messaging apps — iMessage, Signal, Telegram, WhatsApp — used by billions of people around the world.” Facebook, of course, owns WhatsApp, but not those others. Where do you think that sort of private messaging and maybe encrypted messaging fits into some of these conversations that we’re having?

Well, I mean the point I was making was that for those who believe — and of course I’m caricaturing now, just for brevity’s sake — but for those who will say, “It’s all the algorithm’s fault. It’s all the system’s fault, and us human beings are just like puppets on a string. We’re being manipulated,” I just thought it was worth, in passing, pointing to the fact that actually, billions of people use messaging apps as their primary form of communication, which is algorithm-free. 

And yet, it’s still a route, a conduit by which unpleasant, polarizing, hateful content is spread. We’ve seen this on our own services. In very general terms, you’re seeing this big shift from people congregating, communicating, and connecting with each other in open spaces, like Instagram and Facebook, to doing so increasingly in more intimate settings, like messaging apps.

I think the vast majority of WhatsApp messages — well over 90 percent, though I need to check this — are still one-to-one messages. And Mark Zuckerberg has talked in the past about comparing them to the public square on the one hand and the kind of living room on the other. I do think people are increasingly expecting, just as table stakes, as a default entitlement, to be able to communicate with others in a private and secure way through messaging.

And I think that that does then pose issues for regulators and policymakers. Yes, partly, perhaps, about the content in terms of standards of speech, hateful speech and so on. But actually, the harder-edged side of that is the debate around law enforcement, and the impact that encryption has on traditional forms of content-driven law enforcement.

And that’s something that companies like Facebook work tirelessly on with law enforcement, to demonstrate that there are ways in which we can still enhance safety even if we don’t — and nobody else does — have access to message content. You can see this debate in India. You can see it in the UK. The FBI director recently talked about this publicly.

I think that is a running theme, and we need to play a responsible role in striking the right balance. I mean, it’s an age-old debate, isn’t it? This balance between privacy and security. From the days that letters were being steamed open, this has always been a balance that we’ve had to try and strike in the right way in democratic societies.

I want to go back to a point you raised earlier about one of the biggest debates that I think we’re having around social networks, which is who deserves to have a platform. And when somebody is removed from the platform, who gets to decide? You write, “Should a private company be intervening to shape the ideas that flow across its systems, above and beyond the prevention of serious harms?” 

To my mind, the answer is clearly yes. I think that companies have that right. I don’t even know that companies would work if they were not able to go beyond simply removing illegal content. So I wanted to press you a bit on that point. The right to make that call, is that something that Facebook would really want to give up?

No. So I agree with you, of course, it’s part and parcel, isn’t it, of running a service that people voluntarily use. We are entitled to say, “You use our services, but if you do, there are certain rules of the game. And if you transgress them, if you do that most egregiously, you won’t be welcome in using them again.” 

I think, candidly, where it gets, just as a matter of first principle, a lot more tricky, is when we as a private company make those adjudications about people who are democratically elected leaders or leaders of governments. I mean, obviously, the indefinite suspension of Donald Trump’s account is the most obvious example of that, but just over the last few days, we’ve suspended [Venezuelan] President [Nicolas] Maduro’s use of his page for 30 days.

And that is really tricky. In response to the Trump suspension, when you have everyone from the president of Mexico to Bernie Sanders saying that they’re worried about the precedent that sets, I kind of think it’s reasonable for all of us to acknowledge that we’re now entering into pretty tricky territory.  

Because a lot of people said, when we took that action against Trump, “Oh, you should’ve done this years ago,” as if it were just an easy, straightforward decision to suspend the elected president of the United States, the most powerful democracy on the surface of the planet. It is not easy. That is not an easy decision. And I would be very, very worried if companies — and I say this as an ex-politician as much as now an executive at Facebook — I would be exceptionally worried if private companies in Silicon Valley just took a trigger-happy approach to that kind of thing, because that seems to me to be really blurring the boundaries between democratic principles and private prerogatives. And so that’s, of course, one of the reasons why we have referred that case to the Oversight Board.

I think they’re busy looking at it now. In fact, I think they have to, under the rules of the Oversight Board. They have to come out with pronouncements on whether they think we took the right decision and how we should move forward. Because we’ve asked them for wider guidance beyond the Trump case, on how we should treat these issues and what happens when you have political leaders and we want to take action against them. I hope they’re going to pronounce fairly shortly, but we’re doing that because we realize that this is a power we can wield, but it’s a power which must and should and can only be wielded in a way that is proportionate, careful, thoughtful, and accountable.

Yeah. And I’ve been somebody who has been supportive of the Oversight Board, because the status quo before the Oversight Board was that, ultimately, Mark Zuckerberg would just make every hard call, which is not a role that I think that he wanted or that many people thought was ideal. 

At the same time, there is now this other board, which is not super accountable to anybody, that is going to be making this very consequential decision. I actually think we’re going to see, in the next few days they’re probably going to issue a ruling on Trump, is what I have been led to understand.

There you go, they’re so independent, Casey, that you knew that before I did. I’m just going to scribble a quick note to my team.

Yeah. So look out for that. But I just wonder, so this board has been stocked with people who are big speech advocates. They have overturned most of the decisions that you’ve referred to them so far. There’s a lot of speculation that they are going to overturn this one. What will it mean for Facebook if the board restores Donald Trump?

Well, technically and narrowly, if they say, “Facebook, thou shalt restore Donald Trump,” then that is what we will do, because we have to, because we’re duty-bound to do so. We were very, very clear from the outset: the Oversight Board is not only independent, but its content-specific adjudications are binding on us. Beyond that, as I said, I know, of course, that the decision they make about Trump will, quite rightly, grab all the headlines.

But I actually hope that their wider guidance on what we should do going forward in analogous cases will be as significant, if not more so, because we’re trying to grapple with where we should intrude in what are otherwise quintessentially political choices. And we’re anxious for their guidance. That guidance will take the form of recommendations, and we will then cogitate on it ourselves and provide our own response over a period of time.

But in terms of the specific up or down decision, that is something, assuming that they’re going to be clear one way or another, where we will have to abide by their decision. And it doesn’t mean, of course, that our rules — I mean, our rules in terms of violations, strikes, and all the rest of it, they will remain in place. But I mean, our hands quite deliberately and explicitly are tied, as far as specific individual content decisions that they make are concerned.

Let me ask you one final question. You write about how Facebook is rethinking how it can use ranking changes and some of the other tools that we’ve discussed to ensure that it has a positive impact on society. A lot of folks I know, I think, have given up on the idea that Facebook can have a positive impact on society.

What I want to know is, what things can Facebook do that would make you believe, that would make Facebook believe, or that would make the world believe that it’s having a positive impact on society? How would you measure that? How would you know it if you saw it?

Well, look, I don’t want to sound glib, but I kind of think we see it all the time. I mean, why would 3 billion people in the world freely choose to use these services if it was bad for them? I mean, of course, there are bad actors who try to use any form of communication and have done so from time immemorial. From radio to television, to letters, to emails, and we need to go after them, we need to kick them off where we can, we need to make our systems better, and so on. 

But it’s the point I made earlier, I just do hope, even as we grapple with the minority of people trying to propagate bad content, we remember that the vast, vast, vast majority of people use Facebook for positive, sometimes playful, innocent, meaningful, joyful reasons, or downright useful reasons.

Why are millions and millions of small businesses using Facebook to reach customers in a way that they never could before? Isn’t it remarkable? I think it’s a great democratizing thing, economically speaking, that small businesses now have access to tools on Facebook to reach their customers in a way that only big corporations with big fat marketing budgets used to in the past. 

I think it’s wonderful that whether you’re a student in Guatemala, or a sheep shearer in the outback in Australia, or a fancy, well-paid lawyer in New York, you can use Instagram and Facebook on exactly the same basis. I think that’s an extraordinarily equitable thing. So for me, and I realize this might be an unfashionable thing to say to some, it is to me self-evident that Facebook is being used in really positive, creative, and enriching ways.

What I equally, however, acknowledge is that there are lots of legitimate concerns, and we need to do more to lift the veil on how the system works and to give people more controls. What I set out in this piece is just a start on a renewed journey, where Facebook really wants to put people more in charge so they can, in effect, create and curate their own News Feed.

And we will make a number of announcements in the coming months: greater transparency on how we use survey evidence, how we demote content, what signals we use, and what more controls over content we can give to people. And, look, the pendulum has swung so dramatically, from the tech utopianism and tech euphoria of the past to what is now, in some cases at least, an almost hysterical tech pessimism. Neither extreme, I think, is right. It’s always somewhere in the middle.

I hope that that pendulum can come to rest, and that one of the ingredients in creating a more sustainable way forward is that users feel that they’re more in charge, and that’s the spirit in which I’ve written this piece.
