In a wide-ranging interview on The Vergecast this week, Microsoft president and chief legal officer Brad Smith expanded on why the company nearly cut off Azure hosting for Gab.ai, the “free-speech” absolutist platform that has become an alt-right favorite.
Earlier this month, Microsoft sent Gab a notice threatening to cut off the company’s Azure cloud service if it did not remove two anti-Semitic hate speech posts within 48 hours. The notice, which Gab said would cause the site to “go down for weeks/months,” sent its operators into a frenzy. But Smith said Microsoft headquarters in Redmond, Washington, was asleep when the notice went out.
“A relatively straightforward judgment call”
“Literally in that case, in all candor, somebody in our Azure support area in India had received an email from somebody who is in the consulting business who had heard from another company, expressing concerns about some content on Gab.ai,” Smith said.
“While we were sleeping on the West Coast of the United States, an employee in India had sort of turned out an email that went to Gab that said, ‘We’ve spotted some content, and under our policy, you have to address it in 48 hours or you risk being cut off.’”
Smith said executives reviewed the decision after being contacted by journalists, including The Verge. But ultimately, he said, there was little to review: it was “a relatively straightforward judgment call because the content was so extreme.”
The posts, which advocated for genocidal violence against Jewish people, were removed by the poster before Microsoft’s takedown deadline. “Whoever made that call while we were sleeping made the right call,” Smith said.
“Our goal is to develop a set of principles”
While the high-profile action against Gab generated a news cycle, Smith said his review of the moderation decision was an exception to the rule. “Our goal is to develop a set of principles. And so at a high level, we work to understand these issues, develop a principled approach, stress-test the principles somewhat, and then empower people to apply them,” he said.
The exchange between Brad Smith and Nilay Patel follows:
Nilay Patel: So, earlier this month, Microsoft told Gab, which is sort of an alt-right Twitter clone, that they would be dropped from Azure if they didn’t take action on anti-Semitic posts within two days. Gab is still on Azure, yeah?
Brad Smith: Yes, it is.
Patel: And I think that was because a user deleted those posts, which ultimately amounts to the same thing. Azure is a huge service that you run, one of the biggest cloud computing providers out there. What prompted the warning?
Smith: Well, it’s also a reflection of how things arise as issues in unexpected ways. Literally in that case, in all candor, somebody in our Azure support area in India had received an email from somebody who is in the consulting business who had heard from another company, expressing concerns about some content on Gab.ai. While we were sleeping on the West Coast of the United States, an employee in India had sort of turned out an email that went to Gab that said, “We’ve spotted some content, and under our policy, you have to address it in 48 hours or you risk being cut off.” We came to work the next morning in Redmond, Washington, blissfully ignorant of all of this, and then we heard not from Gab.ai, but from The Verge and The Washington Post. And they said, “We’ve heard from Gab, and we have this in front of us, and we’re going to run a story in two hours. Can you tell us what your decision is?”
Patel: We do do that. I’m aware it’s quite annoying. But we do it all the time.
Smith: And so I was very happy that I had come back from vacation the day before because I happened to be in the office, and somebody walked down the hall and said, “Gee, we’ve gotta make a judgment call here, and now it’s an hour and 25 minutes.” You know, you do what you do in life. It’s like, “Okay, let’s look at this. What is this service? What is this content? What is it that was raised and flagged and what was said to Gab?” And in that instance, it was a relatively straightforward judgment call because the content was so extreme.
It was advocating genocide through violence against all people who are Jewish. And, you know, we looked at it and said, “You know, this isn’t the kind of speech that is protected in the United States. It’s not the kind of speech that’s protected under international human rights standards. It’s not the kind of speech that is in conformity with Azure’s own policies, and therefore we’re going to go to Gab and say, ‘You know, that’s right. Whoever made that call while we were sleeping made the right call. And that specific content really does need to be taken down. And if you don’t want to take it down, that’s your call, but we won’t let you continue to use Azure.’” And we did take pains to say we’ll give you, in all likelihood, more than 48 hours to move to another service, because we did appreciate that they probably wouldn’t be able to move to another service in 48 hours, but it called the question. And, as you know, and from what I understand, Gab went back to the individual that had posted that content, and that individual voluntarily, if you will, took down that content.
Patel: How often do calls like that escalate to you? You obviously have a huge support network across the world. Does that happen to you all the time, or does other Azure stuff come down more frequently and it just doesn’t hit your desk?
Smith: It’s the exception that somebody like me gets involved in these things. Our goal is to develop a set of principles. And so at a high level, we work to understand these issues, develop a principled approach, stress-test the principles somewhat, and then empower people to apply them. And, you know, if the principles break down or if there’s a new question, it’s likely they come back up. But mostly, you’ve got to run a global company at scale, and you don’t do that by having executives make these individual calls every day.