A lot of this attention has focused on Twitter’s moderation policies, particularly whether he’d let people like former President Donald Trump back on the platform. But some of Twitter’s most consistent contributions to online speech don’t show up on the platform itself; they take place in ongoing court battles over privacy, anonymity, and liability. And just before Musk made his latest offer, Twitter dramatically raised the stakes in one of those battles.
I really don’t know what Elon plans to do with Twitter’s on-site moderation policy. On the one hand, he hates banning people; he’s suggested he would bring back Trump and take a lax hand on things like disinformation. On the other hand, he wants to make Twitter profitable and also maybe more like WeChat, which implies sweeping a lot of objectionable (or just plain irritating) content out of sight. Musk’s released text messages include a proposal from Axel Springer CEO Mathias Döpfner that reads simply, “Step 1.) Solve Free Speech.” For all Musk’s lofty talk about a modern town square, his most common gripes are spammers and bots, and cracking down on them would mean less, not more, speech on Twitter.
For years, though, Twitter has been one of the internet companies that’s most consistently argued against legal crackdowns that would make people less likely to speak up online. It’s taken on a role that Musk could easily vacate, particularly with his businesses’ many government entanglements. And that new risk comes just as Twitter prepares for a Supreme Court showdown that could impact people across the entire web.
Jack Dorsey defended a foundational internet law while his Google and Facebook counterparts equivocated
As Mike Masnick of Techdirt noted after Musk’s initial acquisition bid, Twitter has repeatedly fought to avoid turning over users’ personal information to law enforcement, even as other web platforms have folded. In 2020, former Twitter CEO Jack Dorsey was the only “Big Tech” leader to make a full-throated defense of CDA Section 230 in front of Congress, warning that the law was a bedrock of online communication — while Alphabet’s Sundar Pichai timidly urged lawmakers to be careful and Facebook (now Meta) CEO Mark Zuckerberg threw it under the bus completely.
Twitter obviously doesn’t resist every government demand. It complies with laws against things like hate speech in European countries, blocking Nazis and other far-right accounts within those markets. It was doing so even in the early ’10s, during its attempt to be the “free speech wing of the free speech party.” More recently, it’s consulted with US health agencies about removing COVID-19 disinformation, although the actual content removal was voluntary. And its defenses are, to some degree, self-interested — most companies don’t want to be regulated or give up data!
But right now, no matter what its motivations, Twitter is embroiled in a particularly consequential legal dispute. On Monday, the Supreme Court took up a pair of cases that will weigh sites’ liability for hosting illegal content. One is a long-running case against Google, alleging that its YouTube recommendation algorithms aren’t covered by Section 230. The other is a suit against Twitter, claiming that it violated the Anti-Terrorism Act by failing to remove enough extremist content from the site. (Notably, while Google will be defending itself against an appeal, Twitter petitioned the Supreme Court proactively in case Google lost.)
The case Twitter is fighting will affect more than tech giants
These cases don’t just affect tech giants. Google’s case could transform the way we think about online legal protections. While it’s been framed around the company allegedly pushing terrorist propaganda with a specific kind of recommendation system, the court could decide that “algorithms” encompass more general search and sorting systems — and a ruling would likely apply to every app and website, regardless of size.
The Twitter decision is narrower, dealing specifically with anti-terrorism law. But the Supreme Court will decide just how aggressively services must work to purge illegal content — whether on Twitter or anywhere else. In the words of Twitter’s lawyers, are sites “liable for aiding and abetting an act of international terrorism because they provided generic, widely available services to billions of users who allegedly included some supporters of ISIS?” Those services are social networks in Twitter’s case, but the answer could plausibly apply to nearly any tool that people put online.
Musk has declared himself mostly unconcerned with legal censorship, saying that democratic governments should choose what they consider lawful. But the interpretation of those laws is up to courts. If he decides Twitter’s case isn’t worth fighting there, it could lead to a crackdown on legal material, too, since companies would have an incentive to remove anything that trips too many red flags. (On a purely mercenary level, this might work out poorly for some of Musk’s right-leaning fans — there’s a growing push to class European far-right groups as terrorists, and that could spill over into a crackdown on anything that smacks of supporting their cause.) And this almost certainly won’t be the last dispute; among other things, Texas and Florida just set up a huge fight over banning social media moderation.
It’s the kind of fight that someone invested in online speech might relish — and in the coming months, we might find out if Musk actually fits that bill.