The Supreme Court hears arguments for two cases that could reshape the future of the internet

The Supreme Court is hearing oral arguments in Gonzalez v. Google, a potentially landmark reinterpretation of Section 230 of the Communications Decency Act, and Twitter v. Taamneh, a case about aiding-and-abetting liability under federal anti-terrorism law. The cases could shape the future of the internet, as plaintiffs seek to hold Google and other companies liable for recommending terrorist content on platforms like YouTube. But a major narrowing of Section 230 could affect everyone from tech giants to Wikipedia editors and Reddit mods.

  • Supreme Court rules against reexamining Section 230

    Illustration by Alex Castro / The Verge

    The Supreme Court has declined to consider reinterpreting foundational internet law Section 230, saying it wasn’t necessary for deciding the terrorism-related case Gonzalez v. Google. The ruling came alongside a separate but related ruling in Twitter v. Taamneh, where the court concluded that Twitter had not aided and abetted terrorism.

    In an unsigned opinion issued today, the court said the underlying complaints in Gonzalez were weak, regardless of Section 230’s applicability. The case involved the family of a woman killed in a terrorist attack suing Google, which the family claimed had violated the law by recommending terrorist content on YouTube. They sought to hold Google liable under anti-terrorism laws.

  • Guns, banks, and 280 characters.

    During a brief rebuttal, Twitter tried to defuse some of the very colorful hypotheticals raised throughout the day (like Osama Bin Laden walking into a bank). Waxman didn’t get to complete his thought before the court adjourned, but he seemed to be suggesting the absurdity of comparing Twitter’s functions to giving direct material assistance to a known terrorist who walks through your door.

    “There are 545 pages in this complaint and there are 4 that mention recommendations,” Waxman said.

  • An old Twitter statement is coming back to haunt it.

    In 2014, Mother Jones reported that Twitter was deliberately avoiding taking aim at ISIS, quoting a Twitter official saying that “one man’s terrorist is another man’s freedom fighter.” Taamneh attorney Eric Schnapper brought the quote back up today, this time to argue that Twitter should in fact be found liable for supporting terrorists.

  • You wouldn’t sell Osama Bin Laden a hospital.

    Twitter has been making the case that aiding and abetting requires helping a specific terrorist attack, but the justices are referencing cases suggesting that aiding the enterprise at all can create liability, even if the assistance isn’t used for a particular attack.

  • Justice Kavanaugh finally introduces the First Amendment to the conversation.

    The subject of speech has been notably absent from today’s arguments. Kavanaugh asks if interviewing a terrorist leader on TV would provide the same material support as a Twitter recommendation. “I think the First Amendment would solve that problem,” says Schnapper.

  • Justice Jackson makes today’s first mention of “recommendations.”

    We’re now in the final stretch. Eric Schnapper, who also argued for the plaintiffs yesterday in Gonzalez v. Google, has focused heavily on the potential of recommendation functions to cause harm. In today’s case, plaintiffs allege Twitter’s recommendations helped ISIS recruit more fighters generally. Schnapper concedes this has nothing to do with any specific attack, which is exactly the standard Twitter claims plaintiffs would have to meet.

  • Justice Gorsuch: “in a very abstract way in the world, everything is connected to everything else.”

    We’re deep in the dictionary definitions of things with Justice Gorsuch, who is exploring personal identity. I will admit I was not expecting this line of inquiry, but if you’ll excuse me, I’m off to rewatch I Heart Huckabees.

  • The court wonders if Twitter is as valuable as the services provided by a bank.

    “Let’s say a known terrorist walks into a bank,” says Justice Kagan, before getting into the weeds of a “knowing your customer” hypothetical.

    The government expresses skepticism that the ability to tweet is as valuable as having a place to store money. As a former power user of Twitter, I would have to agree.

    (Incidentally, Elon Musk has expressed plans to turn Twitter into an actual bank, so this distinction might not work in future lawsuits.)

  • Justice Thomas digs up PageNet and the era of ubiquitous pagers.

    Edwin Kneedler, arguing on behalf of the government, faces a theoretical question about pager companies providing services to terrorists. It’s not clear how this act of technological paleontology from Thomas maps onto Twitter, but it’s an interesting comparison: pagers could be used directly to plan an attack, yet pager companies probably knew far less about their customers’ beliefs and actions than a modern social platform does.

  • Metaphor update: we’ve moved onto stolen jewelry smelters and their bookkeepers.

    That concludes Twitter’s arguments. Now it’s the US government’s turn.

  • Justice Barrett presses Twitter to acknowledge the nature of ISIS.

    Barrett suggests to Waxman that it’s obvious what ISIS is about and what it will do in the future: “If you know ISIS is using [Twitter], you know ISIS is going to be doing bad things… what work does turning your focus on the specific act do? Aiding ISIS is aiding the commission of specific acts in the future.”

    Building on previous questioning from Justice Kagan, it seems like the court is trying to get Twitter to draw a line about how willfully dumb it can play about certain accounts and users on the platform.

  • Justice Gorsuch: “are you sure you want to do that?”

    Receiving laughs from the gallery, Gorsuch presses Waxman on whether Twitter has read the law incorrectly. “I can’t help but wonder if some of the struggle you’ve had this morning … comes from your reading of the text.” Waxman has been arguing that they have to support an act, not just the person behind it — “that seems a pretty abstract way to read the statute.”

    Waxman tries to explain, but Gorsuch isn’t impressed: “Maybe we oughta just stop.”

  • The court isn’t letting Twitter “wipe its hands” over terrorist content.

    Justice Kagan alludes to the Elon Musk school of Twitter: is Twitter liable if its policy is “let a thousand flowers bloom?” Waxman still says no. “If they said, we don’t want our platforms to be used to support terrorist groups or terrorist acts, but they don’t do anything to enforce it,” he claims, they’re not aiding and abetting.

    Kagan seems extremely unconvinced. “You’re helping by providing your service to those people with the explicit knowledge that those people are using it to advance terrorism.” 

  • Justice Sotomayor helpfully explains Twitter’s own position to Twitter.


    “There is a focus on how much your platform helped ISIS, and less on how you actually helped them, and there is a difference between the two things. … [Your argument is that] in a neutral business setting, using something that is otherwise not criminal, a platform, to communicate with people, and you’re doing it not by as in the bank situation or pharmaceutical situation, to help this particular person to commit a crime, but in a general business situation that others are coming to you and you can’t find them ahead of time, that that doesn't constitute substantiality.”

  • Justice Alito: “If this was a criminal case I think it’s clear it would not be aiding and abetting liability.”

    But that doesn’t get Twitter off the hook here, as the court is questioning Waxman about whether the company’s conduct meets the Halberstam standard for liability. Here are the basics of that standard:

    (1) “the party whom the defendant aids must perform a wrongful act that causes an injury”; (2) “the defendant must be generally aware of his role as part of an overall illegal or tortious activity at the time that he provides the assistance”; and (3) “the defendant must knowingly and substantially assist the principal violation.”

  • You wouldn’t download a sheep.

    Twitter’s counsel, Seth Waxman, is up first in oral arguments. If his first performance is a preview of what’s to come, we’re going to hear a lot of weird metaphors about criminal activity. Waxman is clearly angling toward the idea that Twitter had to specifically know what criminals on the platform were going to do to be liable for their actions.

    Justice Thomas and Waxman opened with dueling hypotheticals about respectively giving your friend a gun and breaking a padlock to steal your neighbor’s sheep. Sure.

  • Side note: the court jokes about being bad at technology, but its livestream quality isn’t funny.

    It’s entirely fair that the Supreme Court doesn’t want to broadcast video from its hearing room, but the American people deserve a serious upgrade to the court’s audio livestream. Unless you’re an expert on the court, it’s often difficult to tell who is speaking at any given time, because there’s no live transcript or indication of the current speaker.

  • Twitter was once “the free speech wing of the free speech party.” Now Elon Musk is in charge.

    Twitter’s moderation practices are a subject of today’s case before the Supreme Court, and things have only gotten sketchier and more chaotic since a new Chief Twit took over the company.

    Musk may have inherited this legal mess when he bought Twitter for $44 billion, but now he literally owns it. It will be interesting to see if the company’s new reputation under Musk will give the plaintiffs an edge in arguments.

  • Join us again and listen to the Supreme Court consider the future of the internet.

    The justices will reconvene at 10AM ET, and oral arguments will begin shortly after in Twitter v. Taamneh. If you need to catch up on the action in Gonzalez v. Google, check out yesterday’s coverage from Adi.

    You can listen along live to today’s arguments as we post updates throughout the hearing.

  • “You can’t call it neutral once the defendant knows its algorithm is doing it.”

    That’s the last word from the plaintiffs in rebuttal, and it’s a statement worth chewing on. There’s a lot to think about here, but the court is now adjourned until tomorrow’s arguments in Twitter v. Taamneh. Stay tuned for more coverage, and thanks for joining us!

  • “How do you operate a website if you don’t have a homepage?”

    We agree, Google. Check us out on the web:

  • Google: it’s not helpful when states make their own decisions that affect us.

    This might be too obvious to point out, but national and international internet organizations often say they experience substantial hardship when laws are fragmented. That’s why California and the EU have been so instrumental in leading the way on internet regulation; it can be easier for platform giants to simply harmonize the rules everywhere based on the strictest regulation in one place, rather than forking their platform and policies to comply with a bunch of localities.

    Neither Google nor any other big platform wants to be subject to an even greater patchwork of laws, which could be one outcome of a weakened Section 230.

  • Google: what do you want on the internet? The Truman Show or a horror show?

    Google has come out swinging, pushing back fiercely against the court for “incorrect” premises in its questioning. One colorful moment that just happened: Blatt offered a hypothetical about what the internet looks like if Section 230 is overturned.

    According to Google, it’s a land of extremes. We’ll either live in The Truman Show, where everyone moderates everything into oblivion, or a horror show, where nobody moderates anything. These examples aren’t hyperbole; the tradeoff between them is exactly the question at the heart of Section 230 protections.

  • The court fairly asks Google: whose recommendation is a “recommendation?”

    Google’s attorney Lisa S. Blatt is now up in the final hour of arguments, and she’s already getting some pointed questions — off the bat, who is really responsible for one of YouTube’s recommendations? The court suggests it’s not the user, who merely uploaded content and is not responsible for how the overall system works.

  • Elon Musk has weirdly created a useful endpoint in an algorithmic spectrum.

    The DOJ pointed out during arguments that when a computer is doing the recommending, there is “no live human being” making a choice, at least on an individual basis. That holds even when large teams of people shape the system through distributed, collective decisions.

    In the case of Twitter, however, we now have an example of what happens when one man explicitly turns the knobs in a certain direction.