The Supreme Court hears arguments for two cases that could reshape the future of the internet

The Supreme Court is hearing oral arguments for Gonzalez v. Google, a potentially landmark reinterpretation of Section 230 of the Communications Decency Act, and Twitter v. Taamneh, a case about aiding-and-abetting liability under anti-terrorism law. The cases could shape the future of the internet, as plaintiffs seek to hold Google and other companies liable for recommending terrorist content on platforms like YouTube. But a major narrowing of Section 230 could affect everyone from tech giants to Wikipedia editors and Reddit mods.

  • Guns, banks, and 280 characters.

    During a brief rebuttal, Twitter tried to defuse some of the day’s more colorful hypotheticals (like Osama Bin Laden walking into a bank). Waxman didn’t get to complete his thought before the court adjourned, but he seemed to be suggesting the absurdity of comparing Twitter’s functions to giving direct material assistance to a known terrorist who walks through your door.

    “There are 545 pages in this complaint and there are 4 that mention recommendations,” Waxman said.



  • An old Twitter statement is coming back to haunt it.

    In 2014, Mother Jones wrote that Twitter was deliberately avoiding taking aim at ISIS, quoting a Twitter official saying that “one man’s terrorist is another man’s freedom fighter.” Taamneh attorney Eric Schnapper brought the quote back up today — this time to argue that Twitter should in fact be found liable for supporting terrorists.



  • You wouldn’t sell Osama Bin Laden a hospital.

    Twitter has been making the case that aiding and abetting requires helping a specific terrorist attack, but justices are referencing cases that suggest aiding the enterprise at all is indefensible — even if it’s not specifically used for an attack.



  • Justice Kavanaugh finally introduces the First Amendment to the conversation.

    The subject of speech has been notably absent from today’s arguments. Kavanaugh asks if interviewing a terrorist leader on TV would provide the same material support as a Twitter recommendation. “I think the First Amendment would solve that problem,” says Schnapper.



  • Justice Jackson makes today’s first mention of “recommendations.”

    We’re now in the final stretch. Eric Schnapper, who also presented arguments for the plaintiffs yesterday in Gonzalez v. Google, focused a lot on the potential of recommendation functions to cause harm. In today’s case, plaintiffs allege Twitter recommendations helped ISIS generally recruit more fighters. Schnapper concedes this doesn’t have anything to do with a specific attack — a standard Twitter is claiming plaintiffs would have to meet.



  • Justice Gorsuch: “in a very abstract way in the world, everything is connected to everything else.”

    We’re deep in the dictionary definitions of things with Justice Gorsuch, who is exploring personal identity. I will admit I was not expecting this line of inquiry, but if you’ll excuse me, I’m off to rewatch I Heart Huckabees.



  • The court wonders if Twitter is as valuable as the services provided by a bank.

    “Let’s say a known terrorist walks into a bank,” says Justice Kagan, before getting into the weeds of a “know your customer” hypothetical.

    The government expresses skepticism that the ability to tweet is as valuable as having a place to store money. As a former power user of Twitter, I would have to agree.

    (Incidentally, Elon Musk has expressed plans to turn Twitter into an actual bank, so this distinction might not work in future lawsuits.)



  • Justice Thomas digs up PageNet and the era of ubiquitous pagers.

    Edwin Kneedler, on behalf of the government, faces a theoretical question about pager companies providing services to terrorists. It’s not clear how this bit of technological paleontology from Thomas maps onto Twitter, but it’s an interesting comparison: pagers could be used directly to plan an attack, yet pager companies probably knew far less about their customers’ beliefs and actions than a modern social platform does.



  • Metaphor update: we’ve moved on to stolen jewelry smelters and their bookkeepers.

    That concludes Twitter’s arguments. Now it’s the US government’s turn.



  • Justice Barrett presses Twitter to acknowledge the nature of ISIS.

    Barrett suggests to Waxman that it’s obvious what ISIS is about and what it will do in the future: “If you know ISIS is using [Twitter], you know ISIS is going to be doing bad things… what work does turning your focus on the specific act do? Aiding ISIS is aiding the commission of specific acts in the future.”

    Building on previous questioning from Justice Kagan, it seems like the court is trying to get Twitter to draw a line about how willfully dumb it can play about certain accounts and users on the platform.



  • Justice Gorsuch: “are you sure you want to do that?”

    Receiving laughs from the gallery, Gorsuch presses Waxman on whether Twitter has read the law incorrectly. “I can’t help but wonder if some of the struggle you’ve had this morning … comes from your reading of the text.” Waxman has been arguing that they have to support an act, not just the person behind it — “that seems a pretty abstract way to read the statute.”

    Waxman tries to explain, but Gorsuch isn’t impressed: “Maybe we oughta just stop.”



  • The court isn’t letting Twitter “wipe its hands” over terrorist content.

    Justice Kagan alludes to the Elon Musk school of Twitter: is Twitter liable if its policy is “let a thousand flowers bloom?” Waxman still says no. “If they said, we don’t want our platforms to be used to support terrorist groups or terrorist acts, but they don’t do anything to enforce it,” he claims, they’re not aiding and abetting.

    Kagan seems extremely unconvinced. “You’re helping by providing your service to those people with the explicit knowledge that those people are using it to advance terrorism.” 



  • Justice Sotomayor helpfully explains Twitter’s own position to Twitter.

    Sotomayor:

    “There is a focus on how much your platform helped ISIS, and less on how you actually helped them, and there is a difference between the two things. … [Your argument is that] in a neutral business setting, using something that is otherwise not criminal, a platform, to communicate with people, and you’re doing it not by as in the bank situation or pharmaceutical situation, to help this particular person to commit a crime, but in a general business situation that others are coming to you and you can’t find them ahead of time, that that doesn't constitute substantiality.”



  • Justice Alito: “If this was a criminal case I think it’s clear it would not be aiding and abetting liability.”

    But that doesn’t get Twitter off the hook here, as the court is questioning Waxman about whether the company’s conduct meets the Halberstam standard for liability. Here are the basics of that standard:

    (1) “the party whom the defendant aids must perform a wrongful act that causes an injury”; (2) “the defendant must be generally aware of his role as part of an overall illegal or tortious activity at the time that he provides the assistance”; and (3) “the defendant must knowingly and substantially assist the principal violation.”



  • You wouldn’t download a sheep.

    Twitter’s counsel, Seth Waxman, is up first in oral arguments. If his first performance is a preview of what’s to come, we’re going to hear a lot of weird metaphors about criminal activity. Waxman is clearly angling toward the idea that Twitter had to specifically know what criminals on the platform were going to do to be liable for their actions.

    Justice Thomas and Waxman opened with dueling hypotheticals about respectively giving your friend a gun and breaking a padlock to steal your neighbor’s sheep. Sure.



  • Side note: the court jokes about being bad at technology, but its livestream quality isn’t funny.

    It’s entirely fair that the Supreme Court doesn’t want to broadcast video from its hearing room, but the American people deserve a serious upgrade to its audio livestream capabilities. Unless you’re an expert on the court it’s often difficult to tell who is speaking at any given time, because there’s no live transcript or indication of the current speaker.



  • Twitter was once “the free speech wing of the free speech party.” Now Elon Musk is in charge.

    Twitter’s moderation practices are a subject of today’s case before the Supreme Court, and things have only gotten sketchier and more chaotic since a new Chief Twit took over the company.

    Musk may have inherited this legal mess when he bought Twitter for $44 billion, but now he literally owns it. It will be interesting to see if the company’s new reputation under Musk will give the plaintiffs an edge in arguments.



  • Join us again and listen to the Supreme Court consider the future of the internet.

    The justices will reconvene at 10AM ET and oral arguments will begin shortly after in Twitter v. Taamneh. If you need to catch up on the action in Gonzalez v. Google, check out yesterday’s coverage from Adi.

    You can listen along live to today’s arguments here as we post updates throughout the hearing:



  • “You can’t call it neutral once the defendant knows its algorithm is doing it.”

    That’s the last word from the plaintiffs in rebuttal, and it’s a statement worth chewing on. There’s a lot to think about here, but the court is now adjourned until tomorrow’s arguments in Twitter v. Taamneh. Stay tuned for more coverage, and thanks for joining us!



  • “How do you operate a website if you don’t have a homepage?”

    We agree, Google. Check us out on the web: www.theverge.com.



  • Google: it’s not helpful when states make their own decisions that affect us.

    This might be too obvious to point out, but companies operating across national and international borders often say they experience substantial hardship when laws are fragmented. That’s why California and the EU have been so instrumental in leading the way on internet regulation; it can be easier for platform giants to simply harmonize their rules everywhere based on the strictest regulation in one place, rather than forking their platforms and policies to comply with a bunch of localities.

    Google and every other big platform do not want to be subject to an even greater patchwork of laws, which could be an outcome of 230 being weakened.



  • Google: what do you want on the internet? The Truman Show or a horror show?

    Google has come out swinging, pushing back fiercely against the court for “incorrect” premises in its questioning. One colorful moment that just happened: Blatt offered a hypothetical about what happens if 230 gets overturned.

    According to Google, it’s a land of extremes. We’ll either live in The Truman Show, where everyone moderates everything into oblivion, or a horror show, where nobody moderates anything. Those extremes aren’t mere hyperbole; they’re exactly the question at the heart of 230 protections.



  • The court fairly asks Google: whose recommendation is a “recommendation?”

    Google’s attorney Lisa S. Blatt is now up in the final hour of arguments, and she’s already getting some pointed questions — off the bat, who is really responsible for one of YouTube’s recommendations? The court suggests it’s not the user, who merely uploaded content and is not responsible for how the overall system works.



  • Elon Musk has weirdly created a useful endpoint in an algorithmic spectrum.

    DOJ pointed out during arguments that when a computer is doing things there is “no live human being” making a choice, at least on an individual basis. And that’s true when large teams of people are making distributed and collective decisions.

    In the case of Twitter, however, we now have an example of what happens when one man explicitly turns the knobs in a certain direction.



  • I really want to unpack this Venn diagram.

    The DOJ is threading a needle here between respecting the expansive possibilities of Section 230 on one side and fields like antitrust law on the other. When speaking about algorithmic recommendations, DOJ says “I don’t know if we would call it the platform’s own speech but the platform’s own conduct.” I’m very curious to hear more about the overlap of “speech” and “conduct” here since a distinction has been drawn!



  • The court keeps talking about “neutral tools,” which is a problem in itself.

    We’ve heard this a few times already: the court referring to an algorithm operating on “neutral terms”. Justice Gorsuch just poked a big hole in that idea by noting “some [algorithms] might even favor one point of view over another,” for example, by privileging revenue motives.

    Indeed, there is no such thing as a “neutral” algorithm. They are all built by human beings with various and competing motivations and intents.

    This is one of the more exciting Supreme Court oral argument sessions on tech in a while!



  • The court is looking for a line to draw.

    And yeah, that’s the whole point of this case: what does Section 230 really protect? Does it have limits? What are the limits? Still, it’s helpful that Justice Sotomayor said it out loud: “let’s assume we’re looking for a line, because it’s clear from our questions that we are.”

    She also added that the court is “uncomfortable” with a line that says “merely recommending something without adornment” could constitute defamation.



  • What does it really mean to “post” something?

    The court is now getting into the weeds of what it means to “post” something. DOJ is doing a decent job of unpacking this, but it’s still more nuanced than the conversation suggests so far. The question is really: if someone posts something to YouTube, and YouTube knows what it is explicitly, and refuses to take it down, is YouTube also “posting” it?

    I’m calling this The Poster’s Dilemma.



  • The court imagines a litigation dystopia.

    Justice Kavanaugh, questioning Malcolm Stewart from the DOJ:

    I don’t know how many employment decisions are made in the country every day, but I know that hundreds of millions, billions responses of inquiries on the internet are made every day. … under your view, every one of those would be the possibility of a lawsuit.



  • I’m feeling cautiously optimistic about today’s lines of questioning.

    Supreme Court justices are notoriously clever about the questions they ask, and they’ll often ask questions during oral arguments that belie their true feelings about the subject matter. But, so far today, each member of the court who has asked questions has seemed pretty skeptical about the idea that Section 230 should be obliterated because of YouTube’s thumbnails.

    We’ll see what happens, of course, but today’s arguments have been exceptional in the sense that the government seems to be employing more wisdom than we usually see when interrogating technology. (Adi says she’s reserving judgment until she sees how weird their questions to Google are.)



  • Justice Kavanaugh demands answers on the economy.

    Kavanaugh notes that the court received a lot of concern in amicus curiae briefs that meddling with Section 230 would have devastating effects on the economy — something he says the court needs to take quite seriously. Plaintiffs didn’t have a great answer for this, vaguely noting that lots of things would still be protected if they get their way.
    Plaintiffs:

    Most recommendations just aren’t actionable. There is no cause of action for telling someone to look at a book that has something defamatory in it.



  • Justice Gorsuch opens the Pandora’s Box of artificial intelligence.

    The Supreme Court is likely to face battles over AI search in the future, and today we’ve gotten our first signal that it’s already on the court’s radar. Justice Gorsuch noted that AI is already capable of creating new things based on the wealth of content already available on the internet.



  • “We’re a court. We really don’t know about these things. These are not the nine greatest experts on the internet.”

    I love this honesty from Justice Kagan, who is expressing extreme skepticism on the suggestion that the court ought to strip protection from companies operating on the internet.

    “Isn’t that something for Congress, not the court?”

    Correction: The line in question was from Justice Kagan, not Justice Sotomayor, as originally attributed.



  • Dating apps: too abstract for the plaintiffs.

    Justice Sotomayor asks:

    If you write an algorithm for someone that in its structure ensures the discrimination between people, a dating app, for example. … Someone says “I’m going to create an algorithm that inherently discriminates against people.” You would say that internet provider is discriminating, correct?

    Apparently this stumped the plaintiffs, who declared this hypothetical too abstract to respond to. Strange, considering the YouTube algorithm is probably more complicated than this scenario.



  • Justice Alito: “I’m afraid I’m completely confused by whatever argument you’re making at the present time.”

    That’s it. That’s the whole post.



  • “I don’t understand how a neutral suggestion about something you’ve expressed an interest in is aiding and abetting. I just don’t understand it.”

    Justice Thomas sanely rebuts the plaintiff’s argument that simply providing a phone number of a terrorist in a search result constitutes “aiding and abetting” an enemy. Even the creation of URLs seems up for grabs here, according to plaintiffs. Talk about blowing up the internet!



  • “The only aiding and abetting that you’re arguing is the recommendation.”

    Justice Sotomayor asks a pointed question about whether recommending content is the same as helping people connect via chatrooms — basically an interrogation of whether algorithms, intentionally, connect people with radicals. This question will likely be explored in more detail in tomorrow’s case. Here’s Justice Sotomayor:

    I can really see that an internet provider who was in cahoots with ISIS provided them with an algorithm that would take anybody in the world and find them for them, and do recruiting of people by showing them other videos that would lead them to ISIS, that’s an intentional act, and I could see 230 not going that far. The question is, how do you get yourself from a neutral algorithm to an aiding and abetting? An intent, knowledge… there has to be some intent to aid and abet.



  • Justice Kagan: everything’s an algorithm, right?

    Justice Elena Kagan does a good job of summing up the big question here after the opening question from Clarence Thomas. “This was a pre-algorithm statute, and everyone is trying their best to figure out how this statute applies,” Kagan notes. “Every time anyone looks at anything on the internet, there is an algorithm involved.”



  • Justice Thomas kicks things off by diving right into the algorithm.

    Justice Thomas is first up in today’s questioning, asking whether the algorithm treats cooking videos the same as all other content. Long the court’s most silent member, Thomas has become far more vocal, as The New York Times pointed out in 2021.



  • The Supreme Court battle for Section 230 has begun

    The future of recommendation algorithms could be at stake.


  • Here’s the live feed of this morning’s Section 230 Supreme Court hearing.

    The court is hearing Gonzalez v. Google, one of the biggest tech law cases in years, at 10AM ET. You can livestream the audio if you want to tune in — and we’ll have coverage of Gonzalez and its sister case Twitter v. Taamneh over the coming day and week.


    Live Oral Argument Audio

    [www.supremecourt.gov]