The Supreme Court has declined to consider reinterpreting foundational internet law Section 230, saying it wasn’t necessary for deciding the terrorism-related case Gonzalez v. Google. The decision came alongside a separate but related ruling in Twitter v. Taamneh, where the court concluded that Twitter had not aided and abetted terrorism.
In an unsigned opinion issued today, the court said the underlying complaints in Gonzalez were weak, regardless of Section 230’s applicability. The case involved the family of a woman killed in a terrorist attack suing Google, which the family claimed had violated the law by recommending terrorist content on YouTube. They sought to hold Google liable under anti-terrorism laws.
The court dismissed the complaint largely because of its unanimous ruling in Twitter v. Taamneh. Much like in Gonzalez, a family alleged that Twitter knowingly supported terrorists by failing to remove them from the platform before a deadly attack. In a ruling authored by Justice Clarence Thomas, however, the court declared that the claims were “insufficient to establish that these defendants aided and abetted ISIS” for the attack in question. Thomas wrote that Twitter’s failure to police terrorist content did not satisfy the requirement of some “affirmative act” involving meaningful participation in an illegal act.
“If aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer,” writes Thomas. That includes “those who merely deliver mail or transmit emails” becoming liable for the contents of those messages or even people witnessing a robbery becoming liable for the theft. “There are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants’ relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm’s length, passive, and largely indifferent.”
Thomas compared social platforms like Twitter to other older forms of communication:
“It might be that bad actors like ISIS are able to use platforms like defendants’ for illegal — and sometimes terrible — ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.”
For Gonzalez v. Google, “the allegations underlying their secondary-liability claims are materially identical to those at issue in Twitter,” says the court. “Since we hold that the complaint in that case fails to state a claim for aiding and abetting ... it appears to follow that the complaint here likewise fails to state such a claim.” Because of that, “we therefore decline to address the application of §230 to a complaint that appears to state little, if any, plausible claim for relief.”
Google’s public policy team issued a statement on Twitter about the decision. “Countless companies, scholars, creators and civil society groups who joined with us in this case will be reassured by this result. We’ll continue to safeguard free expression online, combat harmful content, and support businesses and creators who benefit from the internet,” it said.
The Gonzalez ruling is short and declines to deal with many of the specifics of the case. But the Twitter ruling does take on a key question from Gonzalez: whether recommendation algorithms amount to actively encouraging certain types of content. Thomas appears skeptical:
“To be sure, plaintiffs assert that defendants’ ‘recommendation’ algorithms go beyond passive aid and constitute active, substantial assistance. We disagree. By plaintiffs’ own telling, their claim is based on defendants’ ‘provision of the infrastructure which provides material support to ISIS.’ Viewed properly, defendants’ ‘recommendation’ algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.”
The interpretation may deal a blow to one common argument for adding special liability to social media: the claim that recommendation systems go above and beyond simply hosting content and explicitly encourage that content. This ruling’s reasoning suggests that simply recommending something on an “agnostic” basis — as opposed to, in one hypothetical from Thomas, creating a system that “consciously and selectively chose to promote content provided by a particular terrorist group” — isn’t an active form of encouragement.
The decision was praised by civil liberties activists. “We are pleased that the Court did not address or weaken Section 230, which remains an essential part of the architecture of the modern internet and will continue to enable user access to online platforms,” said Electronic Frontier Foundation civil liberties director David Greene in a statement following the ruling. “We also are pleased that the Court found that an online service cannot be liable for terrorist attacks merely because their services are generally used by terrorist organizations the same way they are used by millions of organizations around the globe.”
Supreme Court justices have indicated for some time that they wish to reexamine the liability of online platforms. But in a set of oral arguments earlier this year, justices seemed reluctant to change Section 230, questioning whether it would upend core pieces of online communication. Today’s rulings likely won’t be the final say on the legal status of internet services, though — the court has previously indicated interest in pursuing a pair of cases covering laws banning internet moderation in Texas and Florida.
Update 2PM ET: Added statement from Google.