Democratic lawmakers want social networks to face legal liability if they recommend harmful content to users. Reps. Anna Eshoo (D-CA), Frank Pallone Jr. (D-NJ), Mike Doyle (D-PA), and Jan Schakowsky (D-IL) introduced the “Justice Against Malicious Algorithms Act,” which would amend Section 230’s protections to exclude “personalized recommendations” for content that contributes to physical or severe emotional injury.
The bill follows a recommendation Facebook whistleblower Frances Haugen made before Congress last week. Haugen, a former employee who leaked extensive internal Facebook research, encouraged lawmakers to crack down on algorithms that promote, rank, or otherwise order content based on user engagement. The bill applies to web services with over 5 million monthly visitors and excludes certain categories of services, including infrastructure providers like web hosts and systems that return search results.
For platforms that are covered, the bill targets Section 230 of the Communications Decency Act, which prevents people from suing web services over content that users post. The new exception would let these cases proceed if a service knowingly or recklessly used a “personalized algorithm” to recommend the third-party content in question. That could include posts, groups, accounts, and other user-provided information.
The bill wouldn’t necessarily let people sue over the kinds of material Haugen criticized, which include hate speech and anorexia-related content. Much of that material is legal in the United States, so platforms don’t need a liability shield to host it. (A Pallone statement also castigated sites for promoting “extremism” and “disinformation,” which aren’t necessarily illegal either.) The bill also covers only personalized recommendations, defined as sorting content with an algorithm that “relies on information specific to an individual.” Companies could seemingly still use large-scale analytics to recommend the most popular content overall.
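To make the distinction concrete, here is a minimal, purely hypothetical sketch of the line the bill draws: ranking by aggregate popularity uses no information specific to any individual, while ranking against a user's interest profile does. The data and function names are illustrative, not drawn from the bill or any real platform.

```python
# Hypothetical illustration of "personalized" vs. non-personalized ranking
# under the bill's definition. All names and data are made up.

posts = [
    {"id": 1, "topic": "news",    "total_likes": 900},
    {"id": 2, "topic": "fitness", "total_likes": 500},
    {"id": 3, "topic": "cooking", "total_likes": 700},
]

def rank_by_popularity(posts):
    # Not "personalized": the ordering relies only on aggregate
    # engagement, not on information specific to an individual.
    return sorted(posts, key=lambda p: p["total_likes"], reverse=True)

def rank_personalized(posts, user_interests):
    # "Personalized" in the bill's sense: the ordering relies on
    # information specific to an individual (an interest profile).
    return sorted(posts, key=lambda p: user_interests.get(p["topic"], 0),
                  reverse=True)

print([p["id"] for p in rank_by_popularity(posts)])                   # [1, 3, 2]
print([p["id"] for p in rank_personalized(posts, {"fitness": 0.9})])  # [2, 1, 3]
```

Under the bill as described, only the second function's output could expose a covered platform to suits over the recommended content.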
In her testimony, Haugen suggested that the goal was to add general legal risk until Facebook and similar companies stopped using personalized recommendations altogether. “If we reformed [Section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking,” she said.