Well, we had another hearing with the platform CEOs.
The dream with this sort of thing is that Congress shows up with a full command of the issues, and asks the CEOs good-faith questions about matters of policy and law. And then I’d come along at the end of the day to walk you through the more provocative questions and productive answers, and gesture at what likely policy outcomes we could expect from this exercise in representative democracy.
But “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?,” a hearing of the Senate Committee on Commerce, Science, and Transportation, was not that kind of exercise. The word “sham” got kicked around a lot, especially by the participants. “Stunt,” too. Some of the Democrats declined to ask any questions at all.
It was not the first of these. In April 2018, House Republicans organized a hearing to investigate why the two conservative vloggers known as Diamond and Silk had experienced a decline in traffic sent to them by Facebook. The most likely explanation was that changes to Facebook’s algorithms often affect traffic patterns for publishers of all kinds, although we would later learn that the changes made in 2017 largely benefited conservative publishers at the expense of more liberal ones.
In fact, most of the research I have read has suggested that conservatives reap outsized benefits from the existence of social media, which provides ample room for their views to make regular end runs around the mass media. On Wednesday morning, Media Matters published results of a nine-month study showing that right- and left-leaning pages generate engagement at similar rates — but that right-leaning pages generated 43 percent of total interactions among pages posting about American politics, despite accounting for only 26 percent of posts.
But the platforms are big, and make mistakes, and those mistakes turn into anecdotes. Anecdotes can be merged into a working theory about platform governance, such as that the platforms are biased against conservatives.
And so less than a week before the election, with their candidate trailing in polls and an effort to shake up the race with a story that dozens of former intelligence officials say is likely a Russian disinformation campaign failing to gain traction, Senate Republicans held a hearing to complain about the unfairness of it all.
The theatrics, which often devolved into shouting, meant that the topic of the hearing — the future of a legal shield for online platforms — was barely debated. The event had been billed as a discussion about Section 230 of the Communications Decency Act, a law that protects social media companies from liability for what their users post and is regarded as sacrosanct by the platforms. […]
But the hearing’s barbed exchanges pointed to how the debate over online speech has become increasingly divided, with the companies caught in the middle. Of the 81 questions asked by Republicans, 69 were about censorship and the political ideologies of the tech employees responsible for moderating content, according to a tally by The New York Times. Democrats asked 48 questions, mostly about regulating the spread of misinformation related to the election and the coronavirus pandemic.
More than one observer noted that the main point of the hearing seemed to be to generate clips of Republicans looking pugnacious in the face of hated Silicon Valley elites, which they could then distribute on those elites’ own platforms. (“Basically a TikTok house for politicians,” in the words of Protocol’s David Pierce.) This seemed especially true of Sen. Ted Cruz, R-TX, who had promoted the fight on Twitter with a UFC-style infographic promising, in all caps, a FREE SPEECH SHOWDOWN. And, sure enough, his timeline today includes at least 19 clips of his sparring with Twitter CEO Jack Dorsey, including one that Cruz pinned to the top of his page for long-term viewing.
In the face of so much bad-faith arguing, I could not help but feel roused when Sen. Brian Schatz (D-HI) called the hearing “a scar on this committee.” “What we are seeing today is an attempt to bully the CEOs of private companies into carrying out a hit job on a presidential candidate by making sure they push out foreign and domestic misinformation meant to influence the election,” Schatz said, and I’m tempted to just leave the whole thing there.
Except that I can’t, because the question over Section 230 and how the internet ought to be regulated is one of the most important debates facing the tech industry. (If you’re unfamiliar with the law and the many controversies around it, I wrote an in-depth explainer earlier this year.)
Among Republicans, Democrats, and tech CEOs, there is agreement that the law is showing its age, and in need of updating. (Even if each group would amend it in very different ways.) And if you sweep away all the bad-faith arguments and even worse policy proposals, you’re left with genuine questions about power and responsibility. What speech should tech platforms be allowed to host, and to amplify? When they err, what is a just response? When a citizen is terrorized by online harassment, what recourse should they have?
From these broad questions, you might derive a basic set of principles. But that’s not enough to craft policy or law. To get there, you have to start asking really nettlesome questions.
Facebook, Google, and Twitter have signaled varying degrees of support for amending Section 230. Facebook has gone the furthest, suggesting that Congress set performance targets for the speedy removal of illegal content and require platforms to comply with them. Google and Twitter, by contrast, have encouraged restraint, noting that the ripple effects of such a change could be broad. (As Adi Robertson notes in this thread, changes to Section 230 could require newspapers to close their comments sections, or consumer complaints sites to shut down completely.)
In fact, the last time Section 230 was amended — with Facebook’s full support — the ripples were broad and destructive.
The 2018 FOSTA-SESTA law, nominally designed to curb sex trafficking, resulted in many online personals sites shutting down completely over liability fears. Its aftermath confirmed what academics had long warned: that the most predictable effect of limiting Section 230 would be to prompt platforms to over-moderate themselves, limiting speech on the internet.
FOSTA-SESTA did not come up once at today’s hearing — even though, in a sane world, that’s where the hearing would have begun.
Next time around, it won’t be the personals sites that suffer from Section 230 reform — they’re already gone. Nor is it likely to be Facebook, or Google, or Twitter, all of whom have the resources to adapt to whatever changes come their way. (Twitter has the fewest resources of the three, but it uses the same centralized moderation model that its peers do.)
Instead, the victims are likely to look more like Reddit, which relies on volunteers to help moderate the site in a way that an amended Section 230 might no longer allow. “What would be super unfortunate is if we end up throwing out 230 in an effort to punish the largest internet players for their perceived or real abuse of their dominance,” Reddit’s general counsel, Benjamin Lee, told Protocol. “Unraveling 230 would basically further ensure that dominance, while undermining the ability of smaller companies like Reddit to challenge that dominance with alternative models of innovation.”
I still believe that Section 230 can be modernized in a way that makes the internet better. If Senate Republicans had their way, though, the internet would only become smaller.
This column was co-published with Platformer, a daily newsletter about big tech and democracy.