
Congress’s focus on content moderation has distracted it from the larger problem

When it comes to tech companies, size matters much more than Section 230

Share this story

Google Global Head of Intellectual Property Policy Katherine Oyama listens during a hearing with the House Communications and Technology and House Commerce Subcommittees on Capitol Hill on Wednesday in Washington, DC.
Photo by Zach Gibson/Getty Images

What stays up on the internet, and what comes down? It’s a defining question of the age — and the subject of yesterday’s newsletter — and on Wednesday, it came to Congress.

The occasion was a hearing of the House Energy and Commerce Committee and its subcommittees on communications and technology and on consumer protection and commerce. The intent was to “explore whether online companies are appropriately using the tools they have — including protections Congress granted in Section 230 of the Communications Decency Act — to foster a healthier Internet.”

Section 230, as my colleague Adi Robertson noted earlier this year, is “one of the internet’s most important and most misunderstood laws.” These days, members of Congress typically describe it as a priceless free pass given to tech companies to exempt them from most legal requirements to moderate the content on their platforms. In fact, it was created to enable platforms to moderate content — and became law because Congress wanted tech companies to moderate more content than they previously were.

Jeff Kosseff, who wrote a book on 230, explained it to Robertson:

Then we get to these early internet services like CompuServe and Prodigy in the early ‘90s. CompuServe is like the Wild West. It basically says, “We’re not going to moderate anything.” Prodigy says, “We’re going to have moderators, and we’re going to prohibit bad stuff from being online.” They’re both, not surprisingly, sued for defamation based on third-party content.

CompuServe’s lawsuit is dismissed because what the judge says is, yeah, CompuServe is the electronic equivalent of a newsstand or bookstore. The court rules that Prodigy doesn’t get the same immunity because Prodigy actually did moderate content, so Prodigy is more like a newspaper’s letter to the editor page. So you get this really weird rule where these online platforms can reduce their liability by not moderating content.

That really is what triggered the proposal of Section 230. For Congress, the motivator for Section 230 was that it did not want platforms to be these neutral conduits, whatever that means. It wanted the platforms to moderate content.

And so the platforms continued to moderate content, eventually growing to an unimaginable size, and employing tens of thousands of moderators around the world. But owing in large part to their size, many bad things continue to take place on their servers: fraud, harassment, revenge porn, election interference, and so on.

It was in that context that Congress met today to complain about all of the things, and threaten to return us to a world where 230 did not exist, and — uh, platforms had no legal incentive to moderate content at all?

I regret to say that almost everything that followed was very dumb. This exchange, captured by a writer from Boston University, illustrates how. On one side you have Reddit CEO Steve Huffman, who describes the company’s hybrid approach to moderation: the company sets a “floor” of rules for users, but individual communities can raise the “ceiling” by adding additional rules that suit their needs.

And then on the other side you have an elected official rattling off empty action-movie one-liners:

Reddit’s Huffman, in his submitted remarks, described how the company works: “The way Reddit handles content moderation today is unique in the industry. We use a governance model akin to our own democracy—where everyone follows a set of rules, has the ability to vote and self-organize, and ultimately shares some responsibility for how the platform works.”

At least one committee member found that sort of approach far too weak.

“You better get serious about self-regulating,” Congressman Bill Johnson (R-Ohio) said to the panelists, “or you’re gonna force Congress to do something that you might not want to have done.”

You hear that, Huffman? If you don’t “get serious,” whatever that might mean, Congress might “do something.” Something “that you might not want to have done,” to boot!

It seems to me that if you were concerned about the balance of power between technology companies and their users in 2019, you might start with their enormous size and well-documented anticompetitive behavior. Elsewhere in the government, to the credit of agencies like the Federal Trade Commission and the Justice Department, civil servants are doing just that. But to look at the unintended consequences of tech platforms and diagnose the cause as a law that incentivizes them to remove the bad stuff — well, maybe it’s Congress that had better get serious.

All of this would be comical had lawmakers not previously “done something” about Section 230, with awful results. Last year, Congress passed FOSTA-SESTA, a bill nominally intended to fight sex trafficking. It threatens any website owner with up to 10 years in prison for hosting even one instance of prostitution-related content. As a result, sites like Craigslist removed their entire online personals sections. Sex workers who had previously been working as their own bosses were driven back onto the streets, often forced to work for pimps. Prostitution-related crime in San Francisco alone — including violence against workers — more than tripled.

This is the kind of legislation you get from a Congress that is intent on doing something but too ignorant of technology, of history, and of the law to know what. I suppose that a hearing in which members ask technology companies to explain themselves is a step forward. I hope that Congress was listening, however little evidence there is that they were.

The Ratio

Today in news that could affect public perception of the big tech companies.

🔼 Trending up: Twitch is giving channel moderators more tools, and has promised to make enforcement actions more public. CEO Emmett Shear talks with The Verge’s Bijan Stephen about the moves.

🔽 Trending down: The Buffalo Chronicle, a Facebook page devoted to publishing false stories about Canadian politics, has had a recent string of viral successes spreading misinformation about Justin Trudeau.

🔃 Trending sideways: Mark Zuckerberg is giving an “unfiltered take” on free speech tomorrow via live Facebook video. He’s going to discuss big threats to free expression around the world, and apologized in advance for the length.


Democratic presidential candidates spent 15 minutes during Tuesday night’s debate mixing it up over what to do about Big Tech. The discussion — ranging from digital privacy to how to handle jobs eliminated by automation — illustrated how the tech backlash has moved to the center of mainstream political discussion. Here’s Recode’s Theodore Schleifer:

The combat mostly centered on Elizabeth Warren, the new presidential frontrunner who has made her proposal to break up tech companies like Facebook a cornerstone of her presidential run. Many of her competitors said they were not willing to go as far as her, although several decided to take their own whacks at Silicon Valley from other angles.

Beto O’Rourke offered the most direct criticism to Warren’s plan, even comparing her approach to Trump’s rhetoric about the press.

“We will be unafraid to break up big businesses if we have to do that — but I don’t think it is the role of a president or a candidate for the presidency to specifically call out which companies will be broken up,” O’Rourke said. “That’s something that Donald Trump has done in part because he sees enemies in the press and wants to diminish their power. It’s not something that we should do.”

The Democratic Party is aggressively working to combat foreign election interference and coordinated misinformation campaigns ahead of the 2020 election. It’s using new software tools to track trending disinformation on Twitter, and asking candidates to keep an eye out for fake news related to the debates. (Ryan Lizza / Politico)

Elizabeth Warren says that unlike Facebook, TV networks will refuse ads with a ‘lie’ — but that’s not entirely true. Broadcast networks are generally required to run candidate ads under federal law. But issue ads, including the Trump campaign’s misleading one about Joe Biden and the Ukraine prosecutor, aren’t covered by the rule. (Amy Sherman / PolitiFact)

The Libra Association had its first official meeting in Geneva yesterday and elected a board of directors. The moment was supposed to be a passing of the baton from Facebook to its independent governing board, which is stacked with Facebook insiders. David Marcus told Bloomberg the association is trying to address legitimate concerns from regulators. (Kurt Wagner / Bloomberg)

Blizzard suspended three more Hearthstone players for showing their support of Hong Kong protestors during an official competition. The six-month suspension comes just over a week after the company suspended a professional player, Ng “Blitzchung” Wai Chung, for similar conduct. (Julia Alexander / The Verge)

China is racing to develop its own global cryptocurrency as Libra struggles to stay on course. The currency will be backed by the yuan and work with payment platforms including WeChat and Alipay, giving it an advantage over Facebook’s coin, which currently has no major payment platforms backing it in the United States. (Kate Rooney / CNBC)

The fallout from the Hong Kong protests has largely been covered as a story about Chinese influence. But is it actually an example of China’s weakness? The government has been flipping out about a tweet, an app, and a gamer — none of which seem to pose a significant threat. The question is ... why? (Zeynep Tufekci / The Atlantic)

EU antitrust regulator Margrethe Vestager ordered Broadcom to halt possible anticompetitive practices while an inquiry is underway. The chipmaker has been accused of using exclusivity agreements to block customers from using products made by rivals. (Adam Satariano / The New York Times)

The Justice Department dismantled one of the largest child exploitation sites on the dark web, with the help of the UK and South Korea. The site — run by a South Korean citizen — contained more than 200,000 videos. (Zack Whittaker / TechCrunch)


Ads Inc. raked in millions by using Facebook to find marks for its dubious monthly subscriptions. Customers thought they were signing up for a free trial of a celebrity-endorsed product — then, it all fell apart: Nice investigation from Craig Silverman at BuzzFeed:

Burke’s genius was in fusing the scam with a boiler room–style operation that relied on convincing thousands of average people to rent their personal Facebook accounts to the company, which Ads Inc. then used to place ads for its deceptive free trial offers. That strategy enabled his company to run a huge volume of misleading Facebook ads, targeting consumers all around the world in a lucrative and sophisticated enterprise, a BuzzFeed News investigation has found.

Millions of public Flickr images, many of them of children, landed in a database called MegaFace that is used to train and test artificial intelligence systems. That’s largely legal in the United States — but not in Illinois, which has some of the strictest privacy laws in the country. Now, 6,000 Illinoisans whose images ended up in the database can sue. Kashmir Hill and Aaron Krolik from The New York Times have the story:

As residents of Illinois, they are protected by one of the strictest state privacy laws on the books: the Biometric Information Privacy Act, a 2008 measure that imposes financial penalties for using an Illinoisan’s fingerprints or face scans without consent. Those who used the database — companies including Google, Amazon, Mitsubishi Electric, Tencent and SenseTime — appear to have been unaware of the law, and as a result may have huge financial liability, according to several lawyers and law professors familiar with the legislation.

Google’s Nest is replacing Works with Nest, its former third-party licensing program, with a more tightly controlled version. By restricting access to audited partners, the company hopes to avoid a Cambridge Analytica-like privacy scandal. (Russell Brandom / The Verge)

Instagram is adding a feature to let users control what data they share with third-party apps. When it becomes available on your account, go to Settings > Security > Apps and Websites to see which third-party services have your Instagram credentials. (Dami Lee / The Verge)

Inside Apple’s bumpy journey to break into Hollywood and streaming. A look at some of the company’s missteps as it prepares to launch Apple TV+. (Lesley Goldberg and Natalie Jarvey / The Hollywood Reporter)

LinkedIn launched a feature called Events to let people plan meetups offline. I often say that LinkedIn is just Facebook in slow motion, and here’s your latest example. (Ingrid Lunden / TechCrunch)

The Information profiles four nascent social networking startups focused on younger users and smaller friend groups. One, a forthcoming app for small friend groups called Cocoon, was started by two Facebook alums. (Alex Heath / The Information)

Machine learning models can’t distinguish between fake and real news, say two new papers from MIT. They do a good job at detecting which stories are written by computers, but not much else. (Joe Uchill / Axios)

And finally...

Talk to us

Send us tips, comments, questions, and counterproductive modifications to Section 230.