This morning, Facebook co-founder Chris Hughes made a landmark call to break up Facebook in The New York Times. Hughes — who left the company in 2007 — argues that Facebook has fostered its users’ bad impulses, prevented other companies from competing, and gained “unilateral control over speech” worldwide. The piece is a blunt condemnation of Facebook’s market power and frightening grip on modern society, and it’s a compelling one.
But on one topic, his proposed solution is sketchy, confusing, and even potentially counterproductive to his goal of reducing Facebook’s power. It’s also frustratingly common.
Hughes is primarily calling on US regulators to split Facebook, WhatsApp, and Instagram into separate companies. Toward the end of the piece, though, he also suggests a new agency that would regulate tech companies. (So far, so good.) Then, he suggests that this agency establish “guidelines for acceptable speech on social media”:
Finally, the agency should create guidelines for acceptable speech on social media. This idea may seem un-American — we would never stand for a government agency censoring speech. But we already have limits on yelling “fire” in a crowded theater, child pornography, speech intended to provoke violence and false statements to manipulate stock prices. We will have to create similar standards that tech companies can use. These standards should of course be subject to the review of the courts, just as any other limits on speech are. But there is no constitutional right to harass others or live-stream violence.
I’m not entirely sure what this paragraph means.
The First Amendment limits the US government’s power to ban or criminalize certain kinds of speech. There’s a broad debate over how it applies to social media platforms, but we’re dealing with one specific issue here: keeping truly harmful categories of speech off platforms like Facebook. The thing is, we’ve already got “guidelines for acceptable speech” on the internet — and they’re the same ones Hughes discusses one sentence later. (Although “you can’t yell fire in a crowded theater” isn’t actually a concrete legal doctrine.)
The internet — like any new communications technology — can introduce new questions about free speech, but saying something online doesn’t erase the existing laws on the books. Courts have jailed people for cyberstalking or threatening others through Facebook, and websites or individual internet users can already be sued for defamation.
The First Amendment already limits online speech
In light of that, I’d guess Hughes is saying that the agency should create guidelines based on those existing limits, then censure companies that don’t remove offending content from their services. If these are just general best practices, they probably won’t change how platforms operate — in fact, services like Facebook and Twitter already ban a lot of speech that’s protected by the First Amendment.
But to create a real policy with any bite, the agency would have to deal with Section 230 of the Communications Decency Act. Section 230 shields “interactive computer service” owners from liability over what other people post — so you can sue someone for writing a defamatory Facebook comment, but you can’t sue Facebook for hosting that comment.
I’m intentionally avoiding the term “web platforms” here — because at this point, it either confuses people about what the law means or lets them get away with lying. We’ve mentioned this before, but under Section 230, it fundamentally does not matter whether you call a website a “platform” or a “publisher.” If a piece of content is “provided by another information content provider,” not created by the site’s operator, it’s protected (with some exceptions). Social media companies can’t be sued when somebody leaves a vile comment on their website. Neither can newspapers. As many internet freedom advocates have discussed, repealing Section 230 would pose serious problems for vast numbers of websites, and it wouldn’t necessarily keep bad content offline.
Content moderation at scale is hard
The mechanics of moderation are also a problem for Hughes’ plan. Facebook certainly doesn’t want mass shooting videos on its site right now, but content moderation at scale is inherently hard, even for companies smaller than Facebook — and adding more legal liability won’t change that basic problem. Shifting policy decisions to the government wouldn’t help either, even if we trust Hughes’ new agency more than Zuckerberg. And remember, Facebook is one of the huge, wealthy companies that’s best equipped to perform this kind of moderation. Smaller social networks — the kind that Hughes hopes will flourish if we break up Facebook — would face all the same liabilities with far fewer resources.
Is Facebook still liable for encrypted posts?
Let’s say Hughes is referring to something more nuanced — like a law that would narrowly define “social media” services and apply only to sites above a certain user or revenue threshold. Senator Elizabeth Warren (D-MA) has proposed something similar for antitrust law, designating giant companies as “platform utilities.” That would address some of the problems raised by repealing Section 230. But once again, making a rule that’s actually effective would require grappling with some hard problems.
Facebook, for instance, recently revealed that it would begin encrypting more content on its platform. The decision offers users more privacy, but as Hughes notes, that also means Facebook can’t see or moderate the content. Facebook already faces problems with literally deadly misinformation on its encrypted service WhatsApp, and at least one government has proposed disabling encryption to help stop fake news. If Facebook encrypts more services, and someone streams a hate crime — or some other awful content — to a private group, would this hypothetical agency demand that Facebook break its encryption, too? And if so, how would that affect broader encryption laws?
We can enforce existing laws without a new agency
In the most charitable and coherent formulation, Hughes is simply asking for the police to enforce existing laws online. Police departments don’t always understand web platforms well, and the internet’s scale makes it easy to spread harmful information and hard to hold people accountable. But in the US, we don’t need new agencies and speech guidelines to get better at investigating violent hate groups or arresting people who make threats. The criminal justice system has huge problems, but they won’t be solved by deputizing private companies — especially because a lot of internet harassment and extremism takes place on independent sites or through private channels like email.
The internet is an ugly place, and discussing how to make it less ugly is a legitimate and urgent task. But simply calling to “create guidelines” for turning websites into content cops — while implying that existing laws somehow don’t apply to the internet already — is a piece of glib handwaving that’s not worthy of Hughes’ broader manifesto.