Can we regulate social media without breaking the First Amendment?

Jameel Jaffer, executive director of the Knight First Amendment Institute, helps us answer the question

One of the hardest problems at the intersection of tech and policy right now is the question of how to regulate social media platforms. Everyone seems to think we should do it — Democrats, Republicans — even Facebook is running ads saying it welcomes regulation. It’s weird. But while everyone might agree on the idea, no one agrees on the execution. Everyone agrees the platforms should be more transparent but not about what — should the algorithms be public? Should researchers have access to data about users? What about data privacy? That seems good, but those bills have been stalled out forever.

And then there’s the biggest obstacle of all: the First Amendment. Like everyone else, social media companies have a First Amendment right to free speech, and a lot of the proposals to regulate these companies kind of look like government speech regulations and run right into the First Amendment.

In fact, this has happened twice in the past year: both Texas and Florida passed state laws that would regulate big platforms, and both of those laws were ruled unconstitutional by courts and put on hold pending appeal. The Florida law was especially hilarious because it specifically excused any company that owned theme parks in the state, which was an openly corrupt concession to Disney. Very good.

Jameel Jaffer, executive director of the Knight First Amendment Institute, just co-authored a piece in The New York Times opinion section arguing that while these laws should be struck down, the arguments being made by the platform companies against them go too far — they stretch the First Amendment into preventing any regulation at all, including things like privacy and transparency. 

I thought that was a fascinating line of argument coming from an organization dedicated to the First Amendment, so I asked Jameel on to Decoder to talk it out.

Here we go.

This transcript has been lightly edited for clarity.

Jameel Jaffer, you’re the executive director of the Knight First Amendment Institute at Columbia University. Welcome to Decoder.

Thank you.

You just wrote a piece in The New York Times opinion section about the various laws in Texas and Florida that would regulate social media companies and the arguments that the companies are making against those laws. I want to talk about all that, but we should start with the basics. What is the Knight First Amendment Institute?

Well, we are an institute based at Columbia University. We were established five years ago by Columbia and the Knight Foundation to focus on digital-age free speech issues. We do that through litigation, research, and public education. We have brought a number of lawsuits related to social media and free speech. The one that probably got the most attention was the lawsuit that forced President Trump to stop blocking critics from his Twitter account, but that is not the only case that we have brought over the last few years – just the most famous one. We’ve brought a number of other lawsuits relating to the intersection between new technology, especially new communications platforms, and the First Amendment.

Is Knight’s primary product lawsuits and litigation? Is it research? What is the day-to-day?

We are both of those things. We have a litigation team of about a dozen lawyers who work solely on key cases involving free speech and closely related values like privacy, as well as cases involving new technology. We bring those cases ourselves and we litigate them in court. That means we brief them and we argue them at all levels of the federal court system. 

Many of our lawyers come from other public interest organizations. I was at the ACLU for 14 years before I came to Columbia. Some of my other colleagues were at the ACLU, or at [PEN America], or at other organizations that focus on free speech and privacy issues. Again, especially in the context of new technology.

We also have a research program and, through that program, we sponsor academic research and academic symposia around the same set of issues. Public education is something that we have spent less time focusing on over the last few years, but we’re going to spend more time focusing on that this year and in the coming years.

You mentioned the tangled relationship between social media and the First Amendment. If I had to make a prediction for 2022, it would be that this energy and drive to regulate social media in this country will come to a series of moments, maybe to a head, and we will have to contend with whether the First Amendment allows the government to pass speech regulations, to pass content moderation standards or laws preventing racism on platforms, all of those things. There are two laws in Texas and Florida that the courts have put on hold due to First Amendment challenges. There are endless hearings and Congress loves to yell at tech executives. I don’t know if that’s going anywhere, but it keeps happening. There is a sort of dull roar about Section 230. All of that seems like, in the next year or so, it’s going to add up to, “What are the limits of the First Amendment?” That feels very real to me. 

It’s already happening to some extent, right? Because some of these laws are now being challenged in the lower courts. The Florida law and the Texas law have both been challenged. District court judges have already reached conclusions about those laws and those cases are now going up. I don’t know if those are the cases that are going to lead to the Supreme Court showdown; many things have to happen before that takes place, but if it’s not these laws, there’ll be others. I think Wisconsin also has a law in the works. Other states are considering various kinds of regulation of social media platforms.

Congress is considering this as well – there are all sorts of ideas on the table. You know, what those lawsuits will look like in practice turns on the details of the laws that are actually passed. But there’s no question that over the next year or couple of years, the courts are going to have to start grappling with the questions you already alluded to: What does the First Amendment actually mean in this context?

Your piece in The New York Times opinion section covered the arguments that social media platforms like Facebook and Twitter, through their various lobbying groups, are making in the courts. Your argument is that they are somehow co-opting the First Amendment in their lawsuits. Explain that to me.

The op-ed piece was based on a brief that we filed in the Florida case. We found ourselves a little bit conflicted about these lawsuits because on the one hand, we agree with the companies that these laws are unconstitutional. I mean, these laws impose extensive transparency requirements on major platforms. They give users all kinds of rights with respect to content that’s taken down or accounts that are taken down. The Florida law restricts platforms from deplatforming or shadow banning – which is not very well defined in the law – political candidates or media organizations. The Texas law prohibits viewpoint discrimination. 

So these really are very comprehensive regulations of the social media platforms, and they’re extremely burdensome. But more significantly, as it relates to the First Amendment, both of the laws are viewpoint discriminatory, and they are viewpoint discriminatory in the sense that they were enacted in retaliation for the platforms’ editorial decisions. In the months preceding the enactment of these laws, some of the platforms took down President Trump’s accounts. They restricted access to reporting about Hunter Biden. They labeled vaccine misinformation as misinformation. 

These laws were kind of payback for those decisions. And what we said in our brief is the fact that the laws were payback for those decisions is enough to doom them for constitutional purposes. That alone should be sufficient to justify the courts throwing these laws out. But the arguments that the platforms, the social media companies, are making actually go much further than that.

The platforms are not just saying that these laws are viewpoint discriminatory. They go on to make a number of arguments that, if you accept them, would preempt more than just the Florida and Texas laws, which have all kinds of problems beyond viewpoint discrimination. They would also preempt other laws that wouldn’t have those problems. So just to be a little more specific, the companies argue, for example, that the First Amendment entitles them to exactly the same protections that newspapers get under the First Amendment.

They also argue that any law that burdens their editorial decision-making, no matter how minimal the burden, should be subject to the most stringent constitutional review and maybe even regarded as per se unconstitutional. And they argue that any law that targets larger platforms, or that draws a line between larger and smaller platforms, should be subject to the most serious constitutional scrutiny as well.

And if you accept all of those arguments, yes, absolutely, the Florida and Texas laws will be struck down, as they should be. But it will also be almost impossible for legislatures, at the state or federal level, to pass much more modest laws. Laws that, for example, impose reasonable transparency requirements on the companies, or afford users reasonable due process protections. Or even restrict what kind of information they can collect and how they can use that information. In other words, privacy laws. So if you accept the social media companies’ arguments, it’s not just the Texas and Florida laws that will be struck down; it’s all these future laws too. Laws that might be much more reasonable than the ones we’re looking at right now.

Do you think the audience for your piece was regular people reading it? Was it the judges who read The New York Times opinion section? Was it the bar as a whole?

I think part of why we wrote it was that even amongst the community that spends a lot of time thinking about the First Amendment and regulation of the platforms, there has been a kind of bifurcation in that community. People are gravitating towards these two poles that are represented in these cases by the platforms on one hand and by the state governments on the other. So you have the platforms arguing essentially a kind of “all” position; we have the same rights as newspapers and any law that would be unconstitutional with respect to newspapers must also be unconstitutional with respect to social media companies. That’s a very broad understanding of the platforms’ own First Amendment rights. And on the other hand, you have these state governments which have staked out a kind of “none” position; the platforms have no First Amendment rights to speak of in this context and whatever state governments want to do, they should be allowed to do.

And it seemed to me and to my colleagues at the Knight Institute that this kind of bifurcation, or polarization, of this particular debate is a real problem because the First Amendment doesn’t leave us with only these two possibilities, all or nothing. There’s a lot of space in between all or nothing. You could have a set of rules that restricted state governments from effectively using social media regulation as a means of distorting political debate, but still allow legislatures to impose reasonable privacy and transparency and due process protections. There is that kind of middle ground, and the First Amendment shouldn’t be understood to leave us with only these two extremely unappealing options.

And so that message, I think, was in part for people in our own small community of lawyers and tech policy experts who are thinking about these issues every day but, in our view, have come to the wrong conclusion that there are only these two unappealing possibilities on the table. I don’t want to be too generous to our own writing, but it was also an effort to bring a degree of nuance to a conversation that has sometimes maybe lacked nuance.

But let me push you on nuance specifically here. I was a lawyer for 20 minutes. I wasn’t any good at it. But my instinct is: of course the social media companies are arguing for a maximal interpretation of the First Amendment. That’s what their lawyers are paid to do. I just read the piece and it felt strange to me to say they were doing anything wrong. This is the argument because that’s what lawyers are meant to do.

I guess that’s a fair point, although it’s notable that some of these companies, Facebook in particular, but not only Facebook, have been out there saying to Congress that they wanted regulation. “We want regulation.” 

“Please regulate us.”

And at the same time, they’re going into these district courts around the country saying any regulation would be unconstitutional. So that seems like an important thing to note. And even if it’s not realistic to think that the companies are going to change their legal arguments, even if any reasonable person would expect financially motivated actors to make these kinds of arguments, it seems important to me to ensure that other people aren’t fooled when the platforms wrap themselves in the First Amendment.

We shouldn’t think that to support the First Amendment here necessarily means to support the platforms. The First Amendment might be something quite different from what the platforms are saying it is and might be something quite different from what the state governments are saying it is. There is this in-between space and we just want to make clear that being a champion of the First Amendment in this context doesn’t necessarily mean lining up behind the social media companies.

So that’s a message not so much for the companies, but for others who are just trying to figure out what their views are on this topic.

I think that’s what I mean when I say it feels like things are going to come to a head in the next year or so. I would ordinarily assume that any First Amendment organization would similarly push for a maximal interpretation of the First Amendment in cases like this. And you’re saying, no, there’s a lot of nuance here.

First of all, I think that that phrase, “a maximalist understanding of the First Amendment,” is not a good way of understanding what’s going on here, because there are actually a lot of different free speech actors in the mix. The platforms are asserting free speech rights. They say, “We are building an expressive community, we should get to decide what our expressive community looks like.” But the platforms’ users are also asserting free speech rights. They’re saying, “This is the new public square and we have the right to participate in the new public square, and we should be permitted to participate in that kind of conversation without interference on the basis of, for example, viewpoint.” And then governments are also asserting a kind of free speech interest here when they say, “We need to protect the integrity of this public square. We need to make sure that this public square works for our democracy. We need to harness public discourse for democratic ends.”

And all of those are free speech arguments. And when you decide on what shape the First Amendment should take in this context, you have to take all of those interests into account, not just the companies’ interests. So I don’t think it’s really a question of a maximalist understanding of the First Amendment versus a minimalist understanding of the First Amendment. It’s really a question of “What was the First Amendment really meant to protect?” What are the values that the First Amendment was meant to protect and what shape do we need to give the First Amendment to ensure that it serves those values?

In my view, the First Amendment was meant in large part to protect the process of self-government. That means that it should accommodate regulations that are intended to – and actually do – protect or strengthen the process of self-government, but it shouldn’t accommodate regulations that interfere with that process. So when we’re talking about regulations that are effectively state efforts to enlist the platforms in certain kinds of censorship, the First Amendment shouldn’t make room for those kinds of regulations. But it should make room for regulations that help all of us understand better what forces are shaping public discourse; for example, how the platforms are shaping public discourse through their editorial decisions.

That seems like something that the First Amendment should be not unsympathetic to, so long as the regulations are narrowly tailored and drafted in a way that doesn’t effectively give government actors the ability to rig the game. Now, whether any particular regulation should survive the kind of First Amendment review I just described, that’s going to be a hard question, and it’s not going to be a matter of just applying rules that we developed 50 years ago before anyone had even conceived of the internet to this extremely different context. It’s going to be going back to the values that animate the First Amendment and asking, again, what kinds of rules we need in place to give effect to those values.

The key line that really jumped out to me from your op-ed was, “The First Amendment should apply differently to social media companies than it does to newspapers because social media companies and newspapers exercise editorial judgment in different ways.” To me, this says there’s a test and if you’re doing X kind of editorial judgment, you get newspaper protection, and if you’re doing Y kind of editorial judgment, you get a lesser social media company protection. What’s the line?

I actually regret that particular phrasing. What I meant to say, which is just a little bit different from what we did in fact say, was that it should matter how editorial discretion is exercised. It doesn’t matter whether it’s a platform exercising it or a newspaper exercising it, but it should matter how editorial discretion is exercised.

And here’s a very sort of concrete way of thinking about this. When my co-author Scott Wilkens and I submitted this op-ed to The New York Times, we traded drafts with the Times three or four times. They wrote back to us and said, “Why are you using this word here? Why are you using this phrase here?” They had suggestions about the structure of the argument. They had suggestions about specific wording. They had questions about all of our claims. We went back and forth again several times, then they sent it to copy editors who also had comments on what we had written. Then they selected a title, and then they selected the placement of the op-ed on their website and they made a decision about whether to have it in the newspaper as well.

And when they decided to put it in the newspaper, they decided where to put it in the newspaper, and they decided whether to attach a photograph to it. And all of those decisions were made by editors who were talking to one another about the kinds of things that we think about when we think about editorial judgment. 

Platforms don’t do any of that. That doesn’t mean platforms aren’t engaged in editorial judgment as well. As we’ve been saying for the past 20 minutes: platforms do engage in editorial judgment, but they exercise it in a very different way. They exercise it through content moderation decisions, through community standards. They implement it both through human decision-makers and also through algorithmic ones.

Can I just interrupt you?


This sounds like you’re saying there is a difference between a platform and a publisher, which are loaded phrases in the Section 230 debate, but you can see how this mirrors that kind of argument, right?

No. I’m not sure it does. Let me just tell you what the argument is, and then you can decide whether it mirrors it or not. The argument is just that these two kinds of editorial decision-making look different, and, because they look different, it’s possible that any particular regulation that would burden the editorial judgment of The New York Times might not burden the editorial judgment of Twitter, or it might not burden it to the same extent.

That seems important to me because otherwise, if the question we’re going to ask every time Congress tries to regulate the platforms is “Could they do this to The New York Times?” then we’re going to end up with no regulation at all — no due process regulation, no transparency regulation. Because, of course, Congress can’t say to The New York Times, “Explain why you rejected this op-ed” or, “If you reject the op-ed, then you need to give the person who submitted it an opportunity to appeal the decision to the editor-in-chief.” It would be ridiculous for Congress to even propose something like that, but it’s not obvious at all to me that we should react the same way to those kinds of regulations when they’re proposed with respect to the platforms. I’m not the only one who thinks that; some of the platforms, too, have not just invited that kind of regulation, but have also already started to provide some of that transparency themselves.

If it were so offensive to the idea of editorial discretion that platforms should be required to provide that kind of transparency, it’s a little bit weird that they’re providing a degree of that kind of transparency already. So, the only argument we were trying to make is that these two kinds of entities exercise editorial discretion in different ways, and that might matter to the constitutionality of any particular regulation. It seems to me that should be noncontroversial, and that it has to be true. It cannot be the case that the First Amendment is indifferent to the nature of the editorial decision-making. It just wouldn’t make any sense. If that were the case, it wouldn’t make sense that the platforms are transparent in ways that newspapers never are. That seems like an important fact to me.

What I’d push you on, though, is that your argument says there’s a set of actors that look like open access social media platforms and there’s a set of actors that look like The New York Times, The Verge, or Wired magazine, and they’re different. What do you think are the differences?

They’re doing different things. Yeah.

What is the line? How would you define that, such that anybody could understand? We have a comment section, The New York Times has a comment section. That looks more like Facebook than not.

Absolutely. Yes. That’s why I don’t think the line should be between newspapers and social media companies. The line should be drawn on the basis of the kind of editorial judgment that is being exercised in that particular context. If The New York Times has a comment section, as it does, The New York Times looks a little bit more like a social media company. In that particular context, what the Times is doing is something much more akin to what Facebook does in its main business.

So I wouldn’t draw the line between newspapers and social media companies. I would draw it on the basis of the function. What function is the entity engaged in, in that particular context? Now, you might still ask the same question: how are you going to draw the lines based on function? I think the only way to answer that question is on a case-by-case basis. That’s what courts do. They draw those kinds of lines on the basis of case-by-case decision-making.

This is not a novel proposition, even on this question of which entities are exercising editorial judgment. There’s a long line of Supreme Court cases in which the court has given content to that phrase “editorial judgment,” through case-by-case decision-making. There was a 1974 case, Miami Herald Publishing Co. v. Tornillo, in which the court said that the newspaper was exercising editorial judgment.

Then there was a case [12 years] later, [Pacific Gas & Electric Company v. Public Utilities Commission of California], in which the Court said that a utility was exercising editorial judgment when it decided whether or not to include certain content in the envelopes that it sent to its subscribers.

Then, a decade later, the Court said [in Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston] that a parade organizer was exercising editorial judgment in deciding who can be part of the parade or not. That’s case-by-case decision-making to figure out who is exercising editorial judgment, and you could have the same kind of case-by-case decision-making with respect to what kinds of burdens on editorial judgment are constitutionally permissible. 

The only argument I’m making here is that the kinds of burdens that are constitutionally permissible might turn on whether you’re regulating a parade, or a utility, or a newspaper engaged in traditional newspaper activities, or a social media company engaged in traditional social media activities. That, it seems to me, has got to be right.

I feel you on that. I’m looking at the attempts at definitions, across the whole policy landscape. And those attempts at definitions keep crashing into the rocks, right?


The Federal Trade Commission tried to sue Facebook for being a monopoly social media provider. It could not even define the market that Facebook operates in, so its lawsuit failed on the first cut of defining the market for Facebook’s services. I’m looking at the Florida and Texas laws. Florida’s famously has an extremely corrupt definition, excluding companies that own a theme park in the state so that Disney would not be hit by the law.

Even on the first cut — what is a social media company? — the government seems to be flailing.

That’s true, but you do not have to be a particularly sophisticated First Amendment expert to understand that the Disney exception was not going to fly, right?


To some extent, the bad definitions are a reflection of bad motivations, but there are more serious efforts, especially at the federal level. There is a draft bill that Sens. Coons, Portman, and Klobuchar issued last week that deals almost entirely with transparency issues, and it would require the platforms to share certain kinds of data with researchers. It would provide a safe harbor for journalists and researchers who study the platforms. In my view, it’s very, very carefully done. I’m not sure that it couldn’t be improved. I’m sure that there are tweaks that could be made that would make it stronger, both as a matter of effectiveness and as a matter of standing up to a First Amendment challenge. To me that seems like a very serious effort at drafting regulation that would strengthen democratic values online, and it shouldn’t be understood as an affront to the First Amendment.

It may be that the first wave of these laws, like the Florida and Texas laws, just don’t go anywhere, for good reasons, but I don’t think we should take that to mean that no regulation is possible here, or no regulation is likely to survive First Amendment scrutiny.

There’s another view in the tech industry that I actually hear quite a bit. It’s surprising to me, especially because, as you say, the platforms’ lawyers are in court arguing for these very expansive First Amendment interpretations. However, the other view is, “Boy, the First Amendment is annoying. I wish the United States would be more like Germany, and just write a speech code that says Nazis are illegal, and here’s some speech that’s illegal, and write the content moderation standards for us. Because us trying to do it is an endless pain point. And I, Mark Zuckerberg, am tired of it. I’m going to rename my company to Meta, and do metaverse stuff instead of thinking about speech regulation at scale, which is an impossible problem.”

It’s a more common view than I ever expected. I think that’s a horrible answer, but I hear it a lot. Do you think there’s a way for the United States to actually do something like that for this set of companies, if you can define them?

Do something like that, meaning impose that kind of speech code?


No. I don’t see the First Amendment as an obstacle to good ideas in this space, but it is an obstacle to some bad ideas. I would put that one on my list of bad ideas. Anybody who actually finds this proposal appealing should just think about how that power would’ve been used had the last administration had it. If President Trump had the power to define, for example, vaccine misinformation, how would President Trump have defined that phrase? What would the speech code have looked like for the platforms?

I suspect that the people who think that the imposition of a speech code would solve our problems here maybe haven’t thought very deeply about what that speech code would actually look like, and who would get to write it, and who would get to enforce it. I think that, on this particular point, we’re very lucky to have the First Amendment because it protects us from exactly that so-called solution to the problem. 

Obviously the companies disagree with me on this point, but I don’t think the First Amendment is an obstacle to the kinds of regulations that actually make sense, that would actually do something to address real problems. I keep going back to transparency, privacy, and due process, but there are others as well, like interoperability, for example. If Congress wanted to do something on interoperability that gave developers the right to build on top of the digital infrastructure that the big technology companies have created, I don’t think the First Amendment would be an obstacle to that, and that might actually do a lot to address issues relating to monopoly power. Similarly, in privacy law, creating privacy protections that limited what the companies can collect and how they can use that information would have a direct impact on privacy, but also on the quality of our speech environment because it’s all that data that feeds micro-targeting or nano-targeting of messages that often contain misinformation.

So, when I think through the kinds of non-viewpoint discriminatory proposals that we have now mentioned several times — Congress could actually improve our speech environment in very significant ways without generating serious First Amendment issues.

I want to end on a big idea. You started by saying these are the new public squares. The users of these platforms have an interest too. I think most people in America are more often touched by YouTube’s content moderation policies than any state or federal law. People are more aware of YouTube copyright strikes than maybe even the speed limit around them, right? The platforms and their rules are in your face all the time as you use the internet. Supreme Court Justice Clarence Thomas wrote in a concurrence that we should just call social media companies common carriers.


We should use this other language from telecom law and just leave the First Amendment entirely and start regulating these companies as utilities, like the phone company. We’re also talking in the context of a Supreme Court that seems poised to flip 50 years of Roe v. Wade precedent. This court seems more willing to leave some precedent behind. A lot of the First Amendment precedent that we’re talking about is really only 70 years old. The notion of strict scrutiny is 70 years old. 


So it just seems like there’s also room for another method here of thinking about how to regulate these platforms that almost has nothing to do with the First Amendment — escapes it entirely — but might lead to other significant kinds of consequences. Do you see that as a danger? Do you see that as an opportunity? How do you think about that?

It doesn’t seem obvious to me that this would avoid First Amendment issues. I think it would just generate its own set of First Amendment challenges because the platforms would argue that they are not, in fact, common carriers. They have never held themselves out to the entire public in the way that common carriers are usually thought to do. To the contrary, the platforms all have community standards or something akin to community standards, and they all have content moderation policies, and that makes them look very different from something like AT&T, which really is open to all comers. Now, there’s a certain kind of historical circularity to that argument, but the fact remains that these platforms do, in fact, exercise a great deal of editorial discretion. In fact, that’s what a lot of people are complaining about: that they’re exercising editorial discretion. But, the fact that they’re exercising editorial discretion makes them look different from railways and the telecoms.

I’m not sure how far that argument goes as a matter of legal doctrine or common sense, but you’re right that Justice Thomas expressed some enthusiasm for it in that concurrence, in the Trump Twitter case, and appeals court judges have also expressed some enthusiasm for it. Florida and Texas are both, to some extent, relying on the common carrier argument. Eugene Volokh — who is a legal scholar who’s sometimes associated with libertarianism — has this argument that, to the extent the social media platforms are just hosting, they can be characterized as common carriers. It’s not obvious to me that that distinction is actually a workable one. Can you actually draw the line between hosting and everything else that the platforms do? That seems complicated to me, but maybe conceptually there’s some value in distinguishing these two functions. I don’t know, but I don’t think that just asserting that the platforms are common carriers is going to get us out of the First Amendment world. To the contrary, it’s just going to raise a whole other set of First Amendment questions.

As somebody who has covered net neutrality for 10 years, I have to say, just the phrase “common carrier” lights my brain on fire. Where do you think this goes next? What should people be looking for over the course of the next year?

There are going to be these two appeals court arguments and decisions over the next few months. The Florida case is first. The 11th Circuit is going to hear this case early next year, and then the Fifth Circuit will hear the Texas case, if not in the spring, then I think early in the summer. It’s entirely conceivable that one of those cases goes up to the Supreme Court, but even if they don’t, I think those decisions will end up shaping the legislative debate both at the state and federal level over the next year or two, because those courts are going to start to sketch out what the First Amendment means in this particular context, and legislatures will then have to take into account the limits those courts have put in place. 

That’s why we filed the brief we did in the Florida case. We really want to make sure that the courts understand the implications of accepting the arguments that the companies are making here, and that if you accept those arguments, you’re not just ruling out the Florida and Texas laws. You’re ruling out any other conceivable legislation that legislatures might come up with in the future.

Great. Well, Jameel, this has been really illuminating. Thank you for coming on Decoder.

Thank you. Happy to do it.

Decoder with Nilay Patel /

A podcast from The Verge about big ideas and other problems.

Subscribe now!