The US Department of Justice has submitted a proposal to weaken Section 230 of the Communications Decency Act, the law that protects websites and apps from liability for third-party content. The proposal would make it riskier for sites to take down offensive content and would strip their immunity for hosting material related to terrorism, child sex abuse, or cyberstalking. It would also remove some protections for sites that don’t sufficiently explain their content moderation policies.
The new rules are a concrete application of ideas the Justice Department floated months ago. They cover two competing goals for Section 230 reform: pushing web platforms to more aggressively remove harmful (and sometimes illegal) content like harassment and child sexual abuse material, while discouraging them from removing content from conservative and far-right users, including misinformation and hate speech. In a letter to Congress, the Justice Department said it aimed to stop platforms from “censoring lawful speech and promoting certain ideas over others” while also making sure they can’t “escape liability even when they knew their services were being used for criminal activity.”
No protections for “bad Samaritans”
In practice, the new bill would narrow the protections that let websites and apps remove content they deem generally “objectionable.” It would protect sites from lawsuits over their moderation decisions only if they could show an “objectively reasonable belief” that the content was lewd, excessively violent, promoted terrorism and violent extremism, promoted self-harm, or was unlawful. Many of these decisions would still be covered under the First Amendment, but removing Section 230 protections could drag out legal battles over allegations of social media censorship.
Sites would also lose those protections if they don’t state their moderation practices “plainly and with particularity” online, and they would have to offer “timely notice” providing a specific explanation of why someone’s content was removed.
Conversely, if sites were sued for leaving illegal content online, they wouldn’t be protected if they acted as “bad Samaritans” who purposely promoted or solicited illegal material. They also wouldn’t be protected if they ignored notice of criminal activity.
Under Attorney General William Barr, the Justice Department has been holding discussions on Section 230 since early 2020, but it’s also responding to an executive order signed by President Donald Trump in May. There are already several proposals for changing Section 230, although few have gone further than an introduction in Congress. The Justice Department’s offering echoes elements of the EARN IT Act, the PACT Act, and a recent bill from three Republican senators.
Attorney and activist Carrie Goldberg — who has taken web platforms like Grindr to court for enabling harassment — praised the proposal for making it easier to sue platforms that deliberately enable harm or refuse to act on complaints. (On the other hand, the changes could burden small sites with greater legal risks and make all platforms more vulnerable to bad-faith complaints. Sites may also not be able to immediately tell if a piece of content is illegal, a problem that Section 230 helps mitigate.)
But the overall plan drew criticism from advocacy groups like Public Knowledge. “Any positive ideas in the DOJ’s proposal are entirely outweighed by its overall purpose, which is to put obstacles in the way of digital platforms that want to rid their services of misinformation, hate speech, and other forms of objectionable content,” writes legal director John Bergmayer. On Twitter, Bergmayer called the plan’s intent “a form of ‘must carry’ for conservatives and for hate speech.”