
Tumblr is settling with NYC’s human rights agency over alleged porn ban bias

It’s going to review its moderation algorithm


Illustration by Alex Castro / The Verge

Tumblr and New York City’s Commission on Human Rights (CCHR) have settled discrimination allegations related to the company’s 2018 adult content ban, which city regulators say disproportionately affected LGBTQ users. The settlement requires Tumblr to revise its user appeals process and train its human moderators on diversity and inclusion issues, as well as review thousands of old cases and hire an expert to look for potential bias in its moderation algorithms.

The settlement, which did not involve a formal legal complaint and was signed last month, marks one of the first times that regulators have reached an agreement to change a social network’s moderation policies based on algorithmic bias issues. It resolves an investigation that the CCHR began in December 2018, shortly after Tumblr banned explicit sexual content and nudity — and enforced its rules with a comically inaccurate automated takedown system.

“If someone is doing business in New York City, we have the authority to investigate”

In an interview with The Verge, CCHR press secretary Alicia McCauley says the agency became interested after reports that the ban would have an outsized effect on Tumblr’s LGBTQ user base. McCauley notes that New York City’s Human Rights Law provides broad protections against bias based on categories like gender identity and sexual orientation. “If someone is doing business in New York City, we have the authority to investigate if it’s negatively affecting people,” she said.

The settlement gives Tumblr 180 days to hire an expert on sexual orientation and gender identity (SOGI) issues and provide related training to moderators. It must also hire someone with experience in this area and expertise in image classification, who will review Tumblr’s moderation algorithms to see if they’re more likely to flag LGBTQ content. As part of an overall review, Tumblr will reexamine 3,000 old cases where a user successfully appealed a takedown, looking for patterns that could indicate bias.

The deal appears to have happened largely because of owner Automattic, which acquired Tumblr from Verizon in 2019 and apparently cooperated closely with the CCHR. “I think that was a turning point in the investigation,” says CCHR attorney Alberto Rodriguez. Automattic had revised the original moderation system to add more human oversight even before the settlement. Under its ownership, Tumblr has also attempted to reconcile with LGBTQ users who departed as part of a larger community exodus.

Rodriguez believes the Tumblr settlement could be an early step in a larger nationwide regulatory movement. “I think it’s inevitable that social media companies are going to come under more government regulation and that more of these enforcement actions are going to come about,” he says.

Social media bias cases have rarely succeeded in court

Bias allegations against social media platforms have rarely succeeded in court, and today’s settlement seems to be bolstered by Automattic’s desire to overhaul Tumblr’s moderation and restore trust with the LGBTQ community. (Automattic is also a very small company with fewer legal resources than “Big Tech” giants.) The CCHR didn’t provide details about the evidence backing up its claims of discrimination, so it’s difficult to evaluate the details of that case. But far larger platforms like YouTube and Instagram have also faced accusations of discriminatory moderation without regulatory action, and YouTube, in particular, has beaten two lawsuits from LGBTQ and Black video creators who alleged algorithmic discrimination.

Rodriguez says that unlike in those cases, the CCHR’s city-level rules don’t require a specific intent to discriminate. But courts have also given social platforms broad latitude to moderate content under the First Amendment and Section 230 of the Communications Decency Act, and a CCHR lawsuit would have to stand up to that scrutiny. “Section 230 applies equally to federal, state, and municipal laws and enforcement,” notes Jeff Kosseff, author of comprehensive Section 230 history The Twenty-Six Words That Created the Internet.

But the larger issue of algorithmic race and gender bias has become an increasing priority for regulators, particularly in cases where it might affect people’s housing and employment options. And even without legal complaints, some companies like Twitter have reviewed their moderation algorithms under public pressure — sometimes making troubling discoveries along the way.

Correction, 1:49PM ET: Tumblr is owned by Automattic, not WordPress. We regret the error.