
How Twitter’s child porn problem ruined its plans for an OnlyFans competitor

Internal documents and Twitter employees reveal the need for massive investment to remove illegal content — but executives haven’t listened


In the spring of 2022, Twitter considered making a radical change to the platform. After years of quietly allowing adult content on the service, the company would monetize it. The proposal: give adult content creators the ability to begin selling OnlyFans-style paid subscriptions, with Twitter keeping a share of the revenue.

Had the project been approved, Twitter would have risked a massive backlash from advertisers, who generate the vast majority of the company’s revenue. But the service could have generated more than enough to compensate for losses. OnlyFans, by far the most popular of the adult creator sites, is projecting $2.5 billion in revenue this year — about half of Twitter’s 2021 revenue — and is already a profitable company.

Some executives thought Twitter could easily begin capturing a share of that money since the service is already the primary marketing channel for most OnlyFans creators. And so resources were pushed to a new project called ACM: Adult Content Monetization.

Before the final go-ahead to launch, though, Twitter convened 84 employees to form what it called a “Red Team.” The goal was “to pressure-test the decision to allow adult creators to monetize on the platform, by specifically focusing on what it would look like for Twitter to do this safely and responsibly,” according to documents obtained by The Verge and interviews with current and former Twitter employees.

Executives are apparently well-informed about the issue, and the company is doing little to fix it

What the Red Team discovered derailed the project: Twitter could not safely allow adult creators to sell subscriptions because the company was not — and still is not — effectively policing harmful sexual content on the platform.

“Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale,” the Red Team concluded in April 2022. The company also lacked tools to verify that creators and consumers of adult content were of legal age, the team found. As a result, in May — weeks after Elon Musk agreed to purchase the company for $44 billion — the company delayed the project indefinitely. If Twitter couldn’t consistently remove child sexual exploitation content on the platform today, how would it even begin to monetize porn?

Launching ACM would worsen the problem, the team found. Allowing creators to begin putting their content behind a paywall would mean that even more illegal material would make its way to Twitter — and more of it would slip out of view. Twitter had few effective tools available to find it.

Taking the Red Team report seriously, leadership decided it would not launch Adult Content Monetization until Twitter put more health and safety measures in place.

Twitter has not committed sufficient resources to detect, remove, and prevent harmful content from the platform

The Red Team report “was part of a discussion, which ultimately led us to pause the workstream for the right reasons,” said Twitter spokeswoman Katie Rosborough.

But that did little to change the problem at hand — one that employees from across the company have been warning about for over a year. According to interviews with current and former staffers, as well as 58 pages of internal documents obtained by The Verge, Twitter still has a problem with content that sexually exploits children. Executives are apparently well-informed about the issue, and the company is doing little to fix it.

“Twitter has zero tolerance for child sexual exploitation,” Twitter’s Rosborough said. “We aggressively fight online child sexual abuse and have invested significantly in technology and tools to enforce our policy. Our dedicated teams work to stay ahead of bad-faith actors and to help ensure we’re protecting minors from harm — both on and offline.” 


While the Red Team’s work succeeded in delaying the Adult Content Monetization project, nothing the team discovered should have come as a surprise to Twitter’s executives. Fifteen months earlier, researchers working on the team tasked with making Twitter more civil and safe sounded the alarm about the weak state of Twitter’s tools for detecting child sexual exploitation (CSE) and implored executives to add more resources to fix it.

The system that Twitter heavily relied on to discover CSE had begun to break

“While the amount of CSE online has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not,” begins a February 2021 report from the company’s Health team. “Teams are managing the workload using legacy tools with known broken windows. In short (and outlined at length below), [content moderators] are keeping the ship afloat with limited-to-no-support from Health.”

Employees we spoke to reiterated that despite executives knowing about the company’s CSE problems, Twitter has not committed sufficient resources to detect, remove, and prevent harmful content from the platform.

Part of the problem is scale. Every platform struggles to manage the illegal materials users upload to the site, and in that regard, Twitter is no different. The platform, a critical medium for global communication with 229 million daily users, has the content moderation challenges that come with operating any large space on the internet and the added struggle of outsized scrutiny from politicians and the media. 

But unlike larger peers, including Google and Facebook, Twitter has suffered from a history of mismanagement and a generally weak business that has failed to turn a profit for eight of the past 10 years. As a result, the company has invested far less in content moderation and user safety than its rivals. In 2019, Mark Zuckerberg boasted that the amount Facebook spends on safety features exceeds Twitter’s entire annual revenue.

Meanwhile, the system that Twitter heavily relied on to discover CSE had begun to break.

For years, tech platforms have collaborated to find known CSE material by matching images against a widely deployed database called PhotoDNA. Microsoft created the service in 2009, and though it is accurate in identifying CSE, PhotoDNA can only flag known images. By law, platforms that search for CSE are required to report what they find to the National Center for Missing and Exploited Children (NCMEC), a government-funded nonprofit that tracks the problem and shares information with law enforcement. An NCMEC analysis cited by Twitter’s working group found that of the 1 million reports submitted each month, 84 percent contain newly discovered CSE — none of which would be flagged by PhotoDNA. In practice, this means Twitter is likely failing to detect a significant amount of illegal content on the platform.
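
The hash-and-match approach PhotoDNA takes can be illustrated with open-source perceptual hashing. The sketch below is not Microsoft’s algorithm (which is proprietary) or anything Twitter runs; it simply shows the general idea using the Python imagehash package, with hypothetical file paths and a hypothetical distance threshold.

    # A minimal sketch of hash-based matching against a set of known image
    # fingerprints. PhotoDNA itself is proprietary; this uses the open-source
    # perceptual hash from the "imagehash" package instead, and the file
    # paths, hash set, and distance threshold are hypothetical.
    from PIL import Image
    import imagehash

    # Fingerprints of previously identified images, as shared between platforms.
    known_hashes = {
        imagehash.phash(Image.open(path))
        for path in ["known_image_1.png", "known_image_2.png"]
    }

    def matches_known_image(path, max_distance=4):
        """Return True if the image at `path` is a near-duplicate of a known one."""
        candidate = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash objects gives their Hamming distance,
        # so small values mean the images are visually near-identical.
        return any(candidate - known <= max_distance for known in known_hashes)

Matching like this only catches near-duplicates of images already in the database; newly created material (the 84 percent NCMEC describes) slips past it, which is why automated detection that goes beyond hash matching matters.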

Twitter failed to remove the videos, “allowing them to be viewed by hundreds of thousands of the platform’s users”

The 2021 report found that the processes Twitter uses to identify and remove CSE are woefully inadequate — largely manual at a time when larger companies have increasingly turned to automated systems that can catch material that isn’t flagged by PhotoDNA. Twitter’s primary enforcement software is “a legacy, unsupported tool” called RedPanda, according to the report. “RedPanda is by far one of the most fragile, inefficient, and under-supported tools we have on offer,” one engineer quoted in the report said.

Twitter devised a manual system to submit reports to NCMEC. But the February report found that because the process is so labor-intensive, it created a backlog of cases to review, delaying many instances of CSE from being reported to law enforcement.

The machine learning tools Twitter does have are mostly unable to identify new instances of CSE in tweets or live video, the report found. Until February 2022, there was no way for users to flag content as anything more specific than “sensitive media” — a broad category that meant some of the worst material on the platform often wasn’t prioritized for moderation. In one case, an illegal video was viewable on the platform for more than 23 hours, even after it had been widely reported as abusive.

“These gaps also put Twitter at legal and reputation risk,” Twitter’s working group wrote in its report.

Rosborough said that since February 2021, the company has increased its investment in CSE detection significantly. She noted that it currently has four open positions for child safety roles at a time when Twitter has slowed down its pace of hiring.

Earlier this year, NCMEC accused Twitter of leaving up videos containing “obvious” and “graphic” child sexual abuse material in an amicus brief submitted to the Ninth Circuit in John Doe #1 et al. v. Twitter. “The children informed the company that they were minors, that they had been ‘baited, harassed, and threatened’ into making the videos, that they were victims of ‘sex abuse’ under investigation by law enforcement,” the brief read. Yet, Twitter failed to remove the videos, “allowing them to be viewed by hundreds of thousands of the platform’s users.”

This echoed a concern of Twitter’s own employees, who wrote in a February report that the company, along with other tech platforms, has “accelerated the pace of CSE content creation and distribution to a breaking point where manual detection, review, and investigations no longer scale” by allowing adult content and failing to invest in systems that could effectively monitor it. 

The years-long struggle to address CSE ran into a competing priority at Twitter: greatly increasing its user and revenue numbers

To address the issue, the working group called on Twitter executives to work on a series of projects. The group recommended that the company finally build a single tool to process CSE reports, collect and analyze related data, and submit reports to NCMEC. It should create unique fingerprints (called hashes) of the CSE it finds and share those fingerprints with other tech platforms. And it should build features to protect the mental health of content moderators, most of whom work for third-party vendors, by blurring the faces of abuse victims or de-saturating the images.
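
The two moderator-protection measures the group recommends, desaturating imagery and blurring faces, can be approximated with standard image tooling. The sketch below is a hypothetical illustration using Pillow, not Twitter’s implementation; the input path, face bounding box, and blur radius are assumptions, and a real pipeline would get the box from a face detector.

    # Hypothetical illustration of the moderator-protection transforms the
    # working group describes: desaturate an image and blur an (assumed)
    # face region. A real pipeline would locate the box with a face detector.
    from PIL import Image, ImageFilter, ImageOps

    def soften_for_review(path, face_box=(100, 80, 300, 320), blur_radius=24):
        img = Image.open(path).convert("RGB")
        # Desaturate the whole image so reviewers see less vivid content.
        img = ImageOps.grayscale(img).convert("RGB")
        # Blur only the face region, then paste it back in place.
        face = img.crop(face_box).filter(ImageFilter.GaussianBlur(blur_radius))
        img.paste(face, face_box)
        return img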

But even in 2021, before the company’s tumultuous acquisition by Musk began, the working group acknowledged that mustering the necessary resources would be a challenge.

“The task of ‘fixing’ CSE tooling is daunting,” they wrote. “[The Health team]’s strategy should be to chip away at these needs over time starting with the highest priority features to avoid the too-big-to-prioritize trap.”

The project may have been too big to prioritize after all. Aside from enabling in-app reporting of CSE, there appears to have been little progress on the group’s other recommendations. One of the research teams that had been most vocal about fixing Twitter’s CSE detection systems has been disbanded. (Twitter’s Rosborough says the team has been “refocused to reflect its core purpose of child safety” and has had dedicated engineers added to it.) Employees say that Twitter’s executives know about the problem, but the company has repeatedly failed to act. 


The years-long struggle to address CSE ran into a competing priority at Twitter: greatly increasing its user and revenue numbers. In 2020, the activist investor Elliott Management took a large position in Twitter in an effort to force out then-CEO Jack Dorsey. He survived the attempt, but to remain as CEO, Dorsey made three hard-to-keep promises: that Twitter would increase its user base by 100 million people, speed up revenue growth, and gain market share in digital advertising.

Dorsey quit as CEO in November 2021, having made little progress toward reaching those milestones. It was left to his hand-picked successor, former chief technology officer Parag Agrawal, to fulfill Elliott’s demands.

Under its former head of product, Kayvon Beykpour, Twitter had spent the past few years adding products for creators. Last summer, it began rolling out “ticketed Spaces,” letting users charge for access to its Clubhouse-like live audio product. The company added “Super Follows,” a way for users to offer subscriptions for non-sexually explicit content, last September. In both cases, the company takes a percentage of the user’s revenue, allowing it to make money outside its core ad business.

“Adult content was a huge differentiator for Twitter, and for those [working] on revenue, it was an untapped resource.”

While all of that unfolded, Twitter had become a major destination for another type of content: porn. In the nearly four years since Tumblr banned adult content, Twitter had become one of the only mainstream sites that allows users to upload sexually explicit photos and videos. It also attracted a significant number of performers who use Twitter to market and grow their businesses, using photos and short video clips as advertisements for paywalled services like OnlyFans.

“Adult content was a huge differentiator for Twitter, and for those [working] on revenue, it was an untapped resource,” a former employee says.

Twitter is so important to the porn world that fears the company will eventually cave to external pressure and ban adult content have regularly roiled the world of adult creators. In fact, though, by this spring, the company was considering a move that would make porn even more important to the platform — by placing it at the center of a new revenue plan.

Twitter already had Super Follows for non-explicit content, the thinking went. Why not add the feature for creators of adult content, too? The timing felt right, especially after OnlyFans alienated users by saying last year that it would ban adult content, only to reverse its stance a few days later.

Executives rarely discuss Twitter’s popularity as a destination for adult content. (One document obtained by The Verge suggests the company has a strategy “to minimize focus and press” related to the subject.) But over the past two years, the company got very serious about adult content and began actively exploring an OnlyFans-like service for its users.

By this spring, the company was nearing a final decision. On April 21st and 22nd, Twitter convened another Red Team, this time for the Adult Content Monetization project.

Twitter would have several strengths if it decided to compete with OnlyFans, the Red Team found. Adult creators have a generally favorable attitude toward the company, thanks to how easy Twitter makes it for them to distribute their content. The project was also “consistent with Twitter’s principles in free speech and freedom of expression,” they said. Finally, the company was planning to obtain a money transmitter license so it could legally handle payments.

Given the size of the opportunity, the Red Team wrote, “ACM can help fund infrastructure engineering improvements to the rest of the platform.” 

But the team found several key risks as well. “We stand to lose significant revenue from our top advertisers,” the team wrote. It also speculated that the move could alienate customers and attract significant scrutiny from Congress.

The biggest concerns, though, had to do with the company’s systems for detecting CSE and non-consensual nudity: “Today we cannot proactively identify violative content and have inconsistent adult content [policies] and enforcement,” the team wrote. “We have weak security capabilities to keep the products secure.”

Twitter has had several high-profile data breaches. Eventually, Twitter abandoned the project.

Fixing that would be costly, and the company would likely make enforcement errors. Non-consensual nudes, they wrote, “can ruin lives when posted and monetized.”

Moreover, the report said, “There are several challenges to maintaining this as a top priority. … We’re thinking about health as a parallel to monetization, instead of as a prerequisite.”  

Beykpour, Twitter’s former head of product, had pushed Twitter to roll out Real ID — a feature that would require users to upload government documents to prove their identity. If Twitter wanted to monetize adult content, it would need to verify the ages of the people creating that content, as well as the people watching it. But employees had already determined that Real ID presented serious problems. Matching IDs with government databases was expensive and required a secure network. Twitter has had several high-profile data breaches. Eventually, Twitter abandoned the project. 

Soon, the group’s priorities would change completely. On August 23rd, Twitter announced that the health team would be reorganized and combined with a team tasked with identifying spam accounts. The move came amid increasing pressure from Elon Musk, who claimed the company was lying about the number of bots on the platform. 

“It was a gut punch,” says a former researcher on the team. “For Elon Musk to declare that spam was the single most important question that needed to be answered in order for him to buy the company is ludicrous.”

But Twitter’s troubles with Musk — and the internal chaos they would cause — were just beginning.