
How Cloudflare got Kiwi Farms wrong

Why more platforms need to close the stochastic terrorism loophole

Photo by Amelia Holowaty Krales / The Verge

Today let’s talk about Kiwi Farms, Cloudflare, and whether infrastructure providers ought to take more responsibility for content moderation than they have generally taken.


Kiwi Farms is a nearly 10-year-old web forum, founded by a former administrator of the popular QAnon wasteland 8chan, that has become notorious for waging online harassment campaigns against LGBT people, women, and others. It came to popular attention in recent weeks after a well-known Twitch creator named Clara Sorrenti spoke out against the recent wave of anti-trans legislation in the United States, leading to terrifying threats and violence against her by people who organized on Kiwi Farms.

Ben Collins and Kat Tenbarge wrote about the situation at NBC:

Sorrenti, known to fans of her streaming channel as “Keffals,” says that when her front door opened on Aug. 5 the first thing she saw was a police officer’s gun pointed at her face. It was just the beginning of a weekslong campaign of stalking, threats and violence against Sorrenti that ended up making her flee the country. 

Police say Sorrenti’s home in London, Ontario, had been swatted after someone impersonated her in an email and said she was planning to perpetrate a mass shooting outside of London’s City Hall. After Sorrenti was arrested, questioned and released, the London police chief vowed to investigate and find who made the threat. Those police were eventually doxxed on Kiwi Farms and threatened. The people who threatened and harassed Sorrenti, her family and police officers investigating her case have not been identified.

In response to the harassment, Sorrenti began a campaign to pressure Cloudflare into no longer providing its security services to Kiwi Farms. Thanks to her popularity on Twitch, and the urgency of the issue, #DropKiwiFarms and #CloudflareProtectsTerrorists both trended on Twitter. And the question became what Cloudflare — a company that has been famously resistant to intervening in matters of content moderation — would do about it.

Most casual web surfers may be unaware of Cloudflare’s existence. But the company’s offerings are essential to the functioning of the internet. And it provided at least three services that have been invaluable to Kiwi Farms.

Twice before in its history, Cloudflare has confronted related high-profile controversies in moderation

First, Cloudflare made Kiwi Farms faster and thus easier to use, by generating thousands of copies of it and storing them at endpoints around the world, where they could be delivered more quickly to end users. Second, it protected Kiwi Farms from distributed denial-of-service (DDoS) attacks, which can crash sites by overwhelming them with bot traffic. And third, as Alex Stamos points out here, it hid the identity of the site's web hosting company, preventing people from pressuring the hosting provider to take action against it.
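That third service is a structural property of any reverse proxy: because clients connect only to the proxy's address, the origin server's location stays a private detail of the proxy's configuration. The sketch below is a minimal, hypothetical illustration using only Python's standard library; it is not Cloudflare's actual implementation, and the server names and ports are invented for the example.

```python
# Hypothetical sketch: a reverse proxy that conceals its origin server.
# The client only ever connects to the proxy's address; ORIGIN_PORT is
# known solely to the proxy, which is the same structural reason a
# service like Cloudflare hides who actually hosts a site.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Origin(BaseHTTPRequestHandler):
    """The 'hidden' web host."""
    def do_GET(self):
        body = b"hello from the hidden origin"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # silence per-request logging

class Proxy(BaseHTTPRequestHandler):
    """Fetches from the origin on the client's behalf."""
    def do_GET(self):
        # Only the proxy knows the origin's address; the client never
        # opens a connection to it.
        with urllib.request.urlopen(
                f"http://127.0.0.1:{ORIGIN_PORT}{self.path}") as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

# Bind both servers to ephemeral ports and serve from daemon threads.
origin = HTTPServer(("127.0.0.1", 0), Origin)
ORIGIN_PORT = origin.server_address[1]
proxy = HTTPServer(("127.0.0.1", 0), Proxy)
PROXY_PORT = proxy.server_address[1]
for server in (origin, proxy):
    threading.Thread(target=server.serve_forever, daemon=True).start()

# The client's request names only the proxy's address.
with urllib.request.urlopen(f"http://127.0.0.1:{PROXY_PORT}/") as resp:
    print(resp.read().decode())
```

In practice this is why pressure campaigns target the proxy operator: the origin host's identity never appears in the client-facing traffic.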

Cloudflare knew it was doing all this, of course, and it has endeavored to make principled arguments for doing so. Twice before in its history, it has confronted related high-profile controversies in moderation — once in 2017, when it turned off protection for the neo-Nazi site the Daily Stormer, and again in 2019, when it did the same for 8chan. In both cases, the company took pains to describe the decisions as “dangerous” — warning that it would create more pressure on infrastructure providers to shut down other websites, a situation that would likely disproportionately hurt marginalized groups.

Last week, as pressure on the company to do something about Kiwi Farms grew, Cloudflare echoed that sentiment in a blog post. (One that did not mention Kiwi Farms by name.) Here are CEO Matthew Prince and head of public policy Alissa Starzak:

“Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online. We believe cyberattacks, in any form, should be relegated to the dustbin of history.”

It’s admirable that Cloudflare has been so principled in developing its policies and articulating the rationale behind them. And I share the company’s basic view of the content moderation technology stack: that the closer you get to hosting, recommending, and otherwise driving attention to content, the more responsibility you have for removing harmful material. Conversely, the further you get from hosting and recommending, the more reluctant you should be to intervene.

The logic is that the people hosting and recommending are most directly responsible for the content being consumed, and have the most context on what the content is and why it might (or might not) be a problem. Generally speaking, you don’t want Comcast deciding what belongs on Instagram.

These policies are undeniably convenient to Cloudflare

Cloudflare also argues that we should pass laws to dictate what content should be removed, since laws emerge from a more democratic process and thus have more legitimacy. I’m less sympathetic to the company on that front: I like the idea of making content moderation decisions more accountable to the public, but I generally don’t want the government intervening in matters of speech.

However principled these policies are, though, they are undeniably convenient to Cloudflare. They mean the company rarely has to consider content moderation issues at all, which has all sorts of benefits: it helps Cloudflare serve the largest number of customers, keeps it out of hot-button cultural debates, and keeps it off the radar of regulators who are increasingly skeptical of tech companies moderating too little — or too much.

Generally speaking, when companies can push content moderation off on someone else, they do. There’s very little upside in policing speech unless it’s necessary for the survival of the business.


But I want to return to that sentiment in the company’s blog post, the one that says: “Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online.” The idea is that Cloudflare wants to take DDoS and other attacks off the table for everyone, both good actors and bad, and that harassment should be fought in (unnamed) other ways.

Certainly it would be a good thing if everyone from local police departments to national lawmakers took online harassment more seriously, and developed a coordinated strategy to protect victims from doxxing, swatting, and other common vectors of online abuse — while also doing better at finding and prosecuting their perpetrators.

In practice, though, they don’t. And so Cloudflare, inconvenient as it is for the company, has become a legitimate pressure point in the effort to stop these harassers from threatening or committing acts of violence. Yes, Kiwi Farms could conceivably find other security providers. But there aren’t that many of them, and Cloudflare’s decision to stop services for the Daily Stormer and 8chan really did force both operations further underground and out of the mainstream.

Cloudflare’s decision arguably made it complicit in whatever happened

And so its decision to continue protecting Kiwi Farms arguably made it complicit in whatever happened to poor Sorrenti, and anyone else the mob might decide to target. (Three people targeted by Kiwi Farms have died by suicide, according to Gizmodo.)

And while we’re on the subject of complicity, it’s notable that for all its claims about wanting to bring about an end to cyberattacks, Cloudflare provides security services to… makers of cyberattack software! That’s the claim made in this blog post from Sergiy P. Usatyuk, who was convicted of running a large DDoS-for-hire scheme. Writing in response to the Kiwi Farms controversy, Usatyuk notes that Cloudflare profits from such schemes because it can sell protection to the victims.

In its blog post, Cloudflare compares itself to a fire department that puts out fires no matter how bad a person the resident of the house may be. In response, Usatyuk writes: “CloudFlare is a fire department that prides itself on putting out fires at any house regardless of the individual that lives there. What they forget to mention is they are actively lighting these fires and making money by putting them out!”

Again, none of this is to say that there aren’t good reasons for Cloudflare to stay out of most moderation debates. There are! And yet it does matter to whom the company decides to deploy its security guards — a service it often provides for free, incidentally — enabling harassment and worse for a small but committed group of the worst people on the internet.


In the aftermath of Cloudflare’s initial blog post, Stamos predicted the company’s stance wouldn’t hold. “There have been suicides linked to KF, and soon a doctor, activist or trans person is going to get doxxed and killed or a mass shooter is going to be inspired there,” he wrote. “The investigation will show the killer’s links to the site, and Cloudflare’s enterprise base will evaporate.”

Fortunately, it hasn’t yet come to that. But credible threats against individuals did escalate over the past several days, the company reported, and on Saturday Cloudflare reversed course and stopped protecting Kiwi Farms.

“This is an extraordinary decision for us to make and, given Cloudflare’s role as an Internet infrastructure provider, a dangerous one that we are not comfortable with,” Prince wrote in a new blog post. “However, the rhetoric on the Kiwi Farms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike we have previously seen from Kiwi Farms or any other customer before.”

“We do not believe we have the political legitimacy to determine generally what is and is not online”

It feels like a massive failure of social policy that the safety of Sorrenti and other people targeted by online mobs comes down to whether a handful of companies will agree to continue protecting their organizing spaces from DDoS attacks, of all things. In some ways, it feels absurd. We’re offloading what should be a responsibility of law enforcement onto a for-profit provider of arcane internet backbone services.

“We do not believe we have the political legitimacy to determine generally what is and is not online by restricting security or core Internet services,” the company wrote last week. And arguably it doesn’t!

But sometimes circumstances force your hand. If your customers are plotting violence — violence that may in fact be possible only because of the services you provide — the right thing to do isn’t to ask Congress to pass a law telling you what to do. It’s to stop providing those services.

There isn’t always a clear moment when an edgy forum, full of trolls, tips over into incitement of violence. Instead, far-right actors increasingly rely on “stochastic terrorism” — actively dehumanizing groups of people over long periods of time, suggesting that it sure would be nice if someone did something about “the problem,” confident that some addled member of their cohort will eventually take up arms in an effort to impress their fellow posters.

Infrastructure providers can’t turn a blind eye until the last possible moment

One reason why this has been so effective is that it is a strategy designed to resist content moderation. It offers cover to the many social networks, web hosts, and infrastructure providers that are looking for reasons not to act. And so it has become a loophole that the far right can exploit, confident that so long as they don’t explicitly call for murder they will remain in the good graces of the platforms.

It’s time for that loophole to close. In general we should resist calls for infrastructure providers to intervene on matters of content moderation. But when those companies provide services that aid in real-world violence, they can’t turn a blind eye until the last possible moment. Instead, they should recognize groups that organize harassment campaigns much earlier, and use their leverage to prevent the loss of life that will now forever be linked to Kiwi Farms and the tech stack upon which it sat.

In its blog posts, Cloudflare refers repeatedly to its desire to protect vulnerable and marginalized groups. Fighting for a free and open internet, one that is resistant to pressure from authoritarian governments to shut down websites, is a critical part of that. But so, too, is offering actual protection to the vulnerable and marginalized groups that are being attacked by your customers.

I’m glad Cloudflare came around in the end. Next time, I hope it will get there faster.
