Everything that the big social networks banned this week, ranked

Reddit, Facebook, YouTube, TikTok, and Twitch are bringing out the ban hammers

Rarely do we see so many ban hammers drop over a 48-hour period as we have seen this week. The big social platforms, which were once loath to intervene in matters of political speech, are getting much more comfortable with the idea. Today let’s look at what happened and why — and, for the sake of variety, let’s try to rank the bans in order of their long-term importance.

5. YouTube bans a group of far-right creators. The most remarkable thing about YouTube getting rid of Stefan Molyneux, David Duke, and Richard Spencer, among others, is that it took so long. An exhausted “finally!” is often exclaimed in cases like this — but here even Facebook, which faces near-constant accusations of bending over backwards to appease the far right, had removed some of these accounts years ago. Spencer was removed in 2018; Duke was removed last year. (Molyneux, a white nationalist and proponent of racist science, still has his Facebook page, for reasons I can’t fathom; even MailChimp has gotten rid of him.)

As Julia Alexander explained at The Verge, the bans were made possible by an update to YouTube’s policies in June 2019 disallowing hate speech. Why did it take a full year to enforce this policy? Basically, it takes three strikes to get kicked off YouTube, and somehow, none of these creators had earned three strikes before the 2019 policy change. YouTube may have removed some of their videos after the change, but it doesn’t retroactively issue strikes, on fairness grounds. And so it took a year for the white supremacists to strike out.

Banning them now can’t make up for the fact that YouTube spent years recruiting large audiences for these men and others using algorithmic recommendations, giving them power that they will carry with them to whatever platforms will still have them. On the other hand, the number of places where they can easily find new followers on the internet appears to be shrinking, and for that at least I’m grateful.

4. Facebook bans a violent Boogaloo network. It was just a few days ago that I wrote this column about how the Boogaloo movement — a loose-knit collection of anti-government types, some of whom are agitating for a second civil war — had hijacked social networks to spread. Law enforcement officials say the group’s adherents used Facebook to plan the murder of a federal agent. When I wrote my piece, the company told me changes were coming, and today they arrived.

The gist is that a subset of the Boogaloo group has now been designated as a “dangerous organization” by Facebook, which triggers a variety of enforcement mechanisms. On Tuesday the company removed 220 Facebook accounts, 95 Instagram accounts, 28 Facebook pages, and 106 Facebook groups, plus more than 400 other groups and 100 other pages that were not connected to the main group but posted similar extremist content.

It seems obvious that Facebook would ban any violent anti-government group that was using its services to plan murder. What makes this one a shade more interesting to watch is how diffuse the Boogaloo movement itself is. It continuously changes the names with which it refers to itself, the insignias and clothing that members wear in public, and even the ideologies it espouses. (There is some confusion as to whether adherents who have voiced support for Black Lives Matter protesters are sincere.) Even for a company with the resources of Facebook, keeping track of the Boogaloo movement promises to be a significant challenge — and given the group’s alleged propensity for real-world violence, the stakes of getting it wrong are quite high.

3. Reddit bans The_Donald. The practical effect of Reddit removing what was perhaps its most notorious forum is likely to be limited. Its membership, which once numbered more than 800,000, had largely decamped to another website after Reddit placed The_Donald behind a warning screen last year. But symbolically, the move represents a significant shift in philosophy for Reddit, which once counted itself a free-speech bastion in the mold of early Twitter. CEO Steve Huffman put it to me this way:

“Reddit’s mission is to bring community and belonging to everybody in the world, and there is speech in the world and on Reddit that prevents other people from doing so,” Huffman told reporters. “Harassing speech or hateful speech prevents people from coming to Reddit and feeling safe and sharing their vulnerabilities ... So if we have speech on Reddit that’s preventing people from using Reddit the way that we intend it to be used, or that prevents us from achieving our mission, then it’s actually a very easy decision.”

Huffman also gave a thoughtful interview to Kevin Roose at the New York Times. Here’s what he said when asked about something else I mention in my story — his statement that he had shifted his thinking around the balance between free speech and safety.

“Over the years, we’ve been increasingly confronted with difficult decisions, and we have to weigh these trade-offs. And so here we are, believing that free speech and free expression are really important, and that’s one of the things that makes Reddit special, but at the same time, seeing that allowing everything is working against our mission.

The way out, for us, has been through our mission: What are we trying to accomplish on Reddit? And what’s the best path to get there?”

This strikes me as notable at a time when Facebook’s mission has been recast as “giving everyone a voice.” Reddit says its mission is to “bring community and belonging to everybody in the world.” And it turns out that more people feel a sense of community and belonging when you give slightly fewer people a voice.

2. Twitch temporarily bans President Trump. Let’s count the ways in which this is a big deal. One, even if temporary, here’s an Amazon-owned streaming service de-platforming the president of the United States. Two, the company explicitly cited “hateful conduct” as the reason — a charge that, however accurate, other platforms have tried not to say out loud. Three, the stream that led to this de-platforming was a video from 2015 that had aired on many mainstream television stations. It was Trump’s 2015 campaign kickoff rally, where he said — among other things — that “Mexico was sending rapists to the United States,” as Jake Kastrenakes put it at The Verge.

This puts Twitch in the position of having taken a stronger position against the president’s hate speech than, for example, CNN. Assuming it continues to hold Trump to this higher-than-usual standard, and Trump’s divisive rhetoric escalates further over the summer, Twitch could be the first major social platform to permanently de-platform the president. And that will have implications that go far beyond Twitch.

1. India bans TikTok. Yes, it has happened before, but it feels different this time. It’s one thing to regulate an app over concerns related to porn, or competition, or privacy, or some other domestic issue. It’s quite another to use an app as a pawn in a geopolitical turf war — one that might accelerate the splintering of the internet into walled-off regions. India and China are in a conflict over a disputed border in the Himalayas that recently resulted in the death of 20 Indian soldiers. India’s Ministry of Electronics and Information Technology responded by banning TikTok — along with WeChat and 57 other apps the country views as “engaged in activities ... prejudicial to sovereignty and integrity of India.”

Maybe TikTok will come back to India within a week or so, as it did the first time. But it seems likely we’ll see more bans like this over time, in more countries around the world. As Ben Thompson points out in a sharp piece at Stratechery, countries can use the centralization offered by the App Store — as well as old-fashioned internet service providers — to wage diplomacy in novel ways. It’s one thing to ban an app for content hosted within the app — YouTube has been blocked in various countries over the years for just that reason — and quite another to ban it as part of a game of brinkmanship between nation-states. If I were another big consumer app, I would be paying very, very close attention to this.


Yesterday I wrote about the Facebook ad boycott, and heard back from some people who thought I had been a little too rough on the advertisers — or a little too easy on the social network. One thing I heard is that the boycott is based on advertisers’ sincere belief that Facebook is going too easy on Trump — particularly over that “when the looting starts, the shooting starts” post from a few weeks back. I look at that question from the standpoint of Facebook having already removed three Trump posts this year alone, which chafes against the idea that the company is too scared to act.

In any case, my colleague Russell Brandom talked to Jade Magnus Ogunnaike, Color of Change’s deputy senior campaign director, about what the campaign really wants. Color of Change is part of the coalition of civil rights groups leading the advertiser boycott, and it’s worth reading her take on the movement and this week’s big bans:

Jade Magnus Ogunnaike: There are no quick fixes for companies like Reddit that have been steeped in racist culture from the very beginning. We can’t just cheer on the quick things. Companies need to actually undergo civil rights audits. They need to look at how racism and discrimination are showing up at every level in the company.

There’s no one thing you can do to fix racism in your company or to affirm that Black lives matter. What actually has to happen is that companies have to commit to a living wage for all of their employees, and then they need to invest in civil rights audits and take it step by step to implement those changes.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: Netflix is shifting $100 million to Black-owned banks. It’s the largest company yet to pledge cash to historically underfunded financial institutions. (Lucas Shaw / Bloomberg)

🔼 Trending up: Facebook is updating its News Feed algorithm to prioritize original reporting. It will also demote stories that aren’t transparent about who has written them. (Sara Fischer / Axios)


On Monday, California reached its highest single-day count of COVID-19 cases, as reported by the Los Angeles Times. More than 8,000 people were infected. Today, the state passed another marker: more than 6,000 people have died due to coronavirus-related causes.

It’s tempting to see the spike in cases as a result of an increase in testing. But officials say that’s not it. “Health officials have attributed the rising numbers to a combination of events: the further reopening of many businesses, mass protests over the death of George Floyd and clusters of private gatherings,” reports the Los Angeles Times.

That’s despite the fact that Gov. Gavin Newsom has been cautious about reopening the state. He issued a stay-at-home order for all residents on March 19th, and allowed counties to reopen only when they met certain criteria for testing and rates of infection.

Then there’s San Quentin. In mid-March, officials at the California state prison said there was “no indication” the facility had a coronavirus outbreak. As a precaution, however, the prison stopped all routine visitations and overnight family visits — a major blow to those living behind bars.

Then in June, 121 inmates were transferred from a prison in Chino to San Quentin. Chino’s California Institution for Men had been “an early hotbed of coronavirus cases.” Now, 1,059 people living at San Quentin have been infected, and a man on death row has died. “While it is impossible to ensure the safety of people living in prisons operating well above the capacity they were designed to house, the administration at San Quentin neglected to enact even the most basic health and safety measures,” wrote No More Tears, an organization founded by men incarcerated at San Quentin.

I’m working on an article about the situation at San Quentin. If you have any information to share, please email — Zoe Schiffer


India’s unprecedented decision to ban 59 of China’s largest apps could threaten China’s rise as a global tech power. It also serves as a warning to China’s tech giants, which have thrived behind a government-imposed Great Firewall that kept out many of America’s best-known tech companies. Bloomberg has the story:

The surprise moratorium hit Chinese internet companies just as they were beginning to make headway in the world’s fastest-growing mobile arena, en route to going global and challenging American tech industry supremacy. TikTok had signed up 200 million users there, Xiaomi Corp. is the No. 1 smartphone brand, and Alibaba and Tencent have aggressively pushed their services.

But India’s policy jeopardizes all those successes, and could have wider geopolitical consequences as the U.S. seeks to rally countries to stop using Huawei Technologies Co. for 5G networks. With China’s tech companies poised to become some of the most dominant in emerging industries like artificial intelligence, India’s actions may spur countries around the world to weigh the extent to which they let China gain user data — and potentially economic leverage in future disputes.

A growing number of internet service providers in India started to block their subscribers from accessing TikTok a day after the government banned the app. TikTok, along with 58 other Chinese apps, was banned due to security and privacy concerns. (Manish Singh / TechCrunch)

Months before it’s set to start reviewing content moderation decisions, Facebook’s Oversight Board faces criticism from a nonprofit that says it should already be up and running. Accountable Tech, a progressive nonprofit, launched a campaign to push the board to demand more authority over content decisions. (Olivia Solon / NBC)

Here’s how the Facebook ad boycott started, and all the companies that are now involved. The list now includes Unilever, Verizon, Coca-Cola, Starbucks and Microsoft. (Kim Lyons / The Verge)

People in Hong Kong are self-censoring in anticipation of Beijing’s new national security laws, which outlaw activities related to “separatism, subversion, terrorism and foreign interference” in Hong Kong. Some are deleting their social media profiles entirely. (Kenji Kawase and Michelle Chan / Nikkei Asian Review)

Liu Keqing, a Chinese opera singer, bears a striking resemblance to Xi Jinping, China’s top leader. In China’s increasingly authoritarian system, this resemblance has gotten the attention of Chinese authorities, who’ve started censoring his social media profiles. (Javier C. Hernández / The New York Times)


Organizers behind the viral 👁👄👁 stunt succeeded in funneling hundreds of thousands of dollars from Silicon Valley to racial justice charities. But the campaign also drew criticism from those who felt it was making light of a serious issue. Here’s Arielle Pardes at Wired:

For many on the sidelines, the 👁👄👁 spectacle seemed like internet tomfoolery at its finest. The stunt had taken Silicon Valley’s free-flowing capital and reinvested it in charity. The joke came at the expense of VCs and the tech elite, who had fallen for the trap of chasing the new, shiny thing, and worked for the benefit of organizations that have long struggled for funding. The “eye mouth eye gang” were Twitter’s woke Robin Hood.

Others recoiled at the joke. The longer it went on, the more it began to raise questions about the memeification of social movements. Technology moves fast—it also breaks things. For some, the emoji activism of 👁👄👁 rang hollow, especially at a time when record numbers of protesters have taken to the streets daily to support the Black Lives Matter movement and when serious conversations about diversity in tech are finally starting to happen. “The creation of the culture of lighthearted frivolity around a serious issue turned it into a game, which has no long-term effects,” tweeted venture capitalist Del Johnson. “‘We are building the next Clubhouse … nevermind it’s just about black lives, fooled you.’ Completely disrespectful to the issue at hand.”

Discord rolled out changes to make the platform safer and more inclusive for people outside the gaming industry. It added a safety center with community guidelines meant to govern how people can act on the platform. It also raised a fresh $100 million. (Discord)

Two of beauty YouTube’s biggest stars, Shane Dawson and Jeffree Star, are facing cancellation over allegations of racism and the sexualization of minors. YouTube suspended monetization on Dawson’s channels in the wake of the allegations. (Taylor Lorenz / The New York Times)

TikTok is moving its privacy operations for European users to Ireland and the UK. The company’s Irish and UK entities will take over from TikTok Inc. in the United States. (Ciara O’Brien / The Irish Times)

TikTok’s security measures can be “easily circumvented” to create fake accounts, according to security research firm Ghost Data. The firm’s tests found TikTok will likely become an easier target for bots, which could make it more vulnerable to political disinformation campaigns. (Alex Heath / The Information)

Google bought North, a company focused on building augmented reality glasses. The companies said North’s “technical expertise” will help Google invest in its “hardware efforts and ambient computing future.” (Ashley Carman / The Verge)

Google removed 25 Android applications from the Google Play Store for stealing Facebook credentials. Before being taken down, the apps had collectively been downloaded more than 2.34 million times. (Catalin Cimpanu / ZDNet)

David Heinemeier Hansson, the cofounder of Basecamp and the subscription email service Hey, is a vocal critic of big tech companies on Twitter. His recent attacks on Apple over its App Store policies have gotten widespread support. (Zoë Bernard / The Information)

Things to do

Stuff to occupy you online during the quarantine.

Buy a mask that says “The content of this mask is no longer available due to a copyright claim.” Proceeds from the sale benefit the very good website Techdirt.

Read a history of Facebook’s Oversight Board to date. Kate Klonick has been following it from the beginning, with a great deal of access to all of the principals. She published a comprehensive account of the board’s creation today in the Yale Law Journal.

Read about how Hey evolved. Before it was an email platform and the bane of Apple’s existence, Basecamp’s latest project was just a series of screenshots. Co-founder Jason Fried walks through its development here.

And finally...

Talk to us

Send us tips, comments, questions, and banned content.