On Friday, a man released a 73-page racist manifesto and killed 50 people at two mosques in Christchurch, New Zealand. He broadcast the attack live, and video of it subsequently spread across various social media platforms.
The attack was designed to go viral, laced with internet references intended to quickly gain attention online. Since then, various platforms have taken steps to limit the spread of the alleged shooter’s video, and the attack has raised questions about the role that social media plays in spreading hate.
Follow along for all of the updates in the aftermath of the attack.
Mar 30, 2019
Earlier today, the New Zealand Herald published a letter from Facebook COO Sheryl Sandberg addressing the company’s response to the deadly terror attack in Christchurch two weeks ago. In the letter, she lays out three steps the company is taking, including that it’s “exploring restrictions” for live video.
She described the attack as “an act of pure evil,” said the company is “committed to reviewing what happened,” and noted that it is working with the country’s authorities. In the aftermath of the attack, Facebook says it removed 1.5 million videos of the attack around the world, with 1.2 million blocked “at upload.” In her letter, Sandberg says that while Facebook moved quickly to remove the video and the perpetrator’s account, the company could do more. We’ve reached out to Facebook for clarification on the letter, and will update this post if we hear back.
Mar 27, 2019
In a dramatic shift in policy, Facebook announced today that it is banning white nationalism and separatism on its platform. The decision, first reported by Motherboard, comes just under two weeks after a white supremacist killed 50 people in Christchurch, New Zealand.
In a blog post, Facebook said that any “praise, support, and representation of white nationalism and separatism” would be banned when the new policy goes into effect next week. “It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services,” the post read.
Mar 21, 2019
After last week’s horrific terrorist attack in New Zealand, early commentary focused on how the shootings at two Christchurch mosques seemed to be purpose-built for spreading on social media. “A mass shooting of, and for, the internet,” Kevin Roose called it in the New York Times:
As Roose notes, the alleged killer promoted the attack on Twitter and 8chan, and broadcast it live on Facebook. Facebook took down the original video, but not before it could be copied and widely shared. Over the next 24 hours, it would be uploaded to Facebook another 1.5 million times, of which Facebook was able to remove 1.2 million copies at the time of uploading. The same thing was happening simultaneously on YouTube, but the company would not share any numbers that might describe the scale of its challenge.
Mar 19, 2019
Facebook says footage of the Christchurch shooting was viewed just 200 times during its live broadcast, and 4,000 times in total before it was removed. The company also says it didn’t receive any user reports about the live stream until 12 minutes after the video had ended.
Over the 24 hours following the initial broadcast, people attempted to re-upload the video 1.5 million times. The disparity highlights the specific challenges facing the social network, especially when an attack like Christchurch’s is made to go viral. Videos of the attack were cross-posted across various social networks, and links to the live stream and a manifesto were posted on 8chan prior to the attack.
Mar 18, 2019
Following the Christchurch shooting last week, internet service providers in New Zealand are blocking access to websites that do not respond to, or refuse to comply with, requests to remove re-uploads of the shooter’s original live stream.
According to Bleeping Computer, sites like 4chan, 8chan, LiveLeak, and the file-sharing site Mega have all been blocked by ISPs like Vodafone, Spark, and Vocus. The ISPs appear to be blocking access at the DNS level to sites that do not respond to the takedown requests, but it’s unclear how effective the blocks will be. Like most web-level blocks, the restrictions are easy to circumvent with a VPN or alternative DNS settings.
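The reports don’t detail the mechanics, but a DNS-level block only works when clients rely on the ISP’s own resolver, which is why switching resolvers defeats it. As an illustrative sketch (not tied to any specific ISP’s implementation), here is what a DNS question looks like on the wire per RFC 1035 — the same packet can be sent to any resolver the user chooses:

```python
import struct

def build_dns_query(hostname: str, query_id: int = 0x1234) -> bytes:
    """Build a wire-format DNS query for an A record (RFC 1035)."""
    # Header: ID, flags (standard query, recursion desired),
    # QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label prefixed by its length, ending in a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN)
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

packet = build_dns_query("example.com")
```

Sending a packet like this over UDP port 53 to a public resolver instead of the ISP’s default is essentially all that “alternative DNS settings” means at the protocol level, which is why DNS-only blocks are considered easy to sidestep.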
Mar 18, 2019
YouTube dealt with an “unprecedented volume” of videos after last week’s mass shooting in New Zealand, as the platform struggled to remove videos with the footage, YouTube’s chief product officer told The Washington Post.
Friday’s killings at two mosques in Christchurch were recorded and posted to social media channels around the world as part of a plan that was seemingly designed to spread the footage online. As the footage made its way around the internet, it was reuploaded repeatedly. Yesterday, Facebook said it removed 1.5 million videos of the attack in the first 24 hours after the shooting.
Mar 17, 2019
In the first 24 hours after the deadly mass shooting in New Zealand, Facebook says it removed 1.5 million uploaded videos of the attack, 1.2 million of which were blocked “at upload.”
The company made the announcement in a tweet, following up on a prior announcement that it had been alerted by authorities and had removed the alleged shooter’s Facebook and Instagram accounts. Facebook spokeswoman Mia Garlick says the company is also “removing all edited versions of the video that do not show graphic content.”
Mar 16, 2019
Valve removed more than 100 tributes to the alleged white supremacist responsible for the mass shooting at two mosques in Christchurch, New Zealand, from Steam profiles, says Kotaku. In the days after the attack, the site found that numerous users had updated their profiles to include the name or image of the shooter, and in one case, a GIF of the attack.
Kotaku says it initially saw 66 profiles paying tribute to the shooter in the aftermath of the attack, which claimed 50 lives; the number of tributes later surpassed 100. After Kotaku contacted Valve for comment, the tributes were removed, although users were continuing to praise the shooter. Kotaku also says that “hundreds of pages continue to nod towards past mass shooters including perpetrators of massacres in Charleston, Isla Vista and Parkland and of the 2011 mass killing in Norway,” using the killers’ names or images. We have reached out to Valve for comment, and will update this post if we hear back.
Mar 16, 2019
When news broke of the mass shooting in Christchurch, New Zealand that left at least 49 dead Thursday evening, I started watching Twitter. Two surprising themes stood out: people urging their social media followers to protect themselves from accidentally watching an extremely graphic 17-minute video of the attack — and separately, New Zealand and Australian locals condemning local channel Sky News Australia for intentionally sharing parts of that same footage with its viewers.
Now, broadcaster Sky New Zealand has taken the drastic step of removing Sky News Australia from its broadcasts. “We stand in support of our fellow New Zealanders and have made the decision to remove Sky News Australia from our platform until we are confident that the distressing footage from yesterday’s events will not be shared,” Sky NZ wrote in a since-deleted tweet, archived by BuzzFeed.
Mar 15, 2019
In the wake of a hate-fueled mass shooting in Christchurch, New Zealand, major web platforms have scrambled to take down a 17-minute video of the attack. Sites like YouTube have applied imperfect technical solutions, trying to draw a line between newsworthy and unacceptable uses of the footage.
But Facebook, Google, and Twitter aren’t the only places weighing how to handle violent extremism. And traditional moderation doesn’t affect the smaller sites where people are still either promoting the video or praising the shooter. In some ways, these sites pose a tougher problem — and their fate cuts much closer to fundamental questions about how to police the web. After all, for years, people have lauded the internet’s ability to connect people, share information, and route around censorship. With the Christchurch shooting, we’re seeing that phenomenon at its darkest.
Mar 15, 2019
You’ve probably been caught off guard by videos that play automatically on Facebook, Twitter, and other services, and across the internet in general. They begin playing as soon as you load a page or (if they’re more deviously implemented) when you start scrolling through it, to catch your attention.
Automatic video play is a feature that, while nice to have when it’s surfacing content that’s related to your interests, can be pretty annoying. Autoplay videos can be harmful, too, exposing you to violent, offensive, or otherwise unwanted content that you shouldn’t have to see by default. Several browsers, like Google Chrome and Firefox, now have built-in measures to curb autoplay videos, but for the most part, turning them off is still a very manual process.
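As one concrete example of those built-in measures (preference names and values change between browser versions, so treat this as an assumption about current builds): Firefox exposes a hidden preference in about:config that blocks all autoplay, including muted video, rather than only audible autoplay:

```
media.autoplay.default = 5    (0 = allow, 1 = block audible, 5 = block all)
```

Chrome’s controls are coarser, which is part of why turning autoplay off across the web remains a mostly manual, per-site process.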
Mar 15, 2019
Reddit has banned r/watchpeopledie, an infamous subreddit that hosted videos of people dying gruesomely. The ban comes after the subreddit re-hosted videos of the recent mosque massacres in New Zealand. According to the new landing page, the subreddit was banned for violating Reddit’s content policy about glorifying or encouraging violence.
A Reddit spokesperson provided this statement: “We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit. Subreddits that fail to adhere to those site-wide rules will be banned.”
Mar 15, 2019
After a man used Facebook to live stream his attack on two New Zealand mosques last night, the video quickly spread to YouTube. Moderators fought back, trying to take the horrific footage down, but new uploads of the video kept appearing. It led many observers to wonder: since YouTube has a tool for automatically identifying copyrighted content, why can’t it automatically identify this video and wipe it out?
YouTube will automatically block exact re-uploads of the video, but videos that contain clips of the footage have to be sent to human moderators for review, The Verge has learned. That is partly to ensure that news segments using a portion of the video aren’t removed in the process.
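YouTube hasn’t said exactly how it matches re-uploads, but fingerprinting systems of this kind are generally described as reducing each frame to a compact hash that survives re-encoding, then comparing hashes. A minimal, illustrative sketch using a difference hash (dHash, a well-known perceptual hashing technique — not YouTube’s actual algorithm):

```python
def dhash(pixels):
    """Difference hash: compare horizontally adjacent pixels in a
    9x8 grayscale thumbnail, yielding a 64-bit fingerprint."""
    bits = 0
    for row in pixels:              # 8 rows
        for x in range(8):          # 9 columns -> 8 comparisons per row
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A synthetic 9x8 "frame" and a brightness-shifted copy, standing in
# for an original upload and a re-encode of the same shot.
frame = [[(x * 7 + y * 13) % 256 for x in range(9)] for y in range(8)]
re_encoded = [[p + 2 for p in row] for row in frame]

# A uniform brightness shift preserves every pixel comparison,
# so the fingerprints match exactly.
assert hamming(dhash(frame), dhash(re_encoded)) == 0
```

Hashes like this make exact and near-exact copies cheap to catch at upload time; the hard cases are the ones the report describes — clips embedded in otherwise-distinct videos, such as news segments — which is why those still go to human reviewers.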
Mar 15, 2019
The horrific shooting at two mosques in Christchurch, New Zealand, was designed from the start to get attention, leveraging social media to make sure as many people as possible would hear about the deaths and the hate underpinning them. Officials have reported that a “significant” number of people are dead from the attacks at the two mosques. Several people have been arrested so far. New Zealand police have told people to avoid mosques and told mosques to “shut their doors.”
A 17-minute video that seemed to show the shooting was posted to Facebook, YouTube, Twitter, and Instagram. A post on 8chan, a message board, included links to a manifesto and a Facebook page where the poster — an alleged shooter — said a live stream of the attack would be broadcast. Facebook has removed the page and the video, but the video had already traveled.