On Friday, a man released a 73-page racist manifesto and then killed 50 people at two mosques in Christchurch, New Zealand, posting video of the attack to various social media platforms.
The attack was designed to go viral, laced with references meant to gain attention online quickly. Since then, various platforms have taken steps to limit the spread of the alleged shooter’s video, and the attack has raised questions about the role social media plays in spreading messages of hate.
Follow along for all of the updates in the aftermath of the attack.
Facebook Chief Operating Officer Sheryl Sandberg outlined three steps that the company is taking
It’s a dramatic shift in company policy
Facebook and YouTube have helped quash terrorism before, and they can do it again
Footage was re-uploaded 1.5 million times in the 24 hours after the attack
4chan, 8chan, and LiveLeak have been blocked
The platform disabled functions to stop the videos
1.2 million were ‘blocked at upload’
More than 100 profiles utilized the name or picture of the alleged shooter
“We stand in support of our fellow New Zealanders”
Small sites pose complicated questions about stopping hate
Turn off this annoying and potentially harmful feature
The subreddit violated site policy on ‘glorifying violence’
It’s not just a matter of flagging and deleting
Killing in the age of SEO