YouTube dealt with an “unprecedented volume” of videos after last week’s mass shooting in New Zealand, as the platform struggled to remove uploads of the footage, YouTube’s chief product officer told The Washington Post.
Friday’s killings at two mosques in Christchurch were recorded and posted to social media channels around the world, seemingly as part of a plan designed to spread the footage online. As the footage made its way around the internet, it was reuploaded repeatedly. Yesterday, Facebook said it removed 1.5 million videos of the attack in the first 24 hours after the shooting.
While YouTube did not say precisely how many videos it ultimately removed, the company faced a similar flood of videos after the shooting, and moderators worked through the night to take down tens of thousands of videos with the footage, chief product officer Neal Mohan told the Post. Some uploads were reportedly altered to evade detection, as users tweaked the footage slightly to prevent automated tools from flagging it.
Copies were reportedly uploaded as quickly as one per second, and the platform eventually disabled some searches to limit their visibility. YouTube also cut off some human review features to speed the process, Mohan told the Post. (The service said on Friday that it was sending potentially newsworthy videos containing clips of the footage to humans for review.)
Social media companies are facing new questions about platform moderation after the shooting, which spread not only on the biggest services but also in the darkest corners of the internet. While the services have said the incident was unprecedented, the spread of the shooting video has led lawmakers to call on the companies to do more to police their platforms.