
Why can’t YouTube automatically catch re-uploads of disturbing footage?


It’s not just a matter of flagging and deleting


Photo: Multiple Fatalities Following Christchurch Mosque Shootings. Kai Schwoerer/Getty Images

After a man used Facebook to live stream his attack on two New Zealand mosques last night, the video quickly spread to YouTube. Moderators fought back, trying to take the horrific footage down, but new uploads of the video kept appearing. It led many observers to wonder: since YouTube has a tool for automatically identifying copyrighted content, why can’t it automatically identify this video and wipe it out?

Exact re-uploads of the video are banned by YouTube, but videos that contain only clips of the footage have to be sent to human moderators for review, The Verge has learned. Part of the reason is to ensure that news segments that use a portion of the footage aren’t removed in the process.

YouTube’s safety team sees it as a balancing act, according to sources familiar with its thinking. For major news events like yesterday’s shooting, the team uses a system that’s similar to its copyright tool, Content ID, but not identical. It scans new uploads for metadata and imagery that match the original video. If an upload is an unedited copy, it’s removed. If it’s been edited, the tool flags it to a team of human moderators, both full-time YouTube employees and contractors, who determine whether the video violates the company’s policies.
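YouTube hasn’t said exactly how that matching works. As a rough, hypothetical sketch of the triage the sources describe (fingerprint the frames, check the metadata, remove unedited copies automatically, and route edited clips to people), the logic might look something like this; every threshold, field name, and keyword below is invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class Upload:
    title: str
    description: str
    frame_hashes: list[int]  # one 64-bit perceptual hash per sampled frame


def hamming(a: int, b: int) -> int:
    """Count the differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")


def frame_match_ratio(upload: Upload, reference: Upload, max_distance: int = 10) -> float:
    """Fraction of the reference's sampled frames found near-identically in the upload."""
    matched = sum(
        1
        for ref_hash in reference.frame_hashes
        if any(hamming(ref_hash, up_hash) <= max_distance for up_hash in upload.frame_hashes)
    )
    return matched / len(reference.frame_hashes)


SUSPECT_TERMS = ("christchurch", "mosque", "livestream")  # illustrative keywords only


def triage(upload: Upload, reference: Upload) -> str:
    """Auto-remove unedited copies; send edited or partial copies to human review."""
    ratio = frame_match_ratio(upload, reference)
    text = f"{upload.title} {upload.description}".lower()
    metadata_hit = any(term in text for term in SUSPECT_TERMS)

    if ratio > 0.95:                  # essentially the whole original video
        return "auto_remove"
    if ratio > 0.05 or metadata_hit:  # contains clips, or the metadata looks related:
        return "human_review"         # it could be a news segment, so a person decides
    return "allow"
```

The important point is the middle branch: anything that only partially matches, or merely looks related, goes to a person rather than being deleted outright, which is how legitimate news coverage survives the sweep.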

YouTube also has a system for immediately removing child pornography and terrorism-related content by fingerprinting the footage with a hash system. But that system isn’t applied in cases like this because of the potential for newsworthiness; YouTube considers the removal of newsworthy videos to be a serious harm in its own right. The company prohibits footage that’s meant to “shock or disgust viewers,” which can include the aftermath of an attack. If the footage is used for news purposes, however, YouTube says it’s allowed but may be age-restricted to protect younger viewers.
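The fingerprinting is described only in general terms, and the hash systems YouTube and other platforms use aren’t public. One common technique in this family is a perceptual “difference hash,” which survives re-encoding and resizing; the sketch below uses it purely to illustrate how a single frame can be reduced to a 64-bit fingerprint and checked against a blocklist (Pillow is assumed for image handling, and the function names are hypothetical):

```python
from PIL import Image  # Pillow


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Reduce a video frame (saved as an image) to a 64-bit difference hash."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def matches_blocklist(frame_hash: int, blocklist: set[int], max_distance: int = 6) -> bool:
    """True if the frame is within a few bits of any previously fingerprinted frame."""
    return any(bin(frame_hash ^ banned).count("1") <= max_distance for banned in blocklist)
```

Because compressing or resizing a video changes such a hash by only a few bits, comparing bit distance rather than exact equality lets a single fingerprint catch many slightly altered copies.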

“They never intended to have something like this in place for these types of events.”

The other issue is that YouTube’s Content ID system isn’t built to deal with breaking news events. Rasty Turek, CEO of Pex, a video analytics platform that is also working on a tool to identify re-uploaded or stolen content, told The Verge that the problem lies in how the product is implemented. Turek, who closely studies YouTube’s Content ID software, points out that it takes 30 seconds for the software even to register whether something is a re-upload before it can be handed off for manual review. A YouTube spokesperson could not tell The Verge whether that number was accurate.

“They never intended to have something like this in place for these types of events,” Turek said. “They may after these types of situations, but that’s still going to take months to implement even if [CEO] Susan Wojcicki orders it done today.”

YouTube’s Content ID tool takes “a couple of minutes, or sometimes even hours, to register the content,” Turek said. That’s normally not a problem for copyright issues, but it poses real problems when applied to urgent situations.

Turek says the pressure to do more, and do it faster, is growing. “The pressure never used to be as high,” Turek said. “There is no harm to a society when copyrighted things or leaks aren’t taken down immediately. There is harm to a society here though.”

The next big roadblock, one that both YouTube and Turek agree on, is catching live streams as they happen. That’s nearly impossible, according to Turek, because the content in a live stream is constantly changing.
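The limitation is easy to see in a sketch: a hypothetical real-time filter can only compare each incoming segment against fingerprints it already knows about, and for an original live stream there is nothing in the blocklist yet. None of the names below reflect YouTube’s actual pipeline:

```python
def moderate_live_stream(segments, known_fingerprints, fingerprint_fn, max_distance=6):
    """Check each short segment of a live stream against known bad fingerprints."""
    for segment in segments:           # segments arrive continuously while the stream is live
        fp = fingerprint_fn(segment)   # fingerprinting itself adds per-segment latency
        if any(bin(fp ^ bad).count("1") <= max_distance for bad in known_fingerprints):
            yield "block_segment"
        else:
            # Original footage passes: its fingerprint doesn't exist anywhere yet, and by
            # the time moderators create one, the stream has already moved on.
            yield "serve_segment"
```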

“You can blame YouTube for many things, but no one on this planet can fix live-streaming right now.”

That’s why live-streaming is considered a high-risk area for YouTube. People who violate the rules on live streams, sometimes caught by Content ID only after the stream is over, lose their streaming privileges, because it’s an area YouTube can’t police as thoroughly. Teams at YouTube are working on the problem, according to the company, but they acknowledge it’s a very difficult one. Turek agrees.

“No one can identify this outright,” Turek said. “You can blame YouTube for many things, but no one on this planet can fix live-streaming right now.”

For now, YouTube is focused on combing through every video whose metadata and imagery resemble the shooter’s original live stream and determining what’s newsworthy and what violates its rules. It may be all the company can manage right now, but it’s not enough for critics like Turek.

“This needs to be a priority for the leadership,” he said. “Once they decide this is a priority and give proper resources to the team, the team will solve it. No question about it.”