YouTube says it will crack down on bizarre videos targeting children

Earlier this week, a report in The New York Times and a blog post on Medium drew a lot of attention to a world of strange and sometimes disturbing videos on YouTube aimed at young children. The genre, which we reported on in February of this year, makes use of popular characters from family-friendly entertainment, but it’s often created with little care, and can quickly stray from innocent themes to scenes of violence or sexuality.

In August of this year, YouTube announced that it would no longer allow creators to monetize videos that “made inappropriate use of family friendly characters.” Today it’s taking another step to try to police this genre.

“We’re in the process of implementing a new policy that age restricts this content in the YouTube main app when flagged,” said Juniper Downs, YouTube’s director of policy. “Age-restricted content is automatically not allowed in YouTube Kids.” YouTube says that it’s been formulating this new policy for a while, and that it’s not rolling it out in direct response to the recent coverage.

The first line of defense for YouTube Kids is a set of algorithmic filters. After that, a team of humans reviews videos that have been flagged. If a video with recognizable children’s characters gets flagged in YouTube’s main app, which is much larger than the Kids app, it will be sent to the policy review team. YouTube says it has thousands of people working around the clock in different time zones to review flagged content. If the review finds the video is in violation of the new policy, it will be age-restricted, automatically blocking it from showing up in the Kids app.

YouTube says it typically takes at least a few days for content to make its way from YouTube proper to YouTube Kids, and the hope is that within that window, users will flag anything potentially disturbing to children. YouTube also has a team of volunteer moderators, which it calls Contributors, looking for inappropriate content. YouTube says it will start training its review team on the new policy and it should be live within a few weeks.

Along with filtering content out of the Kids app, the new policy will also tweak who can see these videos on YouTube’s main service. Flagged content will be age restricted, and users won’t be able to see those videos unless they’re logged in on accounts registered to users 18 years or older. All age-gated content is also automatically ineligible for advertising. That means this new policy could put a squeeze on the booming business of crafting strange kids’ content.

YouTube is trying to walk a fine line between owning up to this problem and arguing that the issue is relatively minor. It says that the fraction of videos on YouTube Kids that were missed by its algorithmic filters and then flagged by users during the last 30 days amounted to just 0.005 percent of videos on the service. The company also says the reports that inappropriate videos racked up millions of views on YouTube Kids without being vetted are false, because those views came from activity on YouTube proper, which makes clear in its terms of service that it’s aimed at users 13 years and older.

In today’s policy announcement, YouTube is acknowledging the problem, and promising to police it better. It doesn’t want to outright ban the use of family-friendly characters by creators who aren’t the original copyright holders across all of YouTube. There is a place, the company is arguing, for satire about Peppa Pig drinking bleach, however distasteful you might find it. But YouTube is acknowledging that YouTube Kids requires even more moderation. And, the company is willing to forgo additional ad revenue — and there is a lot of money flowing through this segment of the industry — if that’s what it takes to ensure YouTube Kids feels like a safe experience for families.


That Medium post is top quality and you all should read it. It really puts the YouTube rabbit hole into perspective, each video’s recommendations leading further and further into the depths of inanity, an increasingly stark space devoid of any meaningful content, possibly run by an advanced botnet hibernating within the world’s shared connected space, learning how young humans think and process information in an attempt to either prepare itself for war with mature humans, or to condition them to respond favorably to pacifying stimuli on command as a method of preemptive mental warfare.



buy gold

You mean it’s not just the condition of a postcapitalist neoliberal dystopia?

I almost blacked out

Longest. Sentence. Ever.

This was just… sublime.

And, the company is willing to forgo additional ad revenue

Can someone explain this? One video removed is just another video served; YouTube’s ad revenue doesn’t change. If it’s mentioned in some PR from Google, it would help to quote their wording or something.

Also, I hate this "0.005%" bs excuse from YouTube. All it takes is a few that slip past the filter to mess with hundreds or thousands of kids, for a site at the scale of YouTube.

It’s all about the views… ’bout the views, ’bout the views… not ad servers.
The new video replacing the "weird" one may not have the same number of views.
Not sure how ads are matched up with content, but if it is not a completely randomized mix and match, the number of views for a channel may have an impact on certain types of ads…

I don’t see how views are any different, unless the weird ones somehow get more views than the non-weird ones. If anything, the non-weird videos would have more views? You could be right that some channels may allow more advertising, or more lucrative ads, and those channels serve more weird videos. But that’s not a views situation.

And they weren’t before?

YouTube, for once, shouldn’t rely on users flagging because that first flag could be the parent of a traumatised child.

YouTube needs to actually hire people to moderate content and to review videos that get demonetised. If they don’t do this, they’re just going to go from PR crisis to PR crisis with a permanent stain on their reputation after each one.

There are over 4000 hours uploaded to YouTube each minute, according to YouTube. Monitoring them manually just isn’t humanly (or business-wise) possible. Just for YouTube Kids, however, that should be more viable.
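As a rough sanity check on that claim, here’s a minimal back-of-envelope sketch. The upload rate is the figure cited above; the shift length and review speed are illustrative assumptions, not anything YouTube has published:

```python
# Back-of-envelope: how many full-time reviewers would manual review take?
HOURS_UPLOADED_PER_MINUTE = 4000   # figure cited above
MINUTES_PER_DAY = 24 * 60

hours_uploaded_per_day = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_DAY

SHIFT_HOURS = 8     # assumed reviewer shift length
REVIEW_SPEED = 1.0  # assumed: 1 hour of footage reviewed per working hour

reviewers_needed = hours_uploaded_per_day / (SHIFT_HOURS * REVIEW_SPEED)
print(f"{hours_uploaded_per_day:,} hours/day -> {reviewers_needed:,.0f} reviewers")
# -> 5,760,000 hours/day -> 720,000 reviewers
```

Even if reviewers could skim at several times real speed, the headcount stays in the hundreds of thousands, which is why a pre-screened Kids catalog is the only place manual review plausibly scales.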

Pretty sure it won’t be viable even for YouTube Kids; the amount of super long content there is insane, and it’s pumped out at insane speeds by automated processes.

Because flagging is mostly used in a non-retaliatory manner by users or other entities. Or, you know, it is really easy to vet people against biases toward or against religion or atheism (or the range in between), or race, or sexuality, or political views.

That’s impossible. First of all, there’s just too much content uploaded to YouTube for that to happen. Secondly, at that point YouTube legally becomes a publisher, which gets them into a huge legal rabbit hole.

Honestly, YouTube should have an actual person review the videos before they make it to YouTube kids. Period. When it first launched, I assumed this was happening. I’m really not pleased to find that’s not the case. Frankly, it should be a much more curated experience. Not only should they filter out anything inappropriate, but they should also filter out the absolute crap.

1 person should be able to manage that aye…

Do you know how many days of video get uploaded to YouTube every second?

Yes, but that shouldn’t matter. (I don’t think GoodTroll meant literally one person).

But if it means that YouTube Kids starts off with only a few thousand hours of content from trusted sources (PBS, for example) and it has to add some more each day or something, so be it. Real people—and like the article says, there are thousands—should be verifying every single video available on YouTube Kids.

Yep, this is exactly what I meant. I’m not saying that they should review every video on the full YouTube, just those that they’re channeling into the kids app.

I misread; you were referring to "YouTube Kids". I agree, it should be a fully curated section of YouTube.

Give zero fucks. It’s positioned by Google as OK for kids. To ensure that is the case they need people to vet content that makes it to that portal. They need to actively curate the content.

Bots can’t do it.

I’m so sick and tired of apologists arguing that it’s "impossible" for YouTube to police its content. It’s not. It’s as simple as this:

1. Block uploading for everyone except the most trusted YouTube partners.
2. When someone tries to upload, announce the moratorium, then put up a big ass window that announces that there are "new rules" and there will be a zero tolerance policy if anyone breaks them. Make everyone read each of these rules one by one and not advance to the next one until they have clicked a button. Heck, make everyone take a quiz, too.
3. Don’t allow anyone to upload until they’ve clicked a button that says "I understand the new rules, and understand that YouTube may close my channel at any time."

This isn’t going to stop everyone determined to post crap on YouTube, but the measure would definitely give many people pause before uploading and cut down on these types of videos considerably.

You have it all figured out eh? Maybe they should just hire you!

In all seriousness though, what you suggested won’t help in the slightest. There is frankly no permanent solution to the problem described in the article. No matter how many hoops you make people jump through, if the end result is profitable, they will find a way to play the system.

You have it all figured out eh? Maybe they should just hire you!

I don’t have anything figured out. I just know that this is something similar to what other websites have done in the past to scare off abusers and trolls, and it worked.

Name the other websites that are similar in function and size to YouTube.

I don’t think you realize the world we live in. Bots will fly through those hoops with ease, and continue uploading to 5 separate accounts. When one gets taken down, 5 more will replace it.

YouTube is like its own internet. It is so huge, there is no such thing as having control over YouTube, any more than there is such a thing as having control over the internet. Things will just keep popping up all over.

Bottom line: anything under the ‘kids safe’ banner should be checked before it becomes visible. I’ve banned my kids from YouTube unless I can see what they are watching. It’s so incredibly unreliable, and the ‘suggestions’ that appear are just insane!
There’s no price trade off IMO. When (most of us parents) were kids the content we viewed was controlled.
What’s changed in society that makes a 5 yr old seeing Elsa and Spiderman touch each other inappropriately suddenly ok?

Name the other websites that are similar in function and size to YouTube.

This is such a loaded challenge that I wouldn’t even begin to know how to address it.

But I have to start somewhere. You are challenging me to name other sites similar to YouTube, to back up your assertion that what worked for other sites won’t work for it, because it’s become "too big to manage."

The problem with your challenge is that it’s based on a very cleverly crafted lie that Big Tech came up with to mislead everyone about its current issue with problem users. So if I say, "No, I can’t name other sites as big as YouTube," you will see it as confirmation that YouTube has become too big to manage and is completely helpless in keeping problem users from wreaking havoc on the site. But it wouldn’t be confirmation of a "truth." It would just be me confirming your belief in Big Tech’s lie.

What is the lie? YouTube and Big Tech are misrepresenting why they have so many problem users. They are trying to paint the picture that they’ve grown so large that they can’t vet or police everyone. But in truth, the opposite is the case. They’ve grown so large because they refuse to vet or police anyone.

I’ll give you an analogy. Say there are two extremely popular nightclubs that each have the capacity to hold a maximum of 200 people a night. Club A has signs of club rules on every wall, has bouncers everywhere, has a doorman blocking suspicious people from entering and a zero tolerance policy towards drug use or violence. It sees, at the most, 1200 patrons a week because it regularly turns away or kicks out 200-300 people. But if it wanted, it could have up to 1400 patrons a week easily, maybe double that.

Club B has an open door policy and no rules. Anyone can walk in—no matter how suspicious. Anyone can bring in whatever they want. There are no club rules posted anywhere, and no one gets thrown out for bad behavior except in the most egregious cases (rape, stabbing, etc.). It sees 1800-2200 patrons a week (meaning, it allows itself to go over capacity) and it turns away no one and bans no one.

Club A has maybe a handful of incidents a year, the worst being two drunken women who got into a cat fight and started pulling each other’s hair. Club B has brawls, sexual assault, stabbings and OD’s on a monthly basis.

Why is Club B having so many problems? Is it because it has "too many patrons" to manage its club effectively? Or is it just plain fact that it implemented a lax policy that allowed any Tom, Dick or Harry to enter and do whatever they wanted? Are the "too many patrons" proof that Club B has grown too large to manage? Or are the "too many patrons" the result of it not actively turning away or banning problem patrons?

A superb post. Thank you.
