Pornhub just removed most of its videos


Pornhub is removing all videos uploaded to its site by unverified users, millions of videos in total, as part of a crackdown on user-uploaded content after two major payment processors suspended service. The decision, first reported by Motherboard, stems from a New York Times report that found the site was hosting videos of people who are underage and videos showing children being assaulted.

The site announced last Tuesday that it would begin limiting uploads to verified users only. Uploads now have to come from official content partners or members of Pornhub’s “Model Program,” which requires age verification to sign up. Motherboard reports that all previously uploaded videos are now being pulled “pending verification and review,” a process set to begin in 2021.

“This means every piece of Pornhub content is from verified uploaders, a requirement that platforms like Facebook, Instagram, TikTok, YouTube, Snapchat and Twitter have yet to institute,” Pornhub wrote in a blog post this morning. A spokesperson said the verification policy would apply to all sites owned by MindGeek, Pornhub’s parent company. The spokesperson didn’t respond to a follow-up about whether those other sites, which include YouPorn and Redtube, would also remove videos from unverified users.

Pornhub appears to have wiped out more than 10 million videos as of this writing. Motherboard said the site boasted 13.5 million videos on Sunday night; it had 2.9 million as of Monday morning. While the site hosts professional videos, it largely operates like YouTube, allowing users to upload videos of their own and make money off the ad revenue. These videos represented the bulk of Pornhub’s content.

Following the Times’ report last week, Visa and Mastercard said they would investigate whether the site was hosting illegal content. On Thursday, both companies cut off service, preventing customers from making purchases on Pornhub through two of the most popular payment methods available. The suspension of service could pose a significant problem for Pornhub, which also sells videos, and for the sex workers who use the platform’s sales as a source of income.

Update December 14th, 10:59AM ET: Added a statement saying Pornhub’s policy will apply to other MindGeek-owned websites and updated the number of videos removed.

Comments

I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror…
JK, good on them to actually go through with this. I hope this leads to more widespread adoption of user verification or traceability. People need to be accountable for what they do online, just as they are offline, especially where it is so easy to ruin someone’s life with a single unwanted upload.

A Federal law requiring all people uploading or sharing content online to verify their identity (on each website or through a universal ID system that other nations could adopt at some point) is something I never thought of, but it’s a move I think the government could actually get away with & implement. Leveraging the child pornography threat would be a solid way to get even many of the staunch privacy conservatives in government on board. You’d theoretically still be able to be anonymous (y’all still won’t know who FORESEE is), but the government would be able to easily identify anyone when needing to trace the source of specific content. I mean, that’s kinda scary on many levels, but I can foresee it happening.

I think verifying identity might be a good tool (but not the only tool) for decreasing fake news and garbage on social media. They could maybe go partway, with something like requiring a verified identity and real name for content shared with more than 200 people (or some well-thought-out metric). Any content just posted and not shared could still be anonymous. Basically, anything that makes money or influences public opinion should have a real name attached to it.

Kind of a scary slippery slope when we can see now what happens when governments go dark side. We were one or two more Trump-type cycles away from losing the free press altogether.

Yea, a law would have to be written that considers loopholes, which would make it a bit harder to approve, but I could see it occurring. There is no free speech within the confines of private companies’ websites, so unless they want to obliterate companies’ right to regulate what type of content & speech they allow on their sites (which would open up a potentially worse Pandora’s box of misinformation that would be impossible to curtail), the more likely scenario is a universal ID system for accountability purposes with regards to illegal content, allowing companies who refuse to use it the option to take complete liability for whatever users post.

for decreasing fake news and garbage on social media.

Whether that is good or bad kind of depends on who is enforcing it, no?

Well, fake news in most instances isn’t a crime. So that’d still be purely up to the platforms to regulate. The real meat and potatoes are those uploading illegal content or plotting illegal acts.

That would be a terrible idea. It would make it completely impossible for anyone to post video anonymously. You might think that that’s a good thing but what happens when someone posts a video critical of the government (not talking about PH here) and the government uses their subpoena powers to force the company to give up the person’s ID and then retaliates against them? Look up the third-party doctrine. Basically the government can reach out to a third party that has your data (like any number of companies) and force them to give it up. Or you say something critical of a company and now they can file a lawsuit and easily subpoena the information from whoever holds it.

I also think such a bill would be dead in the water since courts support the right of people to be anonymous in their speech, especially political speech. Requiring someone to have a license and get identified to post speech online would certainly run afoul of the First Amendment. It would certainly get immediately challenged in court. The only way I could see it working is if it’s managed by an independent third party and the law has specific protections to prevent the government and third parties from running straight to whoever verified it.

I think you could get around it with a bit of tailoring: Sites that allow non-verified uploads can be held liable for displaying child porn or revenge porn or non-consensual acts.

Sites that allow non-verified uploads can be held liable for displaying child porn or revenge porn or non-consensual acts.

Sites are ALREADY liable for displaying child porn and verified non-consensual acts. Technically, I think it is also illegal to show revenge porn, but there are a ton of hoops the victim needs to jump through to verify that a piece of content is revenge porn. The key thing is that sites are given a little bit of leeway in how fast they need to respond to takedown requests. Child porn has to be taken down immediately, I think, but the other categories require some verification that the content is illegal before it can be taken down, and then it’s a game of whack-a-mole since it’s very easy for multiple users to upload illegal content faster than a website can take it down.

What I’m suggesting is not to make it illegal to respond slowly to a takedown, but to make it illegal for the content to be viewable at all if it’s coming from a non-verified account.

What I’m suggesting is not to make it illegal to respond slowly to a takedown, but to make it illegal for the content to be viewable at all if it’s coming from a non-verified account.

If I read your post correctly, you are suggesting that the solution is to simply outlaw content posts by accounts that aren’t linked to someone’s verified real ID.

I am not a lawyer, but I think a US law forcing websites to require accounts be linked to a national ID before they can post would violate the First Amendment and would eventually be thrown out.

That being said, I would personally not be against it if websites decided to self-enforce a rule that all "anonymous" accounts require video participants to display a "consent sign" at the beginning of any video (kind of like the clapper used in film productions).

For better or worse though, semi-anonymous posting is the de facto standard for 99.9999% of the internet, and I think that genie is out of the bottle.

It’d be more of an accountability law: if businesses didn’t implement it, they’d be held responsible for any illegal content shared on their platforms. They’d want to adopt the universal ID system to relinquish that accountability & not have to spend more money and effort on intense content moderation.

I totally agree that sites would want to relinquish accountability and not have to deal with content moderation as it is a really difficult problem to solve.

However, I don’t think they can relinquish accountability without the government stepping in and passing a law to do so, but the moment the government does that… it is effectively controlling moderation, and that brings up legal questions around censorship (in contrast, a company is generally free to moderate its content with limited legal repercussions… although there might be political/financial repercussions).

It’s like a game of hot potato… websites would be only too happy to pass the buck to someone else if they could (most large websites like Facebook basically already do that by hiring a third-party contractor to handle moderation; YouTube has tried ML algorithms with limited success and I think still relies on a third-party moderation contractor firm).

The main issue is that it’s hard for laws to differentiate between people making an honest effort who are being overwhelmed, and bad actors not moderating in good faith. Twitch was caught off guard when the RIAA bulk-filed takedowns covering years of old videos (millions of hours’ worth) with a time limit to comply. A lot of them were probably not legitimate… but Twitch couldn’t review them fast enough and ended up going with the nuclear option of simply deleting all of them to comply. I feel that Pornhub was less honest in its moderation and ended up doing something similar… but externally it’s hard to legally differentiate between the two (I think… maybe there is a way).

There’s no free speech with regards to illegal content. Twitch banning people for behavior is different from anything I’m suggesting. You’re speaking on the everyday moderation of people based on actions that in many cases aren’t illegal. I’m speaking on actions that are distinctly illegal, like child pornography, acts of violence, physical abuse of another individual/creature, murder, threats of violence. Things that are unquestionably illegal.

A universal ID system allows the ability for companies to pass the buck in terms of accountability for illegal content (as long as they are making reasonable efforts to moderate) vs being fully accountable for illegal content posted on their sites.

Someone saying they hate Trump or hate a specific group of people without blatant threats to the lives or livelihoods of that group isn’t illegal, for example. That’s free speech, but a company has the right to have its own code of conduct with regards to whether it allows that kind of speech on its site, as free speech doesn’t exist within the confines of a business. Twitch, in that scenario, could still ban someone for that, but it’d have no relation or impact with regards to some universal ID.

There’s no free speech with regards to illegal content.

I believe you suggested pre-emptive restriction of free speech – that is, mandating by law that no one can post content unless they share their personal identifying information first.

make it illegal for the content to be viewable at all if it’s coming from a non-verified account.

While this is the norm in China in order to have access to Weibo (Chinese Twitter), I believe that is not allowed in the US, based on the semester-long "Censorship and Freedom" course I took in college.

I’m speaking on actions that are distinctly illegal, like … acts of violence, physical abuse of another individual/creature, murder, threats of violence. Things that are unquestionably illegal.

AFAIK, this type of content being left up on the site is not that common percentage-wise. When identified, it typically is taken down relatively quickly (e.g. the Christchurch shooting). From that article I read on Facebook content moderators getting PTSD, I’m pretty sure 99% of illegal acts are taken down even without a national ID system; it’s just that people put it back up even faster. As I mentioned, Google and Facebook have tried to automate this a bit with algorithms, but with limited success, as the PR backlash for overmoderation hurts them as well. What you are suggesting is that websites put up higher standards for registration… and they can do that privately in theory, but obviously that would limit the number of people who join their sites, and that affects them financially. You and I might prefer it this way… but we can’t change that, and for better or worse, the US government isn’t allowed to make a law regarding that either, AFAIK.

Someone saying they hate Trump or hate a specific group of people without blatant threats to the lives or livelihoods of that group isn’t illegal, for example. That’s free speech, but a company has the right to have its own code of conduct with regards to whether it allows that kind of speech on its site, as free speech doesn’t exist within the confines of a business.

Sure, I agree with that. I thought I made that clear in my previous post. Perhaps you can elaborate on what you are trying to say. My point is that I really do wish the companies were more restrictive in the type of content they moderate/ban… but the law doesn’t forbid them, and at the moment they have a financial/PR incentive to be very light on moderation – only stopping the very worst content.
