
Porn: you know it when you see it, but can a computer?

Training an artificial intelligence to recognize nudity is more difficult than you think



Early last month, Tumblr announced that it would ban porn. When the new content policy went into effect around two weeks later, on December 17th, it was immediately obvious that there would be problems. As soon as it was deployed, the AI system Tumblr had chosen to oversee its first wave of moderation began erroneously flagging innocent posts across the site’s 455.4 million blogs and 168.2 billion posts: vases, witches, fishes, and everything in between.

While it’s not clear what automated filter Tumblr was using, or whether it had created its own — the company did not respond to a request for comment for this story — it’s evident that the social network had been caught flat-footed in both its policies and its technology. The site’s inconsistent stances on “female-presenting nipples” and artistic nudity, for example, are context-specific decisions that show Tumblr isn’t even sure what it wants to ban from its platform. How does a private company define what it considers obscene?

Risqué content is hard to block in part because it’s difficult enough to decide what it even is. Defining obscenity is a bear trap that dates back to around 1896, when the United States first adopted laws regulating obscenity. In 1964’s Jacobellis v. Ohio, a court case about whether Ohio could ban the showing of a famed Louis Malle film, the Supreme Court produced what is probably the most famous line on hardcore pornography: “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so,” said Justice Potter Stewart in his concurring opinion. “But I know it when I see it, and the motion picture involved in this case is not that.”

How does a private company define what it considers obscene?

Machine learning algorithms have the same problem. It’s one that Brian DeLorge, CEO of Picnix, a company that sells customized AI technology, is trying to solve. One of its products, Iris, is a client-side application meant specifically to detect pornography in order “to help folks,” as DeLorge says, “who don’t want porn in their life.” He pointed out to me that the other problem is that porn can be so many different things — and images that aren’t porn share features with images that are. A picture of a party on the beach might be blocked not because it shows more skin than a photograph of an office does, but because it’s borderline. “That’s why it is very difficult to train an image-recognition algorithm to be a broadly speaking silver bullet of a solution,” DeLorge says. “Really when the definition becomes hard for humans, that’s when machine learning also has difficulty.” If people can’t agree on what is or isn’t porn, can a computer ever hope to learn the difference?

To teach an AI how to detect porn, the first thing you have to do is feed it porn. Lots and lots of porn. Where do they get it? “One of the things people do is they just download a bunch of stuff from Pornhub, XVideos,” says Dan Shapiro, co-founder and CTO of Lemay.ai, a startup that creates AI filters for its customers. “It’s one of those kind of legal gray areas where, like, if you’re training on other people’s content, does it belong to you?”

After you’ve got a training data set from your favorite porn site, the next step is to rip out all the frames from the videos that aren’t explicitly porn to make sure that the frames you’re using “are not, like, a guy holding a pizza box.” Platforms pay people in places mostly outside of the US to label that content; it’s often low-wage and repetitive, and it’s the same kind of work that you do every time you complete a CAPTCHA. “They’ll just go through and go like ‘this is this kind of porn,’ ‘this is that kind of porn.’ You can filter it down a little bit, just because porn has so many good tags on it already,” he says. Training tends to go better when you use a big data set that’s representative of the stuff you specifically don’t want to see, which isn’t just explicit photos.
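None of the companies in this story shared their pipelines, but the frame-ripping step Shapiro describes is conceptually simple. Here is a minimal sketch, assuming OpenCV is available and that one frame per second is enough granularity for the people doing the labeling; the paths and sampling rate are illustrative, not anyone’s actual setup.

```python
# Sketch: sample one frame per second from a video so human labelers can
# sort the explicit frames from "a guy holding a pizza box."
# Paths and the sampling rate are illustrative assumptions.
import os
import cv2  # OpenCV

def extract_frames(video_path, out_dir, frames_per_second=1):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if metadata is missing
    step = max(int(fps / frames_per_second), 1)

    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved  # number of frames handed off for human labeling
```

Everything the function returns still has to pass through human hands: the frames only become training data once someone has tagged them.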

“A lot of time, you’re not just filtering for porn, you’re filtering for stuff that’s porn adjacent,” Shapiro says. “Like these fake profiles that people put up that are like a picture of a girl, and then a phone number to call.” Here he’s referring to sex workers looking for clients, but it could easily be anything else that’s questionably legal. “That’s not porn, but it’s stuff you don’t want on your platform, right?” A good automated moderator is trained on millions — if not tens of millions — of explicit pieces of content, which means a good deal of human effort has gone into the model.

“This is very analogous to how a child and an adult are different,” says Matt Zeiler, CEO and founder of Clarifai, a computer vision startup that does this kind of image filtering for corporate clients. “I can say this for a fact — we just had a baby couple months ago. They don’t know anything about the world, everything’s new.” You have to show the baby / algorithm so much for them to learn anything. “You need millions and millions of examples, but an adult — now that we’ve built up so much context about the world and understand how it works — we can learn something new with just a couple examples,” he says. (To repeat: training an AI to filter adult content is like showing a baby a ton of porn.) Today, AI filter companies like Clarifai are the grown-ups. They have a good amount of base knowledge about the world, which is to say they know what dogs look like, what cats are, what is and is not a tree, and, for the most part, what is and is not nudity. Zeiler’s company uses its models to train new ones for its customers — because the original model has processed more data, the customized versions only need new training data from the client to get up and running.
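Zeiler didn’t describe Clarifai’s internals, but the baby-to-adult pattern he’s describing is what practitioners generally call transfer learning: start from a model that already “knows” the visual world and retrain only a small part of it on the client’s data. A minimal sketch of the idea, assuming PyTorch and an ImageNet-pretrained backbone stand in for the base model (the backbone choice, class count, and hyperparameters are all assumptions):

```python
# Sketch: adapt a pretrained "adult" vision model to a new client's
# two-way task (allowed vs. not allowed) using a small amount of data.
# The backbone, class count, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Base model that has already "built up context about the world."
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the general-purpose layers; only the new head will learn.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the client's labels.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=3):
    """loader yields (images, labels) batches from the client's training data."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```

Because most of the network is frozen, the client’s comparatively small data set only has to teach the new final layer — which is why the customized versions can get up and running quickly.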

Training an AI to filter adult content is like showing a baby a ton of porn

Still, it’s hard for an algorithm to get everything right. With content that’s clearly pornographic, classifiers work really well, but one might incorrectly flag an underwear ad as explicit because there’s more skin in the picture than there is in, say, an office. (Bikinis and lingerie are, as Zeiler tells me, difficult.) Which means the people doing the labeling have to focus on those edge cases in their work, prioritizing what the model finds difficult to categorize. One of the hardest?

“Anime porn,” says Zeiler. “The first version of our nudity detector was not trained on any cartoon pornography.” A lot of the time the AI would fail because it didn’t recognize hentai for what it was. “And so once we worked for that customer, we got a bunch of their data incorporated into the model and it drastically improves the accuracy on the cartoons while preserving the accuracy on a real photo,” Zeiler says. “You don’t know what your users are going to do.”
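Zeiler didn’t say exactly how Clarifai folded the cartoon data in, but the usual way to add a new domain without degrading the old one is to retrain on a mix of both. A minimal sketch, continuing the PyTorch assumptions from the earlier snippet (photo_dataset, cartoon_dataset, and fine_tune are placeholders for the existing data and training loop):

```python
# Sketch: combine the original photographic training set with the new
# cartoon/hentai data so the model improves on drawings without losing
# accuracy on real photos. photo_dataset and cartoon_dataset are assumed
# to be labeled torch Datasets; fine_tune() is the loop sketched above.
from torch.utils.data import ConcatDataset, DataLoader

combined = ConcatDataset([photo_dataset, cartoon_dataset])
fine_tune(DataLoader(combined, batch_size=64, shuffle=True))

# Then evaluate on held-out photo and cartoon sets separately to confirm
# that accuracy improved on cartoons and didn't regress on photos.
```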

The tech used to sniff out porn can be used to detect other things, too; the technology underlying these systems is remarkably flexible. It’s bigger than anime boobs. Perspective, from Alphabet’s Jigsaw — formerly Google Ideas, the company’s moonshot maker — is in wide use as an automated comment moderator for newspapers. Dan Keyserling, the head of communications for Jigsaw, told me that before Perspective, The New York Times only had comments open on about 10 percent of its pieces because there’s a limit to how much its human moderators could process in a day. He claims Jigsaw’s product has allowed that number to triple. The software works similarly to the image classifiers, except that it sorts for toxicity — which Jigsaw defines as the likelihood that someone will leave a conversation because of a comment — instead of nudity. (Toxicity is just as tricky to identify in text comments as pornography is in images.) Facebook uses the same kind of automated filtering to identify suicidal posts and content related to terrorism, and it has attempted to use the technology to spot fake news on its massive platform.
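Unlike the image filters above, Jigsaw exposes Perspective as a web API rather than a model you run yourself. A minimal sketch of what a toxicity-scoring request looks like, based on the API’s public documentation; the endpoint, field names, and placeholder key should be checked against the current docs rather than taken as gospel:

```python
# Sketch: ask Perspective how likely a comment is to drive people out of
# a conversation. Requires an API key from Jigsaw; the request shape below
# follows the public documentation and may have changed since.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment_text):
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    data = response.json()
    # Summary score is a probability-like value between 0 and 1.
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

A newsroom using something like this still has to pick the score above which a comment gets held for human review — the same threshold problem the image filters face.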

The whole thing still relies on human oversight to function; we’re better with ambiguity and discerning context. Zeiler tells me that he doesn’t think his product has put anyone out of work. It’s intended to solve the “scale problem,” as he puts it, of the internet. A wedding blog that Clarifai used to work with adopted the product to automate content moderation, and the human editors who’d formerly been in charge of approving images were moved to more qualitative tagging tasks. That’s not to underplay the real human cost of automation: people have to train the AIs, and sorting through content and tagging it so artificial intelligence can discern what is and isn’t relevant can cause PTSD. Seeing some of the worst images and videos humans can come up with is a brutal job.

This, though, is the future of moderation: individual, off-the-shelf solutions provided by companies that make it their entire business to train ever-better classifiers on more and more data. In the same way that Stripe and Square offer readymade payment processing for businesses that don’t want to handle it internally, and Amazon Web Services (AWS) has established itself as the place where sites are hosted, startups like Zeiler’s Clarifai, DeLorge’s Picnix, and Shapiro’s Lemay.ai are vying to be the one-stop solution to content moderation online. Clarifai already has software development kits for iOS and Android, and Zeiler says they’re working on getting their product running on Internet of Things-connected devices (think security cameras) — really, every device that has either an AI-optimized chip or just enough processing resources.

The whole thing still relies on human oversight to function

Dan Shapiro, of Lemay.ai, is hopeful. “As with any technology, it’s not finished being invented yet,” he says. “So I don’t think it’s super reasonable to go like, well, I’m dissatisfied with one deployment for one company. I guess we give up and go home.” But will they ever be good enough to act truly autonomously without human oversight? That’s murkier. “There’s [not] a little person in a box that filters every image,” he says. “You need training data from somewhere,” which means there’s always going to be a human element involved. “It’s a good thing because it’s moderating people.”

Zeiler, on the other hand, thinks there will be a day that artificial intelligence will moderate everything on its own. “Eventually, the amount of human intervention needed, it’s going to be either next to nothing or nothing for moderating nudity,” he says. “And I think a lot of human effort is going to shift into things that AI can’t do today, like high-level reasoning, and, you know, self-awareness, stuff like that that humans do have.”

Recognizing porn is part of that. Identifying it is a relatively trivial task for people, but it’s much more difficult to train an algorithm to recognize nuance. Figuring out the threshold at which a filter flags an image as pornographic is also difficult, and it’s mathematically governed: the trade-off is described by the precision-recall curve, where precision measures how much of what the filter flags is actually explicit and recall measures how much of the explicit content it actually catches. The model determines the shape of the curve, but a human still chooses where on it to operate — how sensitive the filter should be.
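To make that concrete — this is an illustration, not anything the companies in this story shared — given a held-out set of labeled images and the scores a classifier assigned to them, scikit-learn can trace the whole curve, leaving the choice of operating threshold to a person. The labels and scores below are made-up stand-ins for real evaluation data.

```python
# Sketch: compute the precision-recall curve for a nudity classifier's
# scores, then let a human pick an operating threshold.
from sklearn.metrics import precision_recall_curve

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                     # 1 = actually explicit
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.55, 0.7]   # model confidence

precision, recall, thresholds = precision_recall_curve(y_true, scores)

# Each threshold trades precision against recall; a person chooses one.
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```

Raise the threshold and fewer innocent posts get flagged but more porn slips through; lower it and the filter catches more, at the cost of flagging vases and witches. That choice is a policy decision, not a mathematical one.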

The point of an artificial intelligence, as Alison Adam put it in her 1998 book Artificial Knowing: Gender and the Thinking Machine, is to “model some aspect of human intelligence,” whether that’s learning, moving around and interacting in space, reasoning, or using language. Artificial intelligence is an imperfect mirror of how we see the world in the same way that porn is a reflection of what happens between people when they’re alone together: there’s a kind of truth in it, and it isn’t the whole picture.