On the internet, nobody knows you’re a human

As bots, avatars, and AI get more and more human, how do creators prove they’re the real deal?

Illustration of a woman looking exasperated as she’s surrounded by CAPTCHA-style popups. Brian Scagnelli / The Verge

Last April, 27-year-old Nicole posted a TikTok video about feeling burned out in her career. When she checked the comments the next day, however, a different conversation was going down. 

“Jeez, this is not a real human,” one commenter wrote. “I’m scared.” 

“No legit she’s AI,” another said. 

Nicole, who lives in Germany, has alopecia. It’s a condition that can result in hair loss across a person’s body. Because of this, she’s used to people looking at her strangely, trying to figure out what’s “off,” she says over a video call. “But I’ve never had this conclusion made, that [I] must be CGI or whatever.”

Over the past few years, AI tools and CGI creations have gotten better and better at pretending to be human. Bing’s new chatbot is falling in love, and influencers like CodeMiko and Lil Miquela ask us to treat a spectrum of digital characters like real people. But as the tools to impersonate humanity get ever more lifelike, human creators online are sometimes finding themselves in an unusual spot: being asked to prove that they’re real.

Almost every day, a person is asked to prove their own humanity to a computer. In 1997, researchers at the information technology company Sanctum invented an early version of what we now know as “CAPTCHA” as a way to distinguish between automatic computerized action and human action. The acronym, later coined by researchers at Carnegie Mellon University and IBM in 2003, is a stand-in for the somewhat bulky “Completely Automated Public Turing test to tell Computers and Humans Apart.” CAPTCHAs are employed to prevent bots from doing things like signing up for email addresses en masse, invading commerce websites, or infiltrating online polls. They require every user to identify a series of obscured letters or sometimes simply check a box: “I am not a robot.”
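
The mechanics are simple enough to sketch. Here is a toy version of the original text-based flow in Python: generate a random challenge, show it to the user (a real CAPTCHA would render it as distorted, hard-to-read text), and compare the typed answer. The function names and plain-text rendering are illustrative stand-ins, not any real CAPTCHA library.

```python
import random
import string

def generate_captcha(length=6):
    """Random challenge string; a real CAPTCHA renders this as a distorted image."""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def verify_captcha(challenge, user_answer):
    """A human who can read the obscured text types it back; a bot, ideally, can't."""
    return user_answer.strip().upper() == challenge

challenge = generate_captcha()
print(f"Type the characters you see: {challenge}")
# Simulate a correct human answer typed in lowercase:
print("Pass" if verify_captcha(challenge, challenge.lower()) else "Fail")
```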

This relatively benign practice took on new significance in 2023, when the rise of OpenAI tools like DALL-E and ChatGPT amazed and spooked their users. These tools can produce complex visual art and churn out legible essays with the help of just a few human-supplied keywords. ChatGPT boasts 30 million users and roughly 5 million visits a day, according to The New York Times. Companies like Microsoft and Google scrambled to announce their own competitors.

It’s no wonder, then, that AI paranoia from humans is at an all-time high. Those accounts that just DM you “hi” on Twitter? Bots. That person who liked every Instagram picture you posted in the last two years? A bot. A profile you keep running into on every dating app no matter how many times you swipe left? Probably also a bot.

The accusation that someone is a “bot” has become something of a witch hunt among social media users, used to discredit those they disagree with by insisting their viewpoint or behavior is not legitimate enough to have real support. For instance, supporters on both sides of the Johnny Depp and Amber Heard trial claimed that online support for the other was at least somewhat made up of bot accounts. More so than ever before, we’re not sure if we can trust what we see on the internet — and real people are bearing the brunt. 

For Danisha Carter, a TikToker who shares social commentary, speculation about whether or not she was a human started when she had just 10,000 TikTok followers. Viewers started asking if she was an android, accusing her of giving off “AI vibes,” and even asking her to film herself doing a CAPTCHA. “I thought it was kind of cool,” she admitted over a video call.

“I have a very curated and specific aesthetic,” she says. This includes using the same framing for every video and often the same clothes and hairstyle. Danisha also tries to stay measured and objective in her commentary, which similarly makes viewers suspicious. “Most people’s TikTok videos are casual. They’re not curated, they’re full body shots, or at least you see them moving around and engaging in activities that aren’t just sitting in front of the camera.”

After she first went viral, Nicole attempted to respond to her accusers by explaining her alopecia and pointing out human qualities like her tan lines from wearing wigs. The commenters weren’t buying it. 

“People would come with whole theories in the comments, [they] would say, ‘Hey, check out this second of this. You can totally see the video glitching,’” she says. “Or ‘you can see her glitching.’ And it was so funny because I would go there and watch it and be like, ‘What the hell are you talking about?’ Because I know I’m real.”

But there’s no way for Nicole to prove it, because how does one prove their own humanity? While AI tools have accelerated exponentially, our best method for proving someone is who they say they are is still rudimentary, like a celebrity posting a photo with a handwritten sign for a Reddit AMA — or, wait, is that them, or is it just a deepfake?

While developers like OpenAI itself have released “classifier” tools for detecting if a piece of text was written by an AI, any advance in CAPTCHA tools has a fatal flaw: the more people use computers to prove they’re human, the smarter computers get at mimicking them. Every time a person takes a CAPTCHA test, they’re contributing a piece of data the computer can use to teach itself to do the same thing. By 2014, Google found that an AI could solve the most complicated CAPTCHAs with 99 percent accuracy. Humans? Just 33 percent.
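
That feedback loop is easy to demonstrate in miniature. In the sketch below, scikit-learn’s bundled digit images stand in for CAPTCHA characters (an assumption for illustration; real CAPTCHAs are distorted multi-character images), and the human-assigned labels play the role of solved tests. A stock classifier trained on those pairs learns to pass the “challenge” on its own.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 8x8 character images, each labeled by a human

# Every (image, label) pair is exactly the kind of data a solved CAPTCHA hands over.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = SVC(gamma=0.001).fit(X_train, y_train)
print(f"Machine accuracy on unseen challenges: {model.score(X_test, y_test):.0%}")
```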

So engineers threw out text in favor of images, instead asking humans to identify real-world objects in a series of pictures. You might be able to guess what happened next: computers learned how to identify real-world objects in a series of pictures. 

We’re now in an era of omnipresent CAPTCHA called “No CAPTCHA reCAPTCHA”: an invisible test that runs in the background of participating websites and determines our humanity based on our own behavior — something, eventually, computers will outsmart, too.
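
From a participating website’s point of view, that invisible test boils down to asking Google for a verdict about each visitor. Here is a minimal server-side sketch based on Google’s documented reCAPTCHA verification endpoint; the secret key and the 0.5 cutoff are placeholders, and the 0-to-1 “humanness” score applies to reCAPTCHA’s invisible v3 variant.

```python
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key"  # placeholder; issued per site by Google

def looks_human(token_from_browser: str, threshold: float = 0.5) -> bool:
    """Ask Google to score the token the browser generated in the background.
    v3 returns a 0.0-1.0 score instead of a pass/fail puzzle; the site
    decides for itself what counts as 'probably human.'"""
    result = requests.post(
        VERIFY_URL,
        data={"secret": SECRET_KEY, "response": token_from_browser},
        timeout=10,
    ).json()
    return result.get("success", False) and result.get("score", 0.0) >= threshold
```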

Melanie Mitchell, a scientist, professor, and author of Artificial Intelligence: A Guide for Thinking Humans, characterizes the relationship between CAPTCHA and AI as a never-ending “arms race.” Rather than hope for one be-all, end-all online Turing test, Mitchell says this push-and-pull is just going to be a fact of life. False bot accusations against humans will become commonplace: not just a peculiar online predicament but a real-life problem.

“Imagine if you’re a high school student and you turn in your paper and the teacher says, ‘The AI detector said this was written by an AI system. Fail,’” Mitchell says. “It’s almost an insolvable problem just using technology alone. So I think there’s gonna have to be some kind of legal, social regulation of these [AI tools].” 

These murky technological waters are exactly why Danisha is pleased her followers are so skeptical. She now plays into the paranoia and makes the uncanny nature of her videos part of her brand.

“It’s really important that people are looking at profiles like mine and saying, ‘Is this real?’” she says. “‘If this isn’t real, who’s coding it? Who’s making it? What incentives do they have?’” 

Or maybe that’s just what the AI called Danisha wants you to think.