
OpenAI can’t tell if something was written by AI after all

OpenAI shuts down a tool meant to detect AI-written text due to low accuracy.


Image: OpenAI

OpenAI has shuttered a tool that was supposed to distinguish human writing from AI-generated text, citing its low accuracy. In an updated blog post, OpenAI said it decided to end its AI classifier as of July 20th. “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company said.

As it shuts down the tool to catch AI-generated writing, OpenAI said it plans to “develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.” There’s no word yet on what those mechanisms might be, though.

OpenAI acknowledged the classifier was never very good at catching AI-generated text and warned that it could produce false positives: human-written text tagged as AI-generated. Before adding the update announcing the shutdown, OpenAI had said the classifier could improve with more data.

After OpenAI’s ChatGPT burst onto the scene and became one of the fastest-growing apps ever, people scrambled to grasp the technology. Several sectors raised alarms about AI-generated text and art, particularly educators worried that students would stop studying and simply let ChatGPT write their homework. New York City schools even banned access to ChatGPT on school grounds amid concerns about accuracy, safety, and cheating.

Misinformation via AI is also a concern, with studies showing that AI-generated text, such as tweets, may be more convincing than text written by humans. Governments haven’t yet figured out how to rein in AI and, so far, are leaving individual groups and organizations to set their own rules and develop their own protective measures for handling the onslaught of computer-generated text. For now, no one, not even the company that helped kick-start the generative AI craze in the first place, has answers on how to deal with it all. Though some people get caught, it’s only going to get harder to tell AI work from human work.

OpenAI also recently lost its trust and safety leader at a time when the Federal Trade Commission is investigating how the company vets information and data. OpenAI declined to comment beyond its blog post.