ChatGPT proves AI is finally mainstream — and things are only going to get weirder

Researchers talk about the ‘capability overhang,’ or hidden skills and dangers, of artificial intelligence. As the technology goes mainstream, we’re going to discover a lot of new things about it.

A close-up image of a silicon mono-crystal. Silicon is a crucial component in AI.
Image: Catherine Breslin / Better Images of AI / Silicon Closeup / CC-BY 4.0

A friend of mine texted me earlier this week to ask what I thought of ChatGPT. I wasn’t surprised he was curious. He knows I write about AI and is the sort of guy who keeps up with whatever’s trending online. We chatted a bit, and I asked him: “And what do you think of ChatGPT?” To which he replied: “Well, I wrote a half-decent Excel macro with it this morning that saved me a few hours at work” — and my jaw dropped.

For context: this is someone whose job involves a fair bit of futzing around with databases but who I wouldn’t describe as particularly tech-minded. He works in higher education, studied English at university, and never formally learned to code. But here he was, not only playing around with an experimental AI chatbot but using it to do his job faster after only a few days’ access.

“I asked it some questions, asked it some more, put it into Excel, then did some debugging,” is how he described the process. “It wasn’t perfect but it was easier than Googling.”

Tools like ChatGPT have made AI publicly accessible like never before

Stories like this have been accumulating this week like the first spots of rain gathering before a downpour. Across social media, people have been sharing stories about using ChatGPT to write code, draft blog posts, compose college essays, compile work reports, and even improve their chat-up game (okay, that last one was definitely done as a joke, but the prospect of AI-augmented rizz is still tantalizing). As a reporter who covers this space, it’s been basically impossible to keep up with everything that’s happening, but there is one overarching trend that’s stuck out: AI is going mainstream, and we’re only just beginning to see the effect this will have on the world.

There’s a concept in AI that I’m particularly fond of that I think helps explain what’s happening. It’s called “capability overhang” and refers to the hidden capacities of AI: skills and aptitudes latent within systems that researchers haven’t even begun to investigate yet. You might have heard before that AI models are “black boxes” — that they’re so huge and complex that we don’t fully understand how they operate or come to specific conclusions. This is broadly true and is what creates this overhang.

“Today’s models are far more capable than we think, and our techniques available for exploring [them] are very juvenile,” is how AI policy expert Jack Clark described the concept in a recent edition of his newsletter. “What about all the capabilities we don’t know about because we haven’t thought to test for them?”

Capability overhang is a technical term, but it also perfectly describes what’s happening right now as AI enters the public domain. For years, researchers have been on a tear, pumping out new models faster than they can be commercialized. But in 2022, a glut of new apps and programs suddenly made these skills available to a general audience, and in 2023, as we continue to explore this new territory, things will start changing — fast.

The bottleneck has always been accessibility, as ChatGPT demonstrates. The bones of this program are not entirely new: it’s based on GPT-3.5, a large language model released by OpenAI this year, which is itself an upgrade to 2020’s GPT-3. OpenAI had previously sold access to GPT-3 as an API, but tuning the model for natural dialogue and then publishing it on the web for anyone to play with brought it to a much bigger audience. And no matter how imaginative AI researchers are in probing a model’s skills and weaknesses, they’ll never be able to match the mass and chaotic intelligence of the internet at large. All of a sudden, the overhang is accessible.

The same dynamic can also be seen in the rise of AI image generators. Again, these systems have been in development for years, but access was restricted in various ways. This year, though, systems like Midjourney and Stable Diffusion allowed anyone to use the technology for free, and suddenly AI art is everywhere. Much of this is due to Stable Diffusion, which offers an open-source license for companies to build on. In fact, it’s an open secret in the AI world that whenever a company launches some new AI image feature, there’s a decent chance it’s just a repackaged version of Stable Diffusion. This includes everything from viral “magic avatar” app Lensa to Canva’s AI text-to-image tool to MyHeritage’s “AI Time Machine.” It’s all the same tech underneath.

As the metaphor suggests, though, the prospect of a capability overhang isn’t necessarily good news. Alongside hidden and emerging capabilities, there are hidden and emerging threats. And these dangers, like our new skills, are almost too numerous to name. How, for example, will colleges adapt to the proliferation of AI-written essays? Will the creative industries be decimated by the spread of generative AI? Is machine learning going to create a tsunami of spam that will ruin the web forever? And what about the inability of AI language models to distinguish fact from fiction, or the proven biases of AI image generators that sexualize women and people of color? Some of these problems are known; others are ignored; and still more are only just beginning to be noticed. As the excitement of 2022 fizzles out, it’s certain that 2023 will contain some rude awakenings.

Welcome to the AI overhang. Hold on tight.