Why the AI industry could stand to slow down a little

This year has given us a bounty of innovations. We could use some time to absorb them.

Image: Haein Jeong / The Verge

I.

What a difference four months can make.

If you had asked in November how I thought AI systems were progressing, I might have shrugged. Sure, by then OpenAI had released DALL-E, and I found myself enthralled with the creative possibilities it presented. On the whole, though, after years of watching the big platforms hype up artificial intelligence, I felt that few products on the market lived up to the grandiose visions that had been described for us.

Then OpenAI released ChatGPT, the chatbot that captivated the world with its generative possibilities. Microsoft’s GPT-powered Bing chatbot, Anthropic’s Claude, and Google’s Bard followed in quick succession. AI-powered tools are quickly working their way into other Microsoft products, and more are coming to Google’s.

At the same time, as we inch closer to a world of ubiquitous synthetic media, some danger signs are appearing. Over the weekend, an image of Pope Francis that showed him in an exquisite white puffer coat went viral — and I was among those who were fooled into believing it was real. The founder of the open-source intelligence site Bellingcat was banned from Midjourney after using it to create and distribute some eerily plausible images of Donald Trump getting arrested. (The company has since disabled free trials following an influx of new signups.)

Synthetic text is rapidly making its way into the workflows of students, copywriters, and anyone else engaged in knowledge work; this week BuzzFeed became the latest publisher to begin experimenting with AI-written posts.

At the same time, tech platforms are cutting members of their AI ethics teams. A large language model created by Meta leaked and was posted to 4chan, and soon someone figured out how to get it running on a laptop.

Elsewhere, OpenAI released plug-ins for GPT-4, allowing the language model to access APIs and interface more directly with the internet, sparking fears that it would create unpredictable new avenues for harm. (I asked OpenAI about that one directly; the company didn’t respond to me.)

It is against the backdrop of this maelstrom that a group of prominent technologists is now asking makers of these tools to slow down. Here’s Cade Metz and Gregory Schmidt at the New York Times:

More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present “profound risks to society and humanity.”

A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” according to the letter, which the nonprofit Future of Life Institute released on Wednesday.

Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.

If nothing else, the letter strikes me as a milestone in the march of existential AI dread toward mainstream awareness. Critics and academics have been warning about the dangers posed by these technologies for years. But as recently as last fall, few people playing around with DALL-E or Midjourney worried about “an out-of-control race to develop and deploy ever more powerful digital minds.” And yet here we are.

There are some worthwhile critiques of the technologists’ letter. Emily M. Bender, a professor of linguistics at the University of Washington and an AI critic, called it a “hot mess,” arguing in part that doomerism like this winds up benefiting AI companies by making them seem much more powerful than they are. (See also Max Read on that subject.)

In an embarrassment for a group nominally worried about AI-powered deception, a number of the people initially presented as signatories to the letter turned out not to have signed it. And Forbes noted that the institute that organized the letter campaign is primarily funded by Musk, who has AI ambitions of his own.

There are also arguments that speed should not be our primary concern here. Last month Ezra Klein argued that our real focus should be on these systems’ business models. The fear is that ad-supported AI systems will prove more powerful at manipulating our behavior than we currently anticipate — and that will be dangerous no matter how fast or slow we choose to go here. “Society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions,” Klein wrote.

These are good and necessary criticisms. And yet whatever flaws we might identify in the open letter — I apply a pretty steep discount to anything Musk in particular has to say these days — in the end I’m persuaded by their collective argument. The pace of change in AI does feel as if it could soon overtake our collective ability to process it. And the change the signatories are asking for — a brief pause in the development of language models larger than the ones that have already been released — feels like a minor request in the grand scheme of things.

Tech coverage tends to focus on innovation and the immediate disruptions that stem from it. It’s typically less adept at thinking through how new technologies might cause society-level change. And yet the potential for AI to dramatically affect the job market, the information environment, cybersecurity, and geopolitics — to name just four concerns — should give us all reason to think bigger.

II.

Aviv Ovadya, who studies the information environment and whose work I have covered here before, served on a red team for OpenAI prior to the launch of GPT-4. Red-teaming is essentially a role-playing exercise in which participants act as adversaries to a system in order to identify its weak points. The GPT-4 red team discovered that if left unchecked, the language model would do all sorts of things we wish it wouldn’t, like hire an unwitting TaskRabbit to solve a CAPTCHA. OpenAI was then able to fix that and other issues before releasing the model.

In a new piece in Wired, though, Ovadya argues that red-teaming alone isn’t sufficient. It’s not enough to know what material the model spits out, he writes. We also need to know what effect the model’s release might have on society at large. How will it affect schools, or journalism, or military operations? Ovadya proposes that experts in these fields be brought in prior to a model’s release to help build resilience in public goods and institutions, and to see whether the tool itself might be modified to defend against misuse.

Ovadya calls this process “violet teaming”:

You can think of this as a sort of judo. General-purpose AI systems are a vast new form of power being unleashed on the world, and that power can harm our public goods. Just as judo redirects the power of an attacker in order to neutralize them, violet teaming aims to redirect the power unleashed by AI systems in order to defend those public goods.

In practice, executing violet teaming might involve a sort of “resilience incubator”: pairing grounded experts in institutions and public goods with people and organizations who can quickly develop new products using the (prerelease) AI models to help mitigate those risks.

If adopted by companies like OpenAI and Google, either voluntarily or at the insistence of a new federal agency, violet teaming could better prepare us for how more powerful models will affect the world around us.

At best, though, violet teams would only be part of the regulation we need here. There are so many basic issues we have to work through. Should models as big as GPT-4 be allowed to run on laptops? Should we limit the degree to which these models can access the wider internet, the way OpenAI’s plug-ins now do? Will a current government agency regulate these technologies, or do we need to create a new one? If so, how quickly can we do that?

I don’t think you have to have fallen for AI hype to believe that we will need an answer to these questions — if not now, then soon. It will take time for our sclerotic government to come up with answers. And if the technology continues to advance faster than the government’s ability to understand it, we will likely regret letting it accelerate.

Either way, the next several months will let us observe the real-world effects of GPT-4 and its rivals, and help us understand how and where we should act. But the knowledge that no larger models will be released during that time would, I think, give comfort to those who fear AI could be as harmful as some believe.

If I took one lesson away from covering the backlash to social media, it’s that the speed of the internet often works against us. Lies travel faster than anyone can moderate them; hate speech inspires violence more quickly than tempers can be calmed. Putting brakes on social media posts as they go viral, or annotating them with extra context, has made those networks more resilient to bad actors who would otherwise use them for harm.

I don’t know if AI will ultimately wreak the havoc that some alarmists are now predicting. But I believe those harms are more likely to come to pass if the industry keeps moving at full speed.

Slowing down the release of larger language models isn’t a complete answer to the problems ahead. But it could give us a chance to develop one.