
OpenAI forms new team to assess ‘catastrophic risks’ of AI

OpenAI’s new preparedness team will address the potential dangers associated with AI, including nuclear threats.

A rendition of OpenAI’s logo, which looks like a stylized whirlpool. Illustration: The Verge
Emma Roth is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

OpenAI is forming a new team to mitigate the “catastrophic risks” associated with AI. In an update on Thursday, OpenAI says the preparedness team will “track, evaluate, forecast, and protect” against potentially major issues caused by AI, including nuclear threats.

The team will also work to mitigate “chemical, biological, and radiological threats,” as well as “autonomous replication,” or the act of an AI replicating itself. Some other risks that the preparedness team will address include AI’s ability to trick humans, as well as cybersecurity threats.

“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI writes in the update. “But they also pose increasingly severe risks.”

Aleksander Madry, who is currently on leave from his role as the director of MIT’s Center for Deployable Machine Learning, will lead the preparedness team. OpenAI notes that the preparedness team will also develop and maintain a “risk-informed development policy,” which will outline what the company is doing to evaluate and monitor AI models.

OpenAI CEO Sam Altman has warned of the potential for catastrophic events caused by AI before. In May, Altman and other prominent AI researchers issued a 22-word statement declaring that “mitigating the risk of extinction from AI should be a global priority.” During an interview in London, Altman also suggested that governments should treat AI “as seriously” as they treat nuclear weapons.
