An institute studying ‘existential risk’ has made a Civilization mod about superintelligent AI

The Cambridge University-based group says the add-on is part outreach, part research

Whether or not you think superintelligent AI poses a credible threat to humanity in the near future, you have to admit it’s a problem at least worth thinking about — or hey, maybe even playing a game about it. That’s why Cambridge University’s Centre for the Study of Existential Risk (CSER) has released a mod for popular strategy title Civilization V that’s all about mitigating the threat from superintelligent AI.

CSER isn’t usually known for its gaming products. The research group was founded in 2012, and is dedicated to exploring various global catastrophes capable of collapsing civilization or wiping out humanity altogether — otherwise known as “existential threats.” These include superintelligent AI (a computer program that becomes much, much more clever than humans and decides we are somehow superfluous to its needs), but also things like runaway climate change and bioengineered pandemics.

Speaking to The Verge, CSER’s Shahar Avin, a postdoctoral researcher who managed the Civilization project, says the intent in creating the mod was part educational and part research. “We had the idea in the center that we wanted to do outreach for the idea of superintelligence — to get people with the right skillset interested, grow the field of people who care about AI safety, and test our own ideas,” Avin says. (See also: the fabulous text-based game, Universal Paperclips.)

IBM’s Deep Blue computer appears as a wonder in the game, giving players a boost to their AI research.
Image: CSER / STEAM

The result is a pair of mods for Civilization V and its DLC Brave New World that replace the game’s usual science-based victory condition (achieved by launching a spaceship to Earth’s nearest star system, Alpha Centauri) with one dedicated to building superintelligent AI. There are new buildings like AI research labs, a new wonder (Deep Blue, the computer that defeated Garry Kasparov at chess in 1997), and a new mechanism called AI risk.

Avin explains: as players research artificial intelligence in order to build a superintelligent AI and win the game, a global counter named “AI risk” slowly ticks up. If this counter fills, players are told that somewhere in the world a rogue AI was created and everyone instantly loses. “It captures the essence of our research,” says Avin. “You play through this long arc of history, but it all ends if you don’t manage your technology right.”
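The mechanic Avin describes can be sketched in a few lines. This is an illustrative Python sketch, not the mod’s actual code (Civilization V mods are scripted in Lua); the threshold value and function name here are hypothetical, chosen only to show the shape of a shared lose-condition counter.

```python
# Hypothetical sketch of the mod's "AI risk" mechanic: a single global
# counter rises as any player pursues AI research, and when it fills,
# a rogue AI is created somewhere in the world and every player loses.

RISK_CAP = 100  # assumed threshold at which a rogue AI emerges


def update_ai_risk(risk: int, unsafe_research: int) -> tuple[int, bool]:
    """Advance the global risk counter by one turn's unsafe AI research.

    Returns the new risk level and whether the game ends in a loss
    for everyone (the counter has filled).
    """
    risk = min(RISK_CAP, risk + unsafe_research)
    return risk, risk >= RISK_CAP
```

The point of the design is that the counter is global: it tracks every civilization’s research combined, so no single player can win simply by racing ahead.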

Superintelligence has long been a concern for a small number of AI experts, as well as some more vocal public figures. (We’re looking at you, Elon Musk.) Surveys in the field reveal there is mixed opinion on whether or not malicious AI is a threat even in the long-term future. But there is consensus that our current AI tools are too crude to replicate the intelligence of, say, a smart rat — let alone something cleverer than a human. The more pressing risks are things like algorithmic bias and AI-powered surveillance, technology that is already being built into societies around the world with little or no forethought.

CSER’s own Civilization mod does offer players one reliable counter to a malicious superintelligence: more research. Players can keep the global rogue AI counter low by dedicating resources to building AI safety labs in their cities, and paying for city states to build them, too. “If you choose to go down the AI path, you need to make sure you have more safe AI research than rogue AI research,” says Avin. “Investment in AI safety is in some sense altruistic, and we try to replicate that.”
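Avin’s point about balancing safe against rogue research can be stated as a simple per-turn balance. Again, this is an assumed illustration of the mechanic as described, not the mod’s implementation; the function name is invented for the example.

```python
# Hypothetical sketch of the safety-offset mechanic: AI safety labs
# counteract risky AI research, so the net movement of the global
# risk counter each turn depends on the balance between the two.

def risk_delta(ai_research: int, safety_research: int) -> int:
    """Net change in the global AI-risk counter for one turn.

    A civilization funding more safety research than raw AI research
    contributes a negative delta, slowing the counter for everyone.
    """
    return ai_research - safety_research
```

This also captures why Avin calls safety investment “in some sense altruistic”: a negative delta benefits every player, not just the one paying for the labs.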

Although Civilization V is not a serious research tool, Avin says playing the game prompted a number of insights. He says: “Something that struck me as surprising was if the geopolitical situation is very messy — let’s say you’re stuck between two aggressive civilizations. It becomes very difficult to manage AI risk, because your resources are devoted to fighting wars.”

There seems to be a straightforward lesson here for the real world: when facing threats on a global scale, we need a global response, and fights between nations only make this harder. “I think that’s the same problem we’re seeing with climate change,” says Avin. “There are these problems where you need altruism and strong international cooperation. Things that seem short-term now could end up having a very significant effect in the future.”