How artificial intelligence will revolutionize the way video games are developed and played

The advances of modern AI research could bring unprecedented benefits to game development

If you asked video game fans what an idealized, not-yet-possible piece of interactive entertainment might look like 10 or even 20 years from now, they might describe something eerily similar to the software featured in Orson Scott Card’s sci-fi classic Ender’s Game. In his novel, Card imagined a military-grade simulation anchored by an advanced, inscrutable artificial intelligence.

The Mind Game, as it’s called, is designed primarily to gauge the psychological state of young recruits, and it often presents its players with impossible situations to test their mental fortitude in the face of inescapable defeat. Yet the game is also endlessly procedural, generating environments and situations on the fly, and allows players to perform any action in a virtual world that they could in the real one. Going even further, it responds to the emotional and psychological state of its players, adapting and responding to human behavior and evolving over time. At one point, The Mind Game even draws upon a player’s memories to generate entire game worlds tailored to Ender’s past.

Researchers are just beginning to experiment with blending modern AI and video games

Putting aside the more morbid military applications of Card’s fantasy game (and the fact that the software ultimately develops sentience), The Mind Game is a solid starting point for a conversation about the future of video games and artificial intelligence. Why are games, and the AI used to both aid in creating them and drive the actions of virtual characters, not even remotely this sophisticated? And what tools or technologies do developers still require to reach this hypothetical fusion of AI and simulated reality?

These are questions researchers and game designers are just now starting to tackle as recent advances in the field of AI begin to move from experimental labs and into playable products and usable development tools. Until now, the kind of self-learning AI — namely the deep learning subset of the broader machine learning revolution — that’s led to advances in self-driving cars, computer vision, and natural language processing hasn’t really bled over into commercial game development. That’s despite the fact that some of these advancements in AI are thanks in part to software that’s improved itself through the act of playing video games, such as DeepMind’s unbeatable AlphaGo program and OpenAI’s Dota 2 bot that’s now capable of beating pro-level players.

But there exists a point on the horizon at which game developers could gain access to these tools and begin to create immersive and intelligent games that utilize what today is considered cutting-edge AI research. The result would be development tools that automate the building of sophisticated games that can change and respond to player feedback, and in-game characters that evolve the more time you spend with them. It sounds like fiction, but it’s closer to reality than we might think.

An image showing a repeating pattern of brain illustrations
Illustration by Alex Castro / The Verge

To better understand how AI might become more intertwined with video games in the future, it’s important to know the two fields’ shared history. Since the earliest days of the medium, game developers have been programming software both to act like a human player and to help create virtual worlds without a human designer needing to build every inch of those worlds from scratch.

From the software controlling a Pong paddle or a Pac-Man ghost to the universe-constructing algorithms of the space exploration title Elite, which helped pioneer the concept of procedural generation in games, developers have been employing AI in unique and interesting ways for decades. Conversely, Alan Turing, a founding father of AI, developed a chess-playing algorithm before a computer even existed to run it on.

But at a certain point, the requirements and end goals of game developers became largely satisfied by the kind of AI that we today would not think of as all that intelligent. Consider the difference between, say, the goombas you face off against in the original Super Mario Bros. and a particularly difficult, nightmarish boss in From Software’s action RPG Dark Souls 3. Or the procedural level design of the 1980 game Rogue and 2017’s hit dungeon crawler Dead Cells, which made ample use of the same technique to vary its level design every time you play. Under the hood, the delta between those old classics and the newer titles is not as dramatic as it seems.

Game AI has remained static because the underlying techniques haven’t radically changed

What makes Dark Souls so hard is that its bosses can move with unforgiving speed and precision, and because they are programmed to anticipate common human mistakes. But most enemy AI can still be memorized, adapted to, and overcome by even an average human player. (Only in very narrow domains, like chess, can AI typically brute force its way to a sure victory.) And even the procedurally generated universes of a game as vast and complex as Hello Games’ No Man’s Sky are still created using well-established mathematics and programming laid down by games like Rogue, Elite, and the titles that followed them.
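The determinism behind that well-established math is easy to demonstrate: a fixed algorithm plus a small pseudo-random seed expands into a full level, so the same seed always yields the same world. The "drunkard's walk" carver below is a common roguelike technique used purely as an illustrative sketch; the dimensions, symbols, and floor count are arbitrary choices for this example, not anything from Rogue or Elite themselves.

```python
import random

def carve_level(seed, width=20, height=10, floor_target=60):
    """Carve a cave-like level by random walk; same seed -> same level."""
    rng = random.Random(seed)                    # seeded, so deterministic
    grid = [["#"] * width for _ in range(height)]
    r, c = height // 2, width // 2               # start in the middle
    grid[r][c] = "."
    carved = 1
    while carved < floor_target:
        dr, dc = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        r = min(max(r + dr, 1), height - 2)      # stay inside the border
        c = min(max(c + dc, 1), width - 2)
        if grid[r][c] == "#":                    # carve only fresh wall cells
            grid[r][c] = "."
            carved += 1
    return ["".join(row) for row in grid]
```

Because the whole level derives from one integer, a game like Elite could describe an enormous universe while shipping only the algorithm and a handful of seeds.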

The lack of large, noticeable leaps is because the underlying AI governing how those virtual entities behave — and the AI powering procedural generation tools — has not undergone radical change over the years. “Two of the core components of commercial game AI are pathfinding and finite state machines,” explains Julian Togelius, an associate professor at New York University’s department of computer science and engineering who specializes in the intersection of AI and video games. “Pathfinding is how to get from point A to point B in a simple way, and it’s used in all the games all the time. A finite state machine is a construct where a [non-playable character] can be in different states and move between them.”
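Both building blocks Togelius names are simple enough to sketch directly. Below is a minimal, illustrative Python version: breadth-first search for grid pathfinding, and a three-state guard NPC as a finite state machine. The grid layout, state names, and triggers are invented for the example, not taken from any shipping engine (which would more likely use A* and far richer states).

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a grid of 0 (open) / 1 (wall) cells.
    Returns a shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                       # walk parents back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

class Guard:
    """A finite state machine: the NPC sits in one state at a time and
    moves between states when simple observations fire."""
    def __init__(self):
        self.state = "PATROL"

    def update(self, sees_player, low_health):
        if self.state == "PATROL" and sees_player:
            self.state = "ATTACK"
        elif self.state == "ATTACK" and low_health:
            self.state = "FLEE"
        elif self.state == "ATTACK" and not sees_player:
            self.state = "PATROL"
        return self.state
```

Nothing here learns or adapts: the guard's entire repertoire is enumerated by hand, which is exactly why such NPCs can be memorized and outplayed.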

Togelius says that modern games are using variations of these techniques — as well as more advanced approaches like the Monte Carlo tree search and what are known as decision and behavior trees — that are more sophisticated than they were in the early ‘80s and ‘90s. But a majority of developers are still operating off the same fundamental concepts and employing them at bigger scales and with the benefits of more processing power. “Of course, AI in commercial games is more complex than that, but those are some of the founding principles that you’ll see versions of all over,” he says.

Now, there’s a stark difference between the kind of AI you might interact with in a commercial video game and the kind of AI that is designed to play a game at superhuman levels. For instance, the most basic chess-playing application can handily beat a human being at the classic board game, just as IBM’s Deep Blue system bested Russian grandmaster Garry Kasparov back in 1997. And that type of AI research has only accelerated in recent years.

At Google-owned lab DeepMind, Facebook’s AI research division, and other AI outfits around the world, researchers are hard at work teaching software how to play ever-more sophisticated video games. That includes everything from the Chinese board game Go to classic Atari games to titles as advanced as Valve’s Dota 2, a competitive five-versus-five strategy contest that dominates the world’s professional gaming circuits.

The goal of most AI research involving games is to benchmark the software’s sophistication

The goal there is not to develop AI that will create more interesting, dynamic, and realistic game experiences; AI researchers are largely using games as a way to benchmark the intelligence level of a piece of software, and because virtual worlds, with their strict rule and reward systems, make particularly useful environments for training software. The hope is that by teaching this software to play games, human researchers can understand how to train machines to perform more complicated tasks in the future.

“First and foremost, the mission at DeepMind is to build an artificial general intelligence,” Oriol Vinyals, co-lead on the Google-owned AI lab’s StarCraft 2 project, said earlier this year, referring to the quest to build an AI agent that can perform any mental task a human being can. “To do so, it’s important to benchmark how our agents perform on a wide variety of tasks.”

A graphical representation of AlphaStar’s processing. The system sees the whole map from the top down and predicts what behavior will lead to victory.
Image: DeepMind

It’s precisely this kind of AI, and the other advances similarly achieved in teaching software how to recognize objects in photos and translate text into different languages, that game developers have largely avoided. But there’s a good reason why most games, even the most recent big-budget titles using the most sophisticated design tools and technologies, don’t employ that type of cutting-edge AI. That’s because truly self-learning software would likely make most games unplayable, either because the act of playing the game would be too wildly unpredictable or because the AI would behave in a way that could make telling a story or creating a satisfying feedback loop for players near-impossible.

“Game developers tend to prioritize the kinds of actions that we can predict. Even though it’s very interesting when AI does unpredictable things, it’s not necessarily super fun for players,” explains Tanya Short, a game designer and co-founder of the indie studio Kitfox Games. “So, unless the game is built around the unpredictability of the non-player characters, the AI doesn’t necessarily serve a great function when it’s allowed to run off on its own.”

“The AI doesn’t necessarily serve a great function when it’s allowed to run off on its own.”

Short says that most AI in games is the equivalent of “smoke and mirrors” — just sophisticated enough to make you think you’re interacting with something intelligent, but controlled and predictable enough to keep everything from going off the rails. “You can prioritize the raw computing power or the solution-oriented thinking of the machine or things like that,” she adds, “But in games we don’t value that almost at all. It’s nice for [research] papers, but what game designers actually want is for players to have a good experience.”

Togelius makes a similar point, stressing that machine learning-trained AI applications, outside the most narrow commercial applications like predictive text and image search, are simply too unpredictable to be usable in a video game at the moment. Imagine a virtual world where every character remembered you as a jerk or a criminal and acted with hostility, or a non-playable character central to a game’s storyline that never ends up performing the necessary action to reach the next level or embark on a pivotal quest.

“Typically when you design the game, you want to design an experience for the player. You want to know what the player will experience when he gets to that point in the game. And for that, if you’re going to put an AI there, you want the AI to be predictable,” Togelius says. “Now if you had deep neural networks and evolutionary computation in there, it might come up with something you had never expected. And that is a problem for a designer.” The result is that AI in games has remained relatively “anemic,” he adds.

Illustration by Alex Castro / The Verge

Another good reason why AI in games is not all that sophisticated is because it hasn’t traditionally needed to be. Mike Cook, a Royal Academy of Engineering research fellow at Queen Mary University of London, says that game developers became especially adept at using traditional techniques to achieve the illusion of intelligence — and that achieving that illusion has been the point.

“[Game developers] got really good at being efficient with technology. They realized that they couldn’t create perfectly intelligent creatures. They’ve realized that they couldn’t solve all of these problems. So they figured out how to juice what they could do,” Cook says. “They’d get the maximum out of it.”

Cook points to landmark first-person shooter games, like Bungie’s Halo franchise and Monolith Productions’ 2006 paranormal horror title F.E.A.R., that used AI in influential ways. The games didn’t use software that was more sophisticated than that of contemporary titles; rather, the developers succeeded at tricking players into thinking they were facing off against intelligent agents by having enemies broadcast their intentions.

In Halo, enemies would shriek the word “grenade” to one another before tossing in an explosive from behind cover, while the smaller, grunt-type foes would instruct their squads to flee when you took out the larger elite soldiers. In F.E.A.R., enemies would verbalize the path planning algorithms that controlled their behavior, but the developers dressed it up as an element of realism. Soldiers would shout to a fellow enemy to tell them when to flank, while others would call for backup if you were especially proficient at taking them down.

“The best AI [in games] is the AI you don’t notice. It’s the AI that seems spookily accurate at certain times or strangely omniscient. But not too omniscient, because then you’ll notice it’s definitely an AI,” Short says. She also points to Halo and F.E.A.R. as games that helped pioneer this concept of using lightweight AI to broadcast the software’s inner thoughts. “All they did was add this voice clip and suddenly people think, ‘Oh, it makes sense. They’re throwing a grenade. And I guess that’s tactical.’ It was an example of the AI getting no more sophisticated, but completely reversing what people felt that they were observing. And that is the heart of game design.”

Today, the most boundary-pushing game design doesn’t revolve around using modern AI, but rather creating complex systems that result in unexpected consequences when those systems collide, or what designers have come to call emergent gameplay. Take, for instance, Rockstar’s hyper-realistic Western game Red Dead Redemption 2, which lets players interact with non-playable characters in myriad, complex ways that elicit different reactions depending on everything from the hat you’re wearing to whether your clothes have blood stains on them. One notable viral clip, in which a player fires a warning shot into the sky only to inadvertently shoot a bird, enshrines Rockstar’s approach of creating a world so complex and believable that events can happen to one player that will never be experienced by anyone else.

Most boundary-pushing game design seeks to create realistic, convincing illusions

Another game that’s particularly adept at this is The Legend of Zelda: Breath of the Wild, which doesn’t use groundbreaking AI, but did create a cohesive open world with strict rules around everything from gravity and inertia to cooking and even the laws of thermodynamics. The result was a world with rules that could be bent in astonishing ways — equip a flaming sword so as to prevent cold weather from docking your health points, for instance — so long as you were crafty enough to figure out how its systems could build off one another. Similarly, the ever-evolving and iconic ASCII art simulation game Dwarf Fortress uses a dizzying number of clever systems, from procedurally generated erosion levels to varying mood states and alcohol proclivities of the dwarf inhabitants, to create unique and bizarre situations that its developers never explicitly designed for.

This kind of AI, which aims to build a sense of realism but doesn’t result in game-breaking outcomes, is the kind of immersive world-building that most developers are trying to achieve, regardless of how intelligent the pieces really are. “I think there’s a sense of building trust between the player and the game to make them believe in it. And I don’t think that’s cheap and I don’t think that’s bad. I actually think it’s really good because you’re basically asking the player to engage with the world,” Cook says. “You’re asking them to become an actor, to believe in what’s going on, which I think is really cool. I think it’s a great part of game designing and game writing.”

Again, the goal historically has not been to try and achieve some unprecedented level of human-like intelligence, but to create an experience or a world that engages and stimulates players in ways only the real world used to be capable of. “When we talk about DeepMind [software], we talk about how did it learn — how much data, how many CPUs,” Cook says. “But that’s only 50 percent of AI. The other 50 percent of AI is psychology. It’s how people react to machines and technology and how they perceive them. And actually, a lot of game AI ended up digging deep into that.”

A series of wireframe faces.
Illustration by Alex Castro / The Verge

So what would honest-to-goodness self-learning software look like in the context of video games? We’re a ways away from something as sophisticated as Orson Scott Card’s The Mind Game. But progress is being made, particularly in using AI to create art for games and to push procedural generation and automated game design to new heights.

“What we’re seeing now is the technological side of AI catching up and giving [developers] new abilities and new things that they can actually put into practice in their games, which is very exciting,” Cook says. As part of his research, Cook has been building a system he calls Angelina that designs games entirely from scratch, some of which he even made available for free on indie game marketplace Itch.io.

This type of experimentation with unpredictable AI in games is restricted mostly to academics and indie developers, Cook notes. But it’s that kind of work — away from the commercial pressures of big studios and publicly-traded game publishers — that is now laying the groundwork for true, AI-powered gaming experiences, ones that are purposefully designed around the ever-evolving nature of neural networks and machine learning-powered systems.

Cook sees a future in which AI becomes a kind of collaborator with humans, helping designers and developers create art assets, design levels, and even build entire games from the ground up. “I think you’re going to see tools that allow you to sit down and just make a game almost without thinking,” he says. “As you work, the system is recommending stuff to you. This doesn’t matter whether you’re an expert game designer or a novice. It will be suggesting rules that you can change, or levels that you can design.” Cook likens it to predictive text, such as Google’s machine learning-powered Smart Compose feature in Gmail, but for game design.

“You’re going to see tools that allow you to sit down and just make a game almost without thinking.”

The result of such tools would be that smaller teams could make much bigger and more sophisticated games. Additionally, larger studios could push the envelope when it comes to crafting open-world environments and creating simulations and systems that come closer to achieving the complexity of the real world. “So yes, on the one hand it will be much easier to make games. We could probably make bigger games. You’ll see these open world games will become much larger,” Cook says. “But I think one thing that I think we’ll see in particular is games where the rules systems are mutable and the rules are not the same every time you play them. They’re not even the same between you and your friend’s computer.”

It’s this kind of adaptable, evolving game design that could become the future of procedural generation. “I think that to me is the really exciting part of automated game design: the games aren’t finished designing until you stop playing them,” Cook says. He even imagines something similar to The Mind Game, where software could use self-provided personal information to create a game set in your hometown, or featuring characters based on your friends or family.

Togelius says, in the near term, AI will likely help developers test games before they’re released, with companies being able to rely on AI agents to playtest software at accelerated rates to discover bugs and iron out kinks in the gameplay. He also sees machine learning and other techniques as indispensable data-mining tools for in-game analytics, so game studios can study player behavior and decipher new insights to improve a game over time.  

He also points to remarkable progress in the area known as generative adversarial networks, or GANs, a machine learning technique that pits two neural networks against each other, one generating fakes from mounds of training data and the other judging them against the real thing, until the fakes are indistinguishable from the originals.

The result of GAN research is astounding progress in generating unique human faces that pass for real people and game graphics that look close to live video footage. “Currently you have character editors in games where you choose how big a nose you want, what exact skin tone you want and what hair you want and so on,” Togelius says. “These things are going to get a whole lot more advanced using generative methods in the future.”
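To make the adversarial setup concrete, here is a deliberately tiny sketch of a GAN training loop in plain NumPy. Instead of faces, the "generator" is just an affine map learning to mimic a 1-D Gaussian, and the "discriminator" is logistic regression; the learning rate, distributions, and step counts are invented for this illustration, and real systems like the one behind Nvidia's faces use deep convolutional networks on both sides. The structure of the two alternating updates, however, is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Generator: fake = a*z + b with noise z ~ N(0, 1). Its job is to
# mimic "real" samples drawn from N(4, 1).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the estimated probability
# that sample x came from the real data.
w, c = 0.1, 0.0
lr = 0.05

for _ in range(3000):
    real = rng.normal(4.0, 1.0, 64)
    fake = a * rng.normal(0.0, 1.0, 64) + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. learn to tell the two batches apart.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean((d_real - 1.0) * real + d_fake * fake)
    c -= lr * np.mean((d_real - 1.0) + d_fake)

    # Generator step: descend -log D(fake) (the non-saturating loss),
    # i.e. learn to fool the current discriminator.
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean((d_fake - 1.0) * w * z)
    b -= lr * np.mean((d_fake - 1.0) * w)

# After training, generated samples should drift toward the real mean.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
```

The same tug-of-war, scaled up to millions of parameters and image-sized outputs, is what lets a character editor propose whole plausible faces rather than individual nose sliders.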

Some of Nvidia’s AI-generated faces using the generative adversarial network method.
Image: Karras, Laine, Aila

Of course, the holy grail would be a true AI-powered in-game character, or an overarching game-designing AI system, that could change and grow and react as a human would as you play. It’s easy to speculate about how immersive, or dystopian, that might be, whether it resembles The Mind Game or something like the foul-mouthed, sentient alien character filmmaker and artist David O’Reilly created for the sci-fi movie Her.

But Togelius says handing control over to intelligent software systems could radically shift how we think about the very nature of games. “Creating AI that can actually be a game master is something that is really fascinating. Many people have had this vision for some while that you have an AI that not just serves your game but changes your game to suit you,” Togelius says. “So you can say the game plays the player as much as the player plays the game.”

Yet perhaps the most exciting element in that vision of the future is not just that a piece of software has taken on a creative role in the artistic process of building games, but also that this type of technology could create tailored experiences that are ever-changing and never grow old.

“When you think about the first time you played your favorite game, you only get that experience once. There’s no way to replicate that feeling. You can go back as an expert, but you can’t go back as a novice,” Cook says. “But automated game design lets you have that experience many times over because this game can be constantly redesigning itself and refreshing itself. It’s not just like a new kind of game. It’s also a whole new concept for playing games — a whole new concept for play in general, which is really cool.”
