Unity developed a video game designed to test AI players

The Obstacle Tower challenge is meant to be a new benchmark for AI researchers


Unity, a leading maker of game development tools, announced today that it has created a new kind of video game designed to be played not by humans, but by artificial intelligence. The game, called Obstacle Tower, judges the sophistication of an AI agent by measuring how efficiently it can climb up to 100 levels that scale in difficulty in unpredictable ways. Each level is procedurally generated, so it changes every time the AI attempts it.

With Obstacle Tower and a $100,000 pool of prizes set aside for participants to claim as part of a contest, Unity hopes it can provide AI researchers with a new benchmarking tool to evaluate self-learning software. “We wanted to give the researchers something to really work with that would to an extreme degree challenge the abilities of the AI systems that are currently in research and development around the world,” Danny Lange, Unity’s vice president of AI and machine learning, told The Verge. “What we really want to do here is create a tool for researchers to focus their work on and unite around and compare progress.”

Unity wants to create a new benchmark to spur AI researchers to compete

Video games are among the most useful training tools for AI researchers because of the vast amount of critical thinking, problem-solving, and path planning required to succeed at even simple arcade titles. And for years, the one game that proved especially challenging for AI agents, and therefore a solid benchmark against which to measure an AI's abilities, was the 1984 Atari classic Montezuma's Revenge. Unlike most games of its era, it provided few concrete feedback mechanisms for players, rewarding exploration and puzzle-solving rather than fast reflexes and precise aiming. That made it especially difficult for researchers to train AI software to learn as it played.

Yet AI agents are rapidly improving thanks to novel approaches to machine learning, which Unity cites as a motivation for creating Obstacle Tower. In November of last year, AI lab OpenAI published research on a variation of reinforcement learning, a technique in which an AI is given a reward mechanism and cycled through what can amount to hundreds of years of accelerated play time. By tailoring that reward to favor curiosity, OpenAI achieved record performance in Montezuma's Revenge.


Reinforcement learning is how Google's DeepMind trained software to beat the world's best players at Go and, as of last week, even StarCraft II. But the technique has traditionally been effective only in games where the parameters can be tightly controlled and the goals set for the AI agents are clear, concise, and free of potential distractions. For Montezuma's Revenge, OpenAI incentivized its algorithm to explore by essentially giving it a secret to find in the game's first level, which encouraged the agent to traverse more of the environment than it would have otherwise.
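OpenAI's actual system is considerably more involved, but the core idea of curiosity-driven reinforcement learning can be sketched simply: on top of the game's own (extrinsic) reward, the agent earns an intrinsic bonus for reaching states it hasn't seen much before, which shrinks as those states become familiar. The sketch below is illustrative only; all names are hypothetical and this is not OpenAI's implementation.

```python
# Minimal sketch of curiosity-driven reward shaping: novel states earn a
# large intrinsic bonus, frequently revisited states earn almost none.
from collections import defaultdict

class CuriosityReward:
    """Combines the game's extrinsic reward with a decaying novelty bonus."""

    def __init__(self, bonus_scale=1.0):
        self.visit_counts = defaultdict(int)  # how often each state was seen
        self.bonus_scale = bonus_scale

    def reward(self, state, extrinsic_reward):
        self.visit_counts[state] += 1
        # Bonus decays with the square root of the visit count, a common
        # count-based exploration heuristic.
        intrinsic = self.bonus_scale / (self.visit_counts[state] ** 0.5)
        return extrinsic_reward + intrinsic

curiosity = CuriosityReward()
first = curiosity.reward("room_1", 0.0)   # first visit: full bonus of 1.0
second = curiosity.reward("room_1", 0.0)  # second visit: smaller bonus
```

An agent trained against this shaped reward is nudged toward unexplored rooms even when the game itself hands out points only rarely, which is exactly the property that made progress on Montezuma's Revenge possible.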

In the case of Obstacle Tower, Unity is taking a similar design approach, though it's adding procedurally generated levels whose physical layout changes as the AI progresses. The game is essentially a modern take on Montezuma's Revenge. It mixes platforming and puzzle-solving, with players searching for keys and avoiding enemies and spike pits, so Lange says it should be an effective test of AI expertise in areas like computer vision, virtual locomotion, and planning. It's also in 3D and played in third person, which requires AI agents to exercise a more sophisticated level of spatial awareness as they move around the levels.
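Procedural generation is what keeps the benchmark honest: because each attempt draws a fresh layout, an agent can't simply memorize one tower and must generalize instead. A toy sketch of the idea, with all names and parameters hypothetical rather than Unity's actual generator, might look like this:

```python
# Illustrative sketch of seeded procedural floor generation: the same seed
# reproduces the same layout, while a new seed yields a new one, and
# difficulty scales with the floor number.
import random

def generate_floor(floor_number, seed):
    """Build a toy floor layout whose size and hazards grow with difficulty."""
    rng = random.Random(seed * 1000 + floor_number)  # deterministic per seed+floor
    size = 5 + floor_number                           # higher floors are larger
    rooms = [(rng.randrange(size), rng.randrange(size))
             for _ in range(3 + floor_number // 5)]   # and have more rooms
    has_key_puzzle = floor_number >= 5 and rng.random() < 0.5
    return {"floor": floor_number, "rooms": rooms, "key_puzzle": has_key_puzzle}

# Two episodes with different seeds produce different layouts for floor 10,
# so the agent must learn strategies that transfer rather than memorize.
episode_a = generate_floor(10, seed=1)
episode_b = generate_floor(10, seed=2)
```

Keeping generation deterministic per seed also matters for research: teams can replay a specific tower to debug an agent, then evaluate it on held-out seeds it has never trained on.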

“There’s a wide range of control problems, visual problems, and cognitive problems that you have to overcome to progress from level to level, and every level it gets harder,” Lange says. “We’ve had human players play and they can get to around level 15.” Unity plans to make Obstacle Tower open source, so game developers and researchers can modify it as they see fit. You’ll also be able to download it and try it yourself, in the event you’re interested in testing a game that was never intended to be played by a human.

‘Obstacle Tower’ is a modern-day ‘Montezuma’s Revenge’ designed for computers to play

As part of the contest it's hosting around the game, Unity says any participant can train an AI agent to scale the first 25 floors of the tower between February 11th and March 31st. Starting on April 15th, the full 100-floor game will be available, with winners announced on June 14th. Unity says it will be giving out cash prizes as well as travel vouchers and credits for Google's Cloud Platform. It's unclear exactly how the contest will reward researchers, whether for overall performance or for being the first team to develop an agent that can beat all 100 floors, but Unity plans to release more information in the coming weeks.

The ultimate goal is that new, specially tailored pieces of software like this will help create smarter AI agents that can learn more complex skills at ever-accelerating rates. Learning to play a video game won't translate directly to most real-world tasks we'll want robots to perform in the future; chances are, we won't want a robot trying and failing to vacuum the carpet or fry an egg thousands of times until it gets it right. (Although we may very well have the robot's software practice those tasks in a virtual simulation.) And only by training deep neural networks on massive data sets geared toward a singular, narrow purpose, such as recognizing objects in photos, can companies like Google turn advances in AI research into features we use in today's commercial products.

But by training AI to play video games without any instruction whatsoever, researchers are gaining a better understanding of how the mind solves problems and, more importantly, how it learns to solve new ones it's never encountered before. Challenges like Unity's Obstacle Tower give researchers a common target for that work, with an eventual milestone of creating what the industry calls artificial general intelligence: AI software that can perform any task a human can.

“A lot of people think that AI is about building better product recommendations at Amazon. But at the end of the day, it’s really solving way more complex problems. It’s about dealing with vision, control, and other cognitive challenges,” Lange says. For Unity as a company, he adds that this type of work is also about helping establish its game development toolset as a place where cutting-edge research can, down the line, translate to industry advances. “We have as a mission to democratize game development, but we also want to democratize AI. We want to make sure that a lot of developers out there can get their hands on it.”