Google vs. Go: can AI beat the ultimate board game?

Has AlphaGo solved one of the oldest problems in computer science?


The champion entered the room to the machine-gun clicking of camera shutters. He locked eyes with his opponent’s entourage, one of whom was among the most prominent billionaires on the planet and had just made lofty pronouncements about the importance of the upcoming clash. He looked out at the assembled audience and, in a soft but firm voice, explained why he was confident of vanquishing his rival and securing the $1 million in prize money. His reputation, as well as that of his discipline and his entire species, was at stake.

The champion’s name is Lee Se-dol, and tomorrow he will play a board game against a computer program from Google.

It’s not just any board game. This is Go, an ancient, abstract game that originated in China nearly 3,000 years ago, and a game of such staggering depth, nuance, and complexity that it’s long been considered impossible for computers to master.

But Google thinks it may have finally cracked Go. Its DeepMind artificial intelligence unit has been developing a deep-learning program called AlphaGo, which took down Fan Hui, the European Go champion, in a five-game series last October. Now it faces Lee Se-dol, a 33-year-old South Korean considered the game’s greatest 21st-century practitioner, who earns millions of dollars a year through his prowess. If AlphaGo prevails in this week’s series, it will mark a significant milestone in the history of AI with broad implications for the future of the technology, echoing IBM’s historic victory with its Deep Blue computer over chess grandmaster Garry Kasparov almost 20 years ago.

“The winner here, no matter what happens, is humanity,” said Alphabet chairman Eric Schmidt on stage in Seoul this morning. That’s unlikely to take the pressure off.


DeepMind approached Lee Se-dol to be AlphaGo’s next opponent for a simple reason: there can be no better way to test and stretch the system’s limits. Nicknamed "Ssen-dol" ("strong stone") at home, where he’s as much national hero as celebrity, Lee became a pro Go player at the age of 12 and won the first of his 18 world titles five years later.

"Fan Hui was a strong opponent, but he’s nowhere close to Lee Se-dol’s level," International Go Federation secretary general Lee Ha-jin — no relation — told me last week. Today, she described Lee Se-dol’s style as "intuitive, unpredictable, creative, intensive, wild, complicated, deep, quick, chaotic" in a presentation, which is not the kind of language you tend to associate with players of abstract board games. Or computers, for that matter.

"There is a beauty to the game of Go and I don’t think machines understand that beauty."

Lee Se-dol says he was surprised to receive AlphaGo’s challenge, as he hadn’t expected AI to reach the level of top players for another decade or so. "There is a beauty to the game of Go and I don’t think machines understand that beauty," he says. But he was instantly curious, and he took less than five minutes to accept the invitation.

On stage today, Lee Se-dol expressed confidence that he’d beat AlphaGo. "I believe human intuition is too advanced for AI to have caught up yet," he said, but noted that since learning more about AlphaGo’s capabilities he’s gotten slightly more nervous and is no longer convinced he’ll win the series 5-0. "I don’t make mistakes often," he said, "but if I make any single mistake as a human being I might lose."


Even a 4-1 victory for Lee Se-dol would represent a major achievement for DeepMind, a British company acquired by Google for a reported $400 million in 2014. The unit’s ultimate mission is no less than to "solve intelligence," with potential uses ranging from healthcare to robotics, but attaining the long-sought computer science dream of a world-beating Go program would catapult DeepMind to the forefront of AI research.

Go is "probably the most elegant game that humanity has invented," says DeepMind founder Demis Hassabis, a former child prodigy in chess himself. "There are more possible Go positions than there are atoms in the universe." Hassabis notes that while chess has an average of around 20 possible moves for a given position, Go gives the player about ten times as many options, resulting in a massively higher branching factor that is far harder for any AI to deal with.

Appropriately enough given DeepMind’s parent company, the solution requires a more efficient search algorithm. But this isn’t much use for Go without an improved ability to evaluate the game itself, which is the biggest challenge for computers — it’s much harder for them to work out who’s winning than it is in a game like chess, for example. AlphaGo is powered by two deep neural networks that guide its machine learning and search techniques, arriving at the best move by narrowing and shortening the search tree of possibilities. And while the program initially learned to play Go by being fed data from historical real-world matches, it’s since been trained further by playing thousands of matches against itself, continually reinforcing its ability. "I think it’s very impressive," says Murray Campbell, an IBM research scientist who was one of the principal creators of Deep Blue. "They’ve clearly advanced the state of the art."
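As a loose illustration of that idea (and only an illustration; AlphaGo’s actual networks, training, and tree search are far more sophisticated and are not reproduced here), the toy Python sketch below uses a stand-in "policy" score to narrow each position to a handful of candidate moves, and a stand-in "value" estimate to cut the search off at a fixed depth instead of playing every line out to the end. Every name in it (policy_prior, value_estimate, top_k, the toy game itself) is invented for the example.

```python
# Toy sketch: a "policy" prior narrows the tree (fewer candidate moves per position)
# and a "value" estimate shortens it (score a position without playing to the end).
# The random "networks" and the toy game are stand-ins, not AlphaGo's design.
import random

def policy_prior(position, moves):
    """Stand-in for a policy network: give each legal move a score in [0, 1)."""
    rng = random.Random(hash((position, tuple(moves))))
    return {m: rng.random() for m in moves}

def value_estimate(position):
    """Stand-in for a value network: estimate how good a position is, in [-1, 1]."""
    return random.Random(hash(position)).uniform(-1.0, 1.0)

def search(position, legal_moves, depth, top_k=3):
    """Depth-limited negamax that only explores the top_k moves the prior suggests."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return value_estimate(position), None             # shorten: stop and evaluate
    priors = policy_prior(position, moves)
    candidates = sorted(priors, key=priors.get, reverse=True)[:top_k]  # narrow
    best_score, best_move = float("-inf"), None
    for move in candidates:
        child = position + (move,)                        # toy "board": moves so far
        score, _ = search(child, legal_moves, depth - 1, top_k)
        if -score > best_score:                           # opponent's result, negated
            best_score, best_move = -score, move
    return best_score, best_move

if __name__ == "__main__":
    # Toy game: ten abstract moves are always legal until six moves have been played.
    legal = lambda pos: [] if len(pos) >= 6 else list(range(10))
    score, move = search(position=(), legal_moves=legal, depth=4)
    print(f"suggested first move: {move} (estimated score {score:+.2f})")
```

The point is only the shape of the trade-off: a good prior shrinks the width of the tree, and a good evaluation shrinks its depth.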

That’s how AlphaGo was able to shock the Go community by beating Fan Hui last October, the first time any computer program had ever beaten a pro player without using a handicap. "I expected Fan Hui to win," said Toby Manning, referee of that match and treasurer of the British Go Association. "I was really surprised at how well AlphaGo did."


Starting tomorrow, we’ll find out just how much DeepMind’s ingenuity and AlphaGo’s self-training have paid off. The five matches take place on Wednesday March 9th, Thursday March 10th, Saturday March 12th, Sunday March 13th, and finally Tuesday March 15th; all start at 1pm KST (11pm ET the previous night) and will be streamed live on YouTube. The Verge will be reporting on the series directly from Seoul, exploring various aspects of Go, artificial intelligence, and machine learning throughout the matches. You can keep track of our coverage in this dedicated hub.

But what’s going to happen? Really, the one thing I know is that no one knows for sure, whether they’re hardcore Go players or deep in the world of artificial intelligence. "AlphaGo learnt a lot from studying professional play, but it’s very difficult to get better than professional," says Manning. "It’s difficult to get better than your teacher just by listening to what your teacher tells you. Personally I suspect that AlphaGo will not beat Lee Se-dol. I expect it’ll be 4-1 or 3-2 in Lee Se-dol’s favour. But I’ve been wrong before!"

"From one data point it’s hard for me to tell," says IBM’s Campbell. "Lee Se-dol is clearly a couple of classes better at least than the European champion. If I have to put my neck out? I’d say that the computer will win but it will be close."


"On one hand, I believe that Lee Se-dol has an advantage of experience," says the University of Alberta’s Jonathan Schaeffer, a computer scientist who wrote Chinook, the first software program to solve checkers. "Lee knows a lot about Go. AlphaGo has only been learning for less than a year. It is possible that there are gaps in AlphaGo that are not yet apparent — they have not had enough time to learn well how to play all the scenarios that might arise in a game. On the other hand, never underestimate technology."

"I’ve never seen or imagined a computer playing to Lee Se-dol's level."

"I actually have no clue but I’m very excited to see the game because AlphaGo plays to the level of its opponent," says Lee Ha-jin. "When it plays Lee Se-dol it means it has to play to the Lee Se-dol level, and I’ve never seen or imagined a computer playing to that level."

Maybe Eric Schmidt struck the right note, then, when he said that any result will be a victory for humanity. And Hassabis is quick to note that DeepMind has grander ambitions than games, planning to turn its attention to real-world problems after AlphaGo demonstrates "how flexible and powerful learning algorithms can be." But with the result still very much in question, all eyes will be on Seoul tomorrow to find out if one more game has fallen to the machines.
