Earlier this week, Google made history when its DeepMind-designed AI beat Go world champion Lee Se-dol, not just once but twice. Still not sure what Go is, or why Google built an artificial intelligence program to play it? Don't worry, we've got you covered.
What is Go?
Go is a board game where two players compete to control the most territory on the game board.
How is it played?
It's played on a 19-by-19 grid with flat, round pieces called "stones." One player uses black stones. The other uses white. Black and White take turns placing their stones on empty intersections on the grid. Opponents spend the game trying to surround or border empty intersections on the board with their stones.
How do you win?
The game ends when both players agree there are no useful moves left to make and pass their turns. Players receive points for the number of empty intersections they've surrounded. A player can also "capture" her opponent's stones by surrounding them with her own. These captured stones are subtracted from that opponent's score at the game's end. The player with the most points wins.
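The capture rule is simple enough to sketch in a few lines of code. The sketch below is purely illustrative (the board representation, function names, and tiny 5-by-5 board are our own, not anything from AlphaGo): a connected group of stones is captured when it has no "liberties," that is, no adjacent empty intersections left.

```python
# Illustrative sketch of Go's capture rule on a simple grid.
# 'B' = black stone, 'W' = white stone, '.' = empty intersection.

def neighbors(x, y, size):
    """Yield the on-board points adjacent to (x, y)."""
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < size and 0 <= ny < size:
            yield nx, ny

def group_and_liberties(board, x, y):
    """Flood-fill the connected group containing (x, y); return its
    stones and whether it has at least one liberty (adjacent empty
    point). A group with no liberties is captured."""
    size = len(board)
    color = board[y][x]
    group, seen, has_liberty = [], {(x, y)}, False
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        group.append((cx, cy))
        for nx, ny in neighbors(cx, cy, size):
            if board[ny][nx] == '.':
                has_liberty = True
            elif board[ny][nx] == color and (nx, ny) not in seen:
                seen.add((nx, ny))
                stack.append((nx, ny))
    return group, has_liberty

# A lone white stone completely surrounded by black has no
# liberties, so it would be removed from the board.
board = [list(row) for row in [
    ".....",
    "..B..",
    ".BWB.",
    "..B..",
    ".....",
]]
group, alive = group_and_liberties(board, 2, 2)
print(group, alive)  # → [(2, 2)] False
```

Real scoring then counts each player's surrounded empty points and subtracts her captured stones, exactly as described above.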
How long have people been playing Go?
Go originated in ancient China around 3,000 years ago and eventually spread to Japan and Korea. In China, it was considered one of the four essential arts required of a scholarly gentleman. In Japan, it was revered among warlords, and the Shogun oversaw competitions between elite players. The game became a popular tradition in nations across the globe, celebrated for its simple rules that give way to immense complexity through gameplay. This complexity might be why Go became popular among some of the West's most notable minds. Albert Einstein was known to have played Go while at Princeton. And Alan Turing, the father of computer science, introduced the game to Dr. Jack Good at Bletchley Park while they worked on breaking the Enigma code.
Why was Go chosen for the Google DeepMind showdown?
There are a handful of classic board games that have been used as benchmarks for progress in the field of artificial intelligence: tic-tac-toe, checkers, Othello, chess, and now Go. These games don't conceal information from the players, unlike poker or Battleship, and have no element of chance, unlike backgammon or Monopoly. Go is the last of these games in which humans have the advantage over computers. Or, it was. Designing a program that can beat the world's best Go players is generally considered a significant achievement for the field.
Why not just use the program that beat the world chess champion back in the ’90s?
That chess program was awesome, but it's really only good at chess. For many years, designing a program that could beat top humans at Go was believed to be out of reach for computer scientists. IBM's Deep Blue, the program that beat world chess champion Garry Kasparov back in 1997, was programmed by chess experts with a library of potential chess moves that it could pull from during the match. The game of Go, notably, has more possible board configurations than there are atoms in the observable universe. Simply programming AlphaGo with all potential Go moves wasn't an option.
So how did Google DeepMind design a program that's currently wiping the floor with Go world champion Lee Se-dol?
DeepMind needed to design a program that could create strategies instead of relying on brute-force search to pull moves out of a library. To do this, it used a combination of three different artificial intelligence techniques. The first was supervised learning, where the team trained AlphaGo on records of games between expert human players so it could learn to play well. The second was reinforcement learning through self-play, where AlphaGo played a ton of games against itself and used deep learning to figure out how to play the game better. The last was Monte Carlo Tree Search, a really efficient way of searching through potential moves. These, combined with Google DeepMind's access to enormous amounts of computing power, are how AlphaGo has beaten Lee Se-dol two matches to zero.
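To give a feel for the last of those techniques, here is a minimal, self-contained sketch of Monte Carlo Tree Search applied to a toy game (a Nim-style counting game, not Go). Everything in it, from the game to the parameter values, is our own illustration; AlphaGo's actual search is vastly more sophisticated and is guided by its neural networks rather than by purely random playouts. But the core loop is the same: repeatedly select a promising line of play, try a new move, simulate to the end, and feed the result back up the tree.

```python
# Illustrative Monte Carlo Tree Search on a tiny Nim-style game:
# players alternate taking 1-3 stones; whoever takes the last wins.
# (Toy example only; not AlphaGo's implementation.)
import math
import random

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones              # game state: stones remaining
        self.parent, self.move = parent, move
        self.children, self.wins, self.visits = [], 0, 0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

    def ucb1(self, c=1.4):
        # Balance exploiting good moves vs. exploring rare ones.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(stones):
    """Play random moves to the end; return 1 if the player to move
    from this position wins, else 0."""
    mover_is_first = True
    while True:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return 1 if mover_is_first else 0
        mover_is_first = not mover_is_first

def mcts(root_stones, iterations=2000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Select: descend via UCB1 until a node with untried moves.
        while not node.untried_moves() and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expand: add one new child for an untried move.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulate: random playout from the new position.
        result = rollout(node.stones) if node.stones > 0 else 0
        # 4. Backpropagate: each node stores wins for the player who
        # moved into it, so the perspective flips at every level.
        while node:
            node.visits += 1
            node.wins += 1 - result
            node, result = node.parent, 1 - result
    return max(root.children, key=lambda c: c.visits).move

print(mcts(10))  # game theory says take 2, leaving a multiple of 4
```

In AlphaGo, a neural network trained on human games and self-play replaces both the random move selection and the random playouts, which is what makes searching a game as vast as Go tractable.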
If you're interested in learning what AlphaGo's wins mean for the future of artificial intelligence, check out The Verge's detailed interview with Demis Hassabis, the founder of Google's DeepMind and the man behind AlphaGo.