Google’s AI AlphaGo has done it again: it’s defeated Ke Jie, the world’s number one Go player, in the first game of a three-part match. AlphaGo shot to prominence a little over a year ago after beating Korean legend Lee Se-dol 4-1 in one of the most potent demonstrations of the power of artificial intelligence to date. And its defeat of Ke shows that it was only getting started.
“I think everyone recognizes that Ke Jie is the strongest human player,” 9th-dan professional and commentator Michael Redmond said before the match. And despite defeat, Ke’s strategy suggested that the 19-year-old Chinese prodigy has actually learned from AlphaGo’s often unorthodox approach. “This is Master’s move,” said Redmond of one of Ke’s earliest plays, referring to the pseudonym that AlphaGo used for a recent series of online matches in which it racked up a 60-game winning streak.
AlphaGo won by just half a point, the closest margin possible, but that’s characteristic of its playing style. The AI doesn’t appear to care about the margin of victory, instead choosing moves that it has determined are the most likely to lead to a win. The result was technically close, but AlphaGo appeared to be in control from a relatively early stage in the game.
“I think it was a really wonderful game,” DeepMind CEO and co-founder Demis Hassabis said at the post-game press conference. “Huge respect to Ke Jie for playing such a great game and pushing AlphaGo to its limits.”
Ke and AlphaGo are facing off as part of the Future of Go Summit being held by Google in Wuzhen, China, this week. The second game will be on Thursday (China time; Wednesday evening in the US) while the finale will be on Saturday. Friday will see AlphaGo further put to the test in two stipulation matches; one where it acts as a teammate to two Chinese pros playing each other, and another where it takes on five Chinese pros all at once.
Comments
a machine called
the master
great…
By cy.starkman on 05.23.17 4:05am
It’s called singularity and we are doomed, baby!
By corneliu dabija on 05.23.17 4:26am
I hear you, but setting aside the emotional weight of the thought, which does make us a bit sad, the whole AlphaGo is just an algorithm trained on a bunch of games played by us humans; there is nothing extraordinary in it by the measures of a human player.
It’s more a reflection of the players and of the simple-but-powerful logic of this game, built on a gigantic set of data and a polished "deep learning" algorithm from the ’70s, backed with nearly unlimited processing power.
By vilmosk on 05.23.17 4:24am
At first I wanted to agree with you. But on second thought, you are describing how we humans learn as well. Ke Jie did nothing more than play a huge number of games and get good at recognizing "micro" and "macro" Go patterns. Same as AlphaGo.
What would be truly impressive (and what others probably imply) is if based solely on the rules of the game (and no previous training) an AI could beat a top Go player.
By maaaaaattttt on 05.23.17 5:14am
well, the computer can only do this, I wouldn’t call it intelligent.
this is machine learning. super cool, but still totally dumb by human standards
By mkln on 05.23.17 8:59am
Other than the fact that this is what people say every time the boundaries of AI are pushed further ("But Deep Blue is not real AI because in the end it’s just a series of relatively simple algorithms we can understand", as if AI is supposed to be magical), your description of AlphaGo is inaccurate.
It only uses "a bunch of games played by us humans" to bootstrap its learning; it’s really in the reinforcement learning step that things get interesting, as it plays against different versions of itself and improves that way.
If AlphaGo simply learned from other players, it could never beat the best player in the world. Also, keep in mind that the state space of Go is humongous, so this AI is also generalizing its knowledge. The fact that it mostly learns by playing against itself is why it makes moves that Go experts at first considered foolish (counter to any known strategy) and that ended up being described as creative and innovative, and even changed how other players see the game.
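The self-play loop described in this comment can be sketched with a toy example. This is not AlphaGo’s actual architecture (which combines deep neural networks with Monte Carlo tree search); it is a minimal, hypothetical tabular version on the far simpler game of Nim (remove 1–3 stones per turn; whoever takes the last stone wins), where a single value table plays both sides and improves purely from games against itself:

```python
import random

ACTIONS = (1, 2, 3)   # each turn a player removes 1-3 stones
START_PILE = 10       # whoever takes the last stone wins

def legal_actions(pile):
    return [a for a in ACTIONS if a <= pile]

def best_action(q, pile):
    """Greedy move for whoever is to act at this pile size."""
    return max(legal_actions(pile), key=lambda a: q.get((pile, a), 0.0))

def train_self_play(episodes=50000, alpha=0.1, epsilon=0.2, seed=0):
    """Learn Nim purely by self-play: one value table plays both sides."""
    rng = random.Random(seed)
    q = {}  # (pile, action) -> estimated value for the player who moves
    for _ in range(episodes):
        pile, history = START_PILE, []
        while pile > 0:
            if rng.random() < epsilon:
                a = rng.choice(legal_actions(pile))   # explore a random move
            else:
                a = best_action(q, pile)              # exploit what it knows
            history.append((pile, a))
            pile -= a
        # The last mover took the final stone and wins; walking backwards
        # through the game, the sign of the credit alternates because the
        # two "players" are the same table seeing alternate turns.
        ret = 1.0
        for move in reversed(history):
            q[move] = q.get(move, 0.0) + alpha * (ret - q.get(move, 0.0))
            ret = -ret
    return q

q = train_self_play()
# Nim's known optimal strategy is to leave the opponent a multiple of 4.
print(best_action(q, 5), best_action(q, 6), best_action(q, 7))
```

With no human games as input, the table rediscovers Nim’s textbook strategy (leave a multiple of 4 stones) from self-play alone, which is the point the comment makes about moves humans hadn’t considered.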
By kachkach on 05.23.17 12:18pm
That last point especially.
Humans have been playing this for hundreds of years, and this program through playing itself developed a strategy that the masters of the game hadn’t thought of, and has since influenced human strategy. That’s just completely on another level.
By Di Vergent on 05.23.17 12:27pm
This part is wrong… AlphaGo trained by playing itself A LOT, using reinforcement learning.
That’s a lot harder than training the way humans normally do, by playing against different players.
By armrek on 05.23.17 12:37pm
Well, it’s actually simpler: it could uncover moves we wouldn’t think of, because they don’t make sense to us at first, until the opponent gets beaten.
By vilmosk on 05.24.17 2:17pm
That’s actually what it did; they are called "The Master" moves. Ke Jie tried one of those moves in the 2nd game.
By armrek on 05.25.17 1:31pm
"…nearly unlimited processing power" Demis, at the end of the match, said it was run on a single server with a Google TPU, and required 1/10th the compute power of the version that beat Lee Sedol.
By Javaone on 05.23.17 1:33pm
But was it taught on a single server? Running an algorithm is nothing; the learning is what takes a ton of power.
By vilmosk on 05.24.17 2:18pm
One of the amazing things is that the AI read human play data, played against itself, and then used moves to beat human players that humans didn’t believe would work. To the extent that in the previous match against Lee Sedol, when AlphaGo made moves that would have been laughed off as beginner’s mistakes had a human played them, professional players weren’t really sure whether they were actually mistakes, or whether there was some deeper meaning to those seemingly problematic moves.
p.s. In yesterday’s press conference, a reporter tried to ask whether AlphaGo could work without human data as input. Unfortunately it was mistranslated into whether AlphaGo could work without humans.
By c933103 on 05.24.17 3:47am
The human mind is still superior in many aspects of playing this game! The fine motor control of manipulating stones onto a board, for example. No one taught the human how to pick stuff up. AlphaGo hasn’t won at a game until it can sit down by itself and play someone.
By Rubber Duck on 05.23.17 4:37am
I think this is something a lot of us get wrong about AI: it is not an entity.
By vilmosk on 05.23.17 4:42am
You don’t seem to understand the real purpose of AI then
By hsjhdjksahjkdhasjkdhsajkhdjksa on 05.23.17 4:55am
It’s artificial intelligence; the ‘host’ that contains the intelligence doesn’t really matter. By your logic, Stephen Hawking was incapable of putting pen to paper, so he was less ‘superior’ than a less intelligent person because he didn’t have the fine motor skills to do so.
By FallenRider on 05.23.17 6:42am
https://www.theverge.com/2017/5/16/15648158/openai-elon-musk-robotics-ai-one-shot-imitation-learning
to add to your point, no one really taught this robot how to pick up. it would be twaddle to link this to Alpha Go
By cy.starkman on 05.23.17 7:18am
He does have a point, though. The human mind evolved to deal with the real world, which is constrained by the rules of physics, whereas highly abstract games like chess and Go can be represented efficiently in computer memory. Ke and Lee, despite being the top human pros, can play only a tiny fraction of the games AlphaGo trained on in their entire lives.
Or, put another way: what if AlphaGo could only train against itself by playing out each game on a physical board, like humans do? It would take millions and millions of years for it to reach the level of human pros.
By sadboyzz on 05.23.17 10:43am
But then again, that means AI can practice in simulations a billion times and become better at any game than we can, making AI superior. Imagine the applications of this in the real world.
By Stone Cold Dan Quinn on 05.23.17 3:54pm
Superior in decision-making given a static set of parameters and expected outputs.
By vilmosk on 05.24.17 2:20pm
For now, that’ll change.
By Stone Cold Dan Quinn on 05.26.17 8:51am
Humans also imagine things. We do simulations in our minds. During a Go game, players can’t actually "try" moves, they have to imagine the moves before playing. Much of the learning occurs without moving stones.
In that respect AlphaGo is superior to the human mind, by far.
Misplaced pride is a typical reaction to improvements in AI, looking hard for reasons to not classify the AI as "better than humans" even when it obviously is.
By wagaf on 05.24.17 11:38am