Deep Blue developer speaks on how to beat Go and crack chess

IBM's supercomputer beat Garry Kasparov nearly 20 years ago


DeepMind’s exploits in South Korea have captivated the world this week, as its AlphaGo program has defeated Go champion Lee Se-dol three times to secure overall victory in a five-game series — something AI experts had previously predicted was decades away due to the ancient Chinese board game’s subtlety and intricacy.

Nearly two decades ago, IBM made headlines in much the same way when its Deep Blue computer defeated chess grandmaster Garry Kasparov (above). Murray Campbell was one of the key figures in Deep Blue’s development, and is still at IBM today, working as a senior manager in the company’s Cognitive Computing division, which handles the Watson AI platform.

I spoke with Campbell ahead of this week’s Go showdown to get his thoughts on how Deep Blue won and how things are different for DeepMind.

Deep Blue’s famous victory was nearly 20 years ago — how did you approach the challenge of solving chess and beating Kasparov back then?

We had actually started work on a chess program while we were graduate students at Carnegie Mellon University, and IBM hired the three of us to come and build the next chess machine, which became known as Deep Blue. Our approach was shaped by the realisation that a pure brute-force approach wouldn't be good enough to beat the world champion, but that, on the other hand, a lot of computing power did make a difference; there was a documented relationship between the strength of a program and how fast it could calculate. So we combined some AI-type advances in algorithms, in search and evaluation, with a large supercomputer-level machine to ultimately produce world champion-level chess. We actually lost in 1996 but came back the next year with a new and improved system and won in 1997.

On a human level, what was your motivation for getting into this in the first place? Was it personal interest in chess, or was it more of an abstract challenge for computing?

Well, I have to say it was both. I certainly did have an interest in the area of chess; I was a chess player before I was a computer scientist, and at one point I was champion of my province, Alberta, in Canada. But I could recognize that people who were really good at chess had something that I just didn't have. That got me interested in what it would take to create a computer that could play at a high level. I'd always maintained an interest throughout my education, so when I joined IBM I saw this as an opportunity to finish this off and show that it could be done.

But separately from a personal interest, it's something that had been set out as a challenge for computer science right from the earliest days. There's a famous 1949 paper by Claude Shannon, the world-renowned mathematician, in which he set out what it would take to create a chess computer and framed it as a grand challenge-class problem.

To what extent was it necessary to be proficient in chess yourself — was it a question of putting the rules in and working from there, or did the work need to be informed by your own experience?

I think it was important to have some knowledge of chess. It wasn’t important for us in the early stages of developing this to be really strong chess players, and we certainly weren’t, but when it got down to the final stages of preparation there are lots of little details about how the game is played and standard grandmaster practice, so we found it was helpful to bring in one particular grandmaster, Joel Benjamin, to consult with us for a period of time. And right toward the end we brought in additional grandmasters as sparring partners to assess how well our system was doing.

Was the goal to emulate a human play style, or to develop a system that could win at all costs?

I think we were not trying to emulate human style at all — only to the extent that humans played well in most positions, and we wanted to play well in most positions. The human style is fairly well studied. It's not completely understood, but there have been studies by psychologists going back decades, and the consensus is that strong chess players or grandmasters will look at just a small number of moves and positions as they consider what they're going to do. Sometimes they have to calculate enormously deeply to decide what to do and sometimes they won't, but they have very sophisticated position evaluation and searching mechanisms for deciding which options to explore. So it's exceedingly difficult to emulate that style of play.

Initial work in artificial intelligence did try to create computers that played in a more human style, and they were easily beaten by computers that used a more "computer" style, you could say, looking at as many possible positions as they could with fairly shallow evaluation. Just by the sheer comprehensiveness of the search they reached a respectable level of play. And then we realised that that by itself wasn't enough: you had to actually emulate certain aspects of human play. Humans are very good at following the critical lines of play very deeply, and we needed to make our system able to do that too. That's one of the things that was important for Deep Blue's success.
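To make that contrast concrete, here is a minimal sketch (not Deep Blue's actual code) of the "computer style" Campbell describes: a negamax alpha-beta search that looks at as many positions as it can, with a shallow evaluation at the leaves and a simple check extension that follows forcing, critical lines deeper. The Position interface (legal_moves, make, in_check, evaluate) is hypothetical.

```python
INF = float("inf")

def alpha_beta(pos, depth, alpha=-INF, beta=INF):
    """Negamax alpha-beta search; pos.evaluate() scores from the side to move."""
    if depth <= 0:
        return pos.evaluate()      # shallow static evaluation at the leaf
    moves = pos.legal_moves()
    if not moves:                  # no legal moves: checkmate or stalemate
        return -INF if pos.in_check() else 0
    best = -INF
    for move in moves:
        child = pos.make(move)     # position after the move, other side to play
        # Check extension: follow forcing ("critical") lines one ply deeper.
        # Real engines cap such extensions to keep the search bounded.
        ext = 1 if child.in_check() else 0
        score = -alpha_beta(child, depth - 1 + ext, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, best)
        if alpha >= beta:          # the opponent can refute this line: prune
            break
    return best
```

The pruning is what makes the comprehensiveness affordable: whole subtrees are skipped once a refutation is found, which is part of why extra computing power translated so directly into playing strength.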


By 1997, would most players have looked at how the Kasparov matches went down and considered Deep Blue’s style to be very unusual?

Yeah, there's this expression that's been used for quite some time now: you see a move that a computer plays that's very unexpected or unintuitive, and people call it a "computer move." People think in a certain way, and certain kinds of moves just don't suggest themselves to even very strong human players. That's why computers can beat people even though, in some sense, they don't evaluate positions as accurately: they can see moves that people don't tend to see because they're so unusual. I would say, however, that it's sort of interesting that the current generation of young grandmasters seems to be more capable of playing these computer moves than the older generation. I think they've grown up playing against computers and some of it's rubbed off!

So in the end, the superiority of computers is making humans better as well.

I think that’s absolutely true, yes.

"It’s very difficult to evaluate a Go position just by looking at it."

How different is Go from chess in terms of the computer science required?

I don't play Go; I've only played a few games in my life, but I certainly know a fair amount about it. Both games are immensely huge: once you get past 10^100, 10^120, 10^170 [in number of possible positions], they're all just immensely huge, very complex games. But Go has a characteristic that wasn't true of chess: it's very difficult to evaluate a Go position just by looking at it. A medium-good chess player like myself can sit down and in a few hours probably write an evaluation function that is pretty good at evaluating chess positions — nowhere near grandmaster level, but good enough that when you combine it with the search it produces very high quality play.

But Go is a game that builds up over time; it builds up structures that interact in complex ways. Chess is a game where the pieces move around more. There's not as much static structure, and you can come up with a pretty good estimation of who's winning just by counting up the pieces and seeing who has more. Now obviously it's much more complex than that, but it's a pretty good rule of thumb to use. In Go that's not at all true: you can't just count up the pieces, because in general each side has approximately the same number. It's much more difficult to come up with an evaluation, and so I think one of the advances [DeepMind] made is coming up with a better way of evaluating positions using a machine learning approach.
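Campbell's rule of thumb is simple enough to write down. Below is a hypothetical material-count evaluation of the sort a medium-good player could produce, assuming a board given as a collection of piece letters (uppercase for White, lowercase for Black); the pawn-unit values are the conventional ones, not anything taken from Deep Blue.

```python
# Conventional pawn-unit piece values; the king carries no material value.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate_material(board):
    """Return the material balance (positive means White is ahead) for a
    board given as an iterable of piece letters."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# Example: White is up a rook.
print(evaluate_material(["K", "R", "P", "P", "k", "p", "p"]))  # -> 5
```

As Campbell notes, the same trick is useless in Go, where both sides generally hold about the same number of stones; that is the gap a learned evaluation like AlphaGo's is meant to close.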


What do you think of AlphaGo — had these techniques been available at the time, would they have worked for Deep Blue as well?

That’s a really, really good question, and I have talked about that with DeepMind people. First of all I think it’s very impressive; they’ve clearly advanced the state of the art and if they can show that it’s a general mechanism that works not only for Go but many other games as well I’ll be even more impressed. But it’s still just on the face of it quite impressive to produce an advance of the size that they did.

Now the question of whether this approach would work well on chess, I suspect that it could perhaps produce a program that is superior to all human grandmasters, but I don’t think it would be state of the art. I think that current chess programs are incredibly strong and very much superhuman, and I don’t think this approach as it stands would create a state-of-the-art chess program that’s better than all existing programs. And why I say that is that chess is a qualitatively different game on the search side — search is much more important in chess than it is in Go. There are certainly parts of Go that require very deep search but it’s more a game about intuition and evaluation of features and seeing how they interact.

So there's really no substitute for search. Modern programs reflect that: the best program I know of is one called Komodo, and it's incredibly efficient at searching through the many possible moves, and at searching incredibly deeply as well. Had the general mechanism created in AlphaGo been applied to chess, I just don't think it would be able to recreate that search; it would require a different breakthrough.

The thing with AlphaGo is that it plays itself and has a seemingly inexorable rise in its own power...

Yeah, I agree that it appears to be improving when playing against itself. I’m not sure that they’ve found the limit on that, and it may continue to improve over time. But I just have a feeling based on what I know about the two games that this approach applied directly to chess wouldn’t beat current state-of-the-art programs.

How important is processing power to either game at this point?

Well, processing power is less important in chess. The more processing you throw at it the better it gets, but your smartphone will probably beat just about anybody in the world. The algorithms have been refined so much that the programs are extremely efficient. The AlphaGo system uses a lot of processing power: the paper they published shows it getting better as you add more CPUs and GPUs, and my guess is that the system that plays next week will be rather large, with a lot of computing power. [NB: DeepMind announced this week that it's started to reach the point of diminishing returns in this regard, and that the system playing Lee Se-dol is more or less the same in power as the one that beat Fan Hui last year.]


Not to stick your neck out or anything, but do you have a prediction for the match?

From one data point it's hard for me to tell. They did beat the European champion [Fan Hui], but they lost some casual games to him. Even assuming they're somewhat better now, Lee Se-dol is clearly at least a couple of classes better than the European champion... If I have to stick my neck out? I'd say that the computer will win, but it will be close.

If AlphaGo does win, what does this mean for the field in general — is there a next obvious goal or milestone?

I think, from my point of view, it would suggest that research on games like chess and Go would start to wind down, because we would have a good mechanism for handling these games. That's partly because, even though they're immensely complicated in one sense, they're very simple in another: they're perfect-information games, they're zero-sum games, they're turn-taking games, and there's no element of chance. And that doesn't reflect problems in the real world. There are very few problems where all the things you need to know are right in front of you, and where, when you take an action or make a decision, the consequences of that decision are completely clear to you.

"Applying this to real-world problems is where the action will be."

So I think it will be interesting to see them apply their system to other games, and it will be interesting to see them apply it to Go without any human input — their system now uses a lot of human experience and human move decisions to get off the ground. Those will be interesting milestones to reach, but my feeling is that we're moving beyond board games. There are other interesting games out there, ones with hidden information and random chance in them, but then there are real-world problems that will have a lot of value if we can start applying the kinds of techniques they're developing — the kinds of techniques we're developing at IBM, for example in Watson. Applying those to real-world problems is where the action will be.

One other thing I would mention, just to continue this thought: as we move to these real-world applications, I don't think it's reasonable to believe that the systems will become superhuman really quickly. I think they will be very good at certain things and rather poor at others, and I have a feeling there will be a complementarity there: things people are good at and machines are poor at, and vice versa. So the two, working together, would be able to complement each other.

Read more: DeepMind founder Demis Hassabis on how AI will shape the future

What are the most immediate real-world uses that you would highlight?

I think some of the most interesting ones are in healthcare. This is a space where I don't expect computers to come in and start making decisions, but I think they'll make human decision-makers much better by analyzing the data: running it through complicated algorithms to see patterns and provide insights to human decision-makers like doctors and radiologists, et cetera, allowing them to be more efficient at making decisions and also more accurate. They'll have more information at their fingertips and more recommendations on different courses of action and their possible benefits. So I think it'll make human decision-makers better.

The Verge is in Seoul for the entire Google DeepMind Challenge Match series — follow our coverage at this dedicated hub, and watch the matches live on YouTube.
