After AlphaGo, what's next for AI?

Deep learning will help us do more than play games


AlphaGo’s victories against legendary Go player Lee Se-dol over the last few days mark a major milestone in AI research. The complex Chinese board game had long been considered impossible for computers to crack, but DeepMind used machine learning and neural networks to give its AlphaGo AI the ability to evaluate and execute strategy at a world-class level.
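DeepMind's published approach pairs a "policy network" that proposes promising moves with a "value network" that judges the resulting positions, tied together by search. Here's a minimal sketch of that division of labor (placeholder functions rather than DeepMind's code, with a single step of lookahead standing in for AlphaGo's full Monte Carlo tree search):

```python
# Illustrative sketch of AlphaGo's core idea, not DeepMind's code.
# A policy network proposes promising moves, a value network scores the
# resulting positions, and search combines the two. The real system uses
# deep convolutional nets and full Monte Carlo tree search; placeholder
# functions and one step of lookahead stand in for both here.

import random

def legal_moves(board):
    return [i for i, point in enumerate(board) if point is None]

def policy_network(board):
    """Probability per legal move (placeholder: a uniform prior)."""
    moves = legal_moves(board)
    return {m: 1.0 / len(moves) for m in moves}

def value_network(board):
    """Estimated chance of winning from this position (placeholder)."""
    return random.random()

def play(board, move, player):
    child = list(board)
    child[move] = player
    return child

def select_move(board, player, top_k=5):
    """Expand the policy net's top suggestions, keep the move whose
    resulting position the value net rates highest."""
    priors = policy_network(board)
    candidates = sorted(priors, key=priors.get, reverse=True)[:top_k]
    return max(candidates, key=lambda m: value_network(play(board, m, player)))

board = [None] * (9 * 9)  # a 9x9 board keeps the sketch small
print(select_move(board, "black"))
```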

But you don’t put some of the most intelligent people in the world to work on artificial intelligence just to beat board games. DeepMind’s work has major implications for the field of AI, and the deep-learning technology it uses has the potential to revolutionize everything from the way you use your phone to the way you drive your car — or the way your car drives you.

Photo by Sam Byford / The Verge

First of all, though, there might still be things left to achieve with Go. Ke Jie, an 18-year-old Go virtuoso from China ranked #1 in the world, seemed cautiously optimistic about his own chances following Lee’s first defeat last week, saying "it's 60 percent in favor of me." And many Go players have said they want to learn as much about AlphaGo as possible — after all, it’s only ever played a handful of games in public, demonstrating unorthodox, crushing tactics. It seems likely that AlphaGo will eventually be released to the public, and don’t be surprised to see a match against Ke at some point; Lee Se-dol was chosen for his iconic stature and long career, but Ke is considered the stronger player today. DeepMind founder Demis Hassabis (above) has also said the company plans to test a version without any human training at all — just the program teaching itself.

But either way, the question of whether a computer can play world-class Go has now been unambiguously settled. And as far as perfect information games — where all the data is out there on the board for all to see — are concerned, there isn’t really anything left to achieve. There are imperfect information games, like multiplayer no-limit poker, that AI still has trouble with, but the next frontier is likely to be video games — I’ve heard Blizzard’s real-time strategy classic StarCraft brought up several times in the past weeks. Given StarCraft’s enduring popularity and stadium-filling status in South Korea, it’s not hard to imagine a high-profile future showdown that really puts the e- in e-sports.
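The distinction is easy to make concrete: a perfect information evaluator sees the whole state, while an imperfect information evaluator has to average over everything hidden from it. A toy sketch, with placeholder scoring functions:

```python
# Toy illustration of perfect vs. imperfect information. In Go the whole
# state is on the board, so one visible position yields one evaluation.
# In poker part of the state is hidden, so an evaluator must average over
# every hand the opponent might hold. Scoring functions are placeholders.

import itertools, random

def score_position(board):
    return random.random()  # placeholder evaluator

def score_matchup(my_hand, opp_hand):
    return random.random()  # placeholder evaluator

def evaluate_go(board):
    """Perfect information: evaluate the single visible state."""
    return score_position(board)

def evaluate_poker(my_hand, unseen_cards):
    """Imperfect information: average over possible hidden opponent hands."""
    hands = list(itertools.combinations(unseen_cards, 2))
    return sum(score_matchup(my_hand, opp) for opp in hands) / len(hands)

print(evaluate_poker(my_hand=("As", "Kd"), unseen_cards=range(50)))
```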


Hassabis seemed open to the StarCraft idea when I asked him about it last week — read the full interview here — but he also said that DeepMind is only interested in games that lie on the main track of its research. "It’s to the extent that they’re useful as a testbed, a platform for trying to write our algorithmic ideas and testing out how far they scale and how well they do, and it’s just a very efficient way of doing that. Ultimately we want to apply this to big real-world problems."

These problems could be anything where human decision-making could benefit from faster learning and more efficient data processing. Machine-learning techniques and deep neural networks are already in wide use at Google, for example, in its search and self-driving car programs. The lessons of AlphaGo could yield incremental improvements in any of these areas; you’ll probably see the benefits without even realizing it.

Photo by Sam Byford / The Verge

Jeff Dean (above), a computer scientist whom many at Google describe as the smartest person at the company, heads the Google Brain deep-learning research project and has spearheaded the implementation of the concept across many of the company’s products. A new deep-learning neural network called RankBrain is now the third biggest signal for ranking results in Google search — Dean won’t reveal the first two — and the company credits it with the biggest improvement to search ranking in over two years. Machine learning is also used in more obvious, user-facing ways for things like search in Google Photos and automatically generated replies in Inbox.
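Google hasn't disclosed how RankBrain's output is weighed against the company's other signals, but the general pattern, a learned score blended with hand-built ones, might look something like this hypothetical sketch:

```python
# Hypothetical illustration of a learned score acting as one ranking
# signal among many. Google has not published RankBrain's internals;
# the signals, weights, and scoring below are invented for the example.

def learned_relevance(query, doc):
    """Stand-in for a neural model scoring query/document relevance (0..1)."""
    overlap = len(set(query.split()) & set(doc["text"].split()))
    return min(1.0, overlap / max(1, len(query.split())))

def rank(query, docs):
    def score(doc):
        return (0.5 * learned_relevance(query, doc)   # the "RankBrain-like" signal
                + 0.3 * doc["link_score"]             # e.g. a PageRank-style signal
                + 0.2 * doc["freshness"])             # e.g. recency
    return sorted(docs, key=score, reverse=True)

docs = [
    {"text": "go board game strategy", "link_score": 0.9, "freshness": 0.2},
    {"text": "alphago beats lee sedol at go", "link_score": 0.6, "freshness": 0.9},
]
print(rank("alphago go match", docs)[0]["text"])
```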

Google is, of course, a company that makes the vast majority of its money from its ability to collect data and sell advertising against it, and it’s easy to see how technology to make that data collection more efficient would be appealing. "I don’t think it’ll be one or the other," said Dean, when I asked whether machine learning is more likely to bolster Google’s core business model or help it break into new areas. "We’ll use these techniques to really improve our core products, and in a lot of cases that higher level of understanding you can get about data will really help us build new features. But also it’s going to enable us to build new and interesting products that wouldn’t have been possible before, possibly in areas we’re not really working in today. So it’s going to be both — I don’t know which is going to be the more important of the two, but I think it’ll be roughly equal."

"Think about all the things Google does that are big," Alphabet chairman and former Google CEO Eric Schmidt (below) told me when I asked how machine learning will boost the company’s business. "We have lots of searches, lots of ads, lots of customers, lots of data centers, we have lots of people using Google compute, we have lots of people using our security software, over and over again. Whenever you have a large number of people using something, we can probably use machine intelligence to make it more efficient by watching and training against the signal."

"I don’t think there’s any space where we wouldn’t be using this," Schmidt continued, listing the company’s traditional search and ad business, self-driving cars, and a healthcare division called Verily. "To me this technology is something that we’ll be using in every one of the Alphabet companies."

As a company, DeepMind is kept largely separate from the rest of Google, though it does communicate with Brain. "We have a pretty free rein over what we want to do to optimize the research progress," Hassabis told me. "Of course, we actually work on a lot of internal Google product things, but they’re all quite early stage, so they’re not ready to be talked about." Hassabis says that Brain’s projects work on a shorter research cycle than DeepMind’s, and that coupled with its Mountain View location means it tends to have more of a product focus.

Photo by Sam Byford / The Verge

So what is DeepMind going to do next? Well, it’s important to note that AlphaGo isn’t its only or even its biggest project — only 15 employees out of hundreds are working directly on it. DeepMind has identified smartphone assistants, healthcare, and robotics as its ultimate targets, and while AlphaGo is very much just a system for playing Go, Hassabis says its principles are applicable to real-world problems.

Hassabis thinks we’ll start seeing smartphone assistants bolstered by advanced machine learning within the next several years. "I mean, it’ll be quite subtle to begin with, certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities."

"I just think we would like these smartphone assistant things to actually be smart and contextual and have a deeper understanding of what you’re trying to do," says Hassabis, who believes that systems like this need to be grounded in learning techniques like AlphaGo rather than following pre-programmed conversation paths. "At the moment most of these systems are extremely brittle — once you go off the templates that have been pre-programmed then they’re pretty useless. So it’s about making that actually adaptable and flexible and more robust."

Healthcare is a little further away. DeepMind has announced a partnership with the UK’s National Health Service, but so far the only details relate to a basic data-tracking app — Hassabis says that the first goal is to get the NHS used to using modern mobile software at all before including more advanced tools.

"We're training Watson now to see."

IBM has already made moves into the space with its Watson "cognitive learning" platform, which uses somewhat different techniques from DeepMind's. The system started out as a single Jeopardy!-playing supercomputer, but has since migrated to the cloud and uses tools like predictive analytics and personality insights. So far, the system is being used in partnership with Memorial Sloan Kettering to support physicians diagnosing breast, lung, and colorectal cancers at two hospitals in Thailand and India; while it won’t diagnose disease itself, it can flag things that a physician should take a closer look at and suggest possible treatments.

"We're training Watson now to see," says Kathy McGroddy, VP of Watson Health. "Watson has been learning image analysis for many years, and we now have image data from our acquisition of Merge Healthcare to speed up these capabilities. So Watson will be able to not only identify anomalies in medical images, but will also understand what they mean in the context of broader information like data from a person's Fitbit."

Photo: IBM

The final, and probably furthest-off, major use case for AI that people are talking about today is robotics. Google has been active in the space with its acquisition of companies like Boston Dynamics, along with its self-driving car project. "I think robotics is a really good example [of what’s possible]," says Google’s Jeff Dean. "We bought a bunch of robotics companies but the ability to take deep learning and apply that to robotics, especially driven by vision, is going to be a pretty interesting and important direction for the next few years."
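The "driven by vision" part usually means a learned perception model feeding a controller. A minimal hypothetical loop (the detection and robot APIs are invented placeholders):

```python
# Illustrative perception-to-action loop of the kind Dean alludes to:
# a learned vision model turns camera pixels into a grasp target, and a
# controller acts on it. The model and robot APIs here are placeholders.

def vision_model(image):
    """Stand-in for a deep net that localizes an object in the frame."""
    return {"object": "cup", "x": 0.31, "y": 0.58, "confidence": 0.87}

def control_step(image, robot):
    detection = vision_model(image)
    if detection["confidence"] > 0.8:
        robot.move_gripper_to(detection["x"], detection["y"])
        robot.grasp()

class FakeRobot:  # placeholder so the sketch runs end to end
    def move_gripper_to(self, x, y):
        print(f"moving gripper to ({x}, {y})")
    def grasp(self):
        print("grasping")

control_step(image=None, robot=FakeRobot())
```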

Hassabis says he hasn’t thought much about robotics yet. "Obviously the self-driving cars are kind of robots but they’re mostly narrow AI currently, although they use aspects of learning AI for the computer vision — Tesla uses pretty much standard off-the-shelf computer vision technology which is based on deep learning." Hassabis cites household cleaning or elderly care as potential avenues for learning AI-powered robots, but it’s clear that the concept is some way off.

For now, though, AlphaGo’s success has captivated the world and reignited mainstream interest in AI, even if its specific use cases are limited. The very idea of computers outwitting humans at tasks previously thought to require unquantifiable intuition has proven seductive.

And disturbing, to some. While covering AlphaGo this past week I’ve seen reaction ranging from mild disappointment to outright fear that computers have demonstrated superiority in yet another field. This is understandable, but I think it misses what actually happened this week. Real, live humans built AlphaGo, solving one of the oldest, biggest challenges in the field that they’ve devoted their careers to. The implications of what DeepMind has achieved are profound, and could have a hugely positive impact on the way we live our lives in the future.

As Eric Schmidt said at the match’s opening ceremony, "The winner here, no matter what happens, is humanity."
