
Facebook’s head of AI wants us to stop using the Terminator to talk about AI


Yann LeCun chats about super-intelligent AI and the future of virtual assistants


Yann LeCun. Image: Facebook

Yann LeCun is one of AI’s most accomplished minds, so when he says that even recent advances in the field aren’t taking us closer to super-intelligent machines, you need to pay attention.

LeCun has been working in AI for decades, and is one of the co-creators of convolutional neural networks — a type of program that’s proved particularly adept at analyzing visual data, and powers everything from self-driving cars to facial recognition. Now, as head of Facebook’s AI research facility FAIR, he helps AI make the journey from the lab to the real world. His team’s software automatically captions photos for blind users and performs 4.5 billion AI-powered translations a day.

“We had a bigger impact on products than Mark Zuckerberg expected,” LeCun told The Verge over Skype recently. But, as he explained during the interview, it’s clear to him that AI still has a long, long way to go before it approaches anything near the intelligence of a baby, or even an animal. Oh, and if you don’t mind, he’d really like it if we all stopped using Terminator pictures on AI articles.

The interview below has been lightly edited for clarity.

One of the biggest recent stories about Facebook’s AI work was about your so-called “AI robots” getting “shut down after they invent their own language.” There was a lot of coverage that badly misrepresented the research, but how do you and your colleagues react to those sorts of stories?

So the first time you see this, there is a smile and a laugh. And then it depends on how much pick-up there is. With this particular story there was one article and then it blew up, and then it’s like hair-pulling. “They’re getting it completely wrong!”

It’s instructive for us, because it gives us an idea of how the media can operate, and we have several ways of reacting to this. I made a quick post on Facebook saying this was ridiculous, trying to take the humorous side until we could be more serious. We talked to a bunch of journalists who wanted to get the real story, and they wrote other stories about how this was a complete misrepresentation.

Over the past few years, do you think we’re seeing more or less of this sort of coverage?

Less, in the sense that the people in the media and the public seem to be a little more aware of what the story is. It used to be that you could not see an article in the press [about AI] without the picture being Terminator. It was always Terminator, 100 percent. And you see less of that now, and that’s a good thing [...] Occasionally, though, you see certain press coverage that raises an issue in a way that reveals a complete misunderstanding of what goes on.

Facebook’s Prineville Data Center, one of the many locations from which it serves up AI-powered features like image captioning and translation. Photo: Vjeran Pavic

When you see that sort of coverage, what’s the message you want to get across to people? What do you say to them?

I keep repeating this whenever I talk to the public: we’re very far from building truly intelligent machines. All you’re seeing now — all these feats of AI like self-driving cars, interpreting medical images, beating the world champion at Go and so on — these are very narrow intelligences, and they’re really trained for a particular purpose. They’re situations where we can collect a lot of data. 

So for example, and I don’t want to minimize at all the engineering and research work done on AlphaGo by our friends at DeepMind, but when [people interpret the development of AlphaGo] as significant progress towards general intelligence, it’s wrong. It just isn’t. It’s not because there’s a machine that can beat people at Go that there’ll be intelligent robots running around the streets. It doesn’t even help with that problem; it’s completely separate. Others may claim otherwise, but that’s my personal opinion.

“There’s no danger in the immediate or even medium term.”

We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can. Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat. This makes a lot of questions people are asking themselves premature. That’s not to say we shouldn’t think about them, but there’s no danger in the immediate or even medium term. There are real dangers in the department of AI, real risks, but they’re not Terminator scenarios.

One thing DeepMind does say about its work with AlphaGo is that the algorithms it’s creating will be useful for scientific research, for things like protein folding and drug research. How easy do you think it’s going to be to apply this sort of research elsewhere in the world?

So, AlphaGo is using reinforcement learning. And reinforcement learning works for games; it works for situations where you have a small number of discrete actions, but it requires many, many, many trials to learn anything complex. AlphaGo Zero [the latest version of AlphaGo] has played millions of games over the course of a few days or weeks, which is possibly more than humanity has played at a master level since Go was invented thousands of years ago. This is possible because Go is a very simple environment and you can simulate it at thousands of frames per second on multiple computers. [...] But this doesn’t work in the real world because you cannot run the real world faster than real time.

The only way to get out of this is to have machines that can build, through learning, their own internal models of the world, so they can simulate the world faster than real time. The crucial piece of science and technology we don’t have is how we get machines to build models of the world.

The example I use is when a person learns to drive, they have a model of the world that lets them realize that if they get off the road or run into a tree, something bad is going to happen, and it’s not a good idea. We have a good enough model of the whole system that even when we start driving, we know we need to keep the car on the street, and not run off a cliff or into a tree. But if you use a pure reinforcement learning technique, and train a system to drive a car with a simulator, it’s going to have to crash into a tree 40,000 times before it figures out it’s a bad idea. So claiming that somehow reinforcement learning alone is going to be the key to intelligence is wrong.

Do you think then, that AI is still missing some basic tools it needs to get beyond its current limitations? [AI pioneer] Geoffrey Hinton was quoted talking about this recently, saying that the field relies a bit too much on current methods and needs to “throw it all away and start again.”

I think what he said was a little over-interpreted, [but] I totally agree [we need more basic AI research]. For example, one of the models that [Hinton] likes is one he came up with in 1985 called Boltzmann machines [...] And to him that’s a beautiful algorithm, but in practice it doesn’t work very well. What we’d like to find is something that has the essential beauty and simplicity of Boltzmann machines, but also the efficiency of backpropagation [a calculation that’s used to optimize AI systems]. And that’s what many of us — Yoshua [Bengio], Geoff and I — have been after since we restarted work on deep learning in the early 2000s. What was a little surprising to us is that, in the end, what worked in practice was backprop with very deep networks.

Facebook is putting a lot of effort into researching virtual assistants, but it’s far behind competitors like Amazon’s Alexa. Image: Facebook

So given that the very big changes in AI are much further down the line, what do you think is going to be most useful to consumers in the short term? What is Facebook planning in this regard?

I think virtual assistants really are going to be the big thing. Current assistants are entirely scripted, with a tree of possible things they can tell you. That makes the creation of bots really tedious, expensive, and brittle, though they work in certain situations like customer care. The next step will be systems that have a little more learning in them, and that’s one of the things we’re working on at Facebook: a machine that reads a long text and then answers any questions related to it. That would be a really useful function.

The step beyond that is common sense, when machines have the same background knowledge as a person. But we’re not going to get that unless we can find some way of getting machines to learn how the world works by observation. You know, just watching videos or just reading books. And that’s the critical scientific and technological challenge over the next few years. I call it predictive learning; some people call it unsupervised learning.

There’s going to be continuous progress [on these tasks] over the next few years as virtual assistants become more and more useful and less and less frustrating to talk to. They’re going to have more background knowledge, and do more things for people that are not entirely scripted by the designers. And this is something Facebook is very interested in.