DeepMind’s founder says to build better computer brains, we need to look at our own

What AI can learn from neuroscience, and neuroscience from AI

Illustration by James Bareham / The Verge

After decades in the wilderness, AI has swaggered back onto center stage. Cheap computer power and massive datasets have given researchers alchemical powers to turn algorithms into gold, and the deep pockets (and marketing prowess) of Silicon Valley’s tech giants haven’t hurt either.

But despite warnings from some that the creation of super-intelligent AI is just around the corner, those working in the computational coal mines are more realistic. They point out that contemporary AI programs are extremely narrow in their abilities, that they’re easily tricked, and that they simply don’t possess those hard-to-define but easy-to-spot skills we usually sum up as “common sense.” They are, in short, not that intelligent.

AI can learn from neuroscience, and neuroscience can learn from AI

The question is: how do we get to the next level? For Demis Hassabis, founder of Google’s AI powerhouse DeepMind, the answer lies within us. Literally. In a review published in the journal Neuron today, Hassabis and three co-authors argue that the field of AI needs to reconnect with the world of neuroscience, and that it’s only by finding out more about natural intelligence that we can truly understand (and create) the artificial kind.

The review takes a tour through the history and future of AI, and points out where collaboration with the field of neuroscience has led to new discoveries. Reconnecting the two disciplines will create a “virtuous cycle,” Hassabis and his co-authors write. AI researchers will be inspired by what they learn about natural intelligence, while the task of “distilling intelligence into an algorithmic construct [could] yield insights into some of the deepest and most enduring mysteries of the mind.” How’s that for win-win?

To find out more about what neuroscience and AI can learn from one another, we had a brief chat with Hassabis himself:

This interview has been lightly edited for clarity.

DeepMind founder Demis Hassabis, photographed in 2016.
Photo by Sam Byford / The Verge

You’ve talked in the past, Demis, about how one of the biggest aims of DeepMind is to create AI that can help further scientific discovery, and act as a tool for increasing human ingenuity. How will neuroscience help you reach this goal?

There are two ways really. One is to use neuroscience as a source of inspiration for algorithmic and architectural ideas. The human brain is the only existing proof we have that the sort of general intelligence we’re trying to build is even possible, so we think it’s worth putting the effort in to try and understand how it achieves these capabilities. Then we can see if there are ideas we can transfer over into machine learning and AI.

That’s why I studied neuroscience for my PhD — to look into the brain’s memory and imagination; understand which brain regions were involved, which mechanisms were involved; and then [use that to] help us think about how we might achieve these same functions in our AI systems.

The other thing we’d really like to understand is what intelligence is — including natural intelligence, our own minds. So I think there should be some flow back from AI algorithms that do interesting things, leading to ideas about what we should look for in the brain itself, and how. And we can use these AI systems as models for what’s going on in the brain.

One of the things you say in the paper is that AI needs to understand the physical world like us — to be placed in a room and be able to “interpret and reason about scenes in a humanlike way.” Researchers often talk about this sort of “embodied cognition” and say we won’t be able to create general AI without it. Is that something you agree with?

Yeah, so, one of our big founding principles was that embodied cognition is key. It’s the idea that a system needs to build its own knowledge from first principles — from its sensory and motor streams — and then create abstract knowledge from there. And this was one of the big problems with classical AI, what was called the “symbol grounding problem.” The idea is that logic systems are fine when they’re just dealing with logic, but at the end of the day, when those logic systems interact with the real world, what do their symbols really refer to? This was one of the big stumbling blocks for classical AI, or what’s sometimes called Good Old-Fashioned AI.

[At DeepMind] we’ve always been interested in grounded intelligence, and that’s what we do with our AI systems, the ones that work in video games and virtual environments. They don’t use any of the hidden data in the game, for example. They just use the raw pixels on the screen, as if they were physically embodied in that virtual world.
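To make that pixels-only setup concrete, here is a minimal sketch of an agent whose entire view of the world is the screen image. The network shape, the 84×84 input resolution, and the names are illustrative assumptions for this article, not DeepMind’s actual architecture:

```python
import torch
import torch.nn as nn

class PixelPolicy(nn.Module):
    """Toy policy that maps raw screen pixels straight to action scores.
    The agent gets no hidden game state -- only the image itself."""

    def __init__(self, num_actions: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size with a dummy 84x84 frame
        # (84x84 is an assumed, conventional input resolution).
        with torch.no_grad():
            n_features = self.encoder(torch.zeros(1, 3, 84, 84)).shape[1]
        self.head = nn.Linear(n_features, num_actions)

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(pixels / 255.0))

# "Perception" here is nothing but the raw frame.
policy = PixelPolicy(num_actions=4)
frame = torch.randint(0, 256, (1, 3, 84, 84)).float()  # stand-in screen grab
action = policy(frame).argmax(dim=-1)
```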

DeepMind’s AlphaGo program famously beat champion Lee Se-dol at the board game of Go in 2016.
Photo: Getty Images

A recurring theme in the paper is how neuroscience can help us move beyond the limits of contemporary AI, which is mainly about systems that handle one specific task, like identifying faces in photographs. There’s work being done by a few organizations like MIT and Google on how to combine different systems to create AI with more flexibility. Do you think we’ll still be using narrow forms of AI in the future, or will everything be done by more generalized systems?

It’s interesting, because the history of AI to date has been that specialized systems are obviously easier to write and create, and you can hone their performance [to complete] whatever specialized task you’re trying to solve. So the bar is quite high for a general system to beat specialized systems. For a lot of tasks, then, it’s going to be better to have a specialized AI system, where you really understand the domain and can codify it. There, specialized AI systems are going to be hard to beat.

But if you want to do things like make connections between different domains, or if you want new knowledge to be discovered (the sort of thing we like to do in science), then these pre-programmed, specialized systems are not going to be enough. They’re going to be limited to the knowledge that can be put into them, so it’s hard for those to really discover new things or innovate or create. So for any task that requires innovation or invention or some flexibility — I think a general system will be the only one able to do that.

One bit of brain functionality that you mention as key to improving AI is imagination, and the ability to plan what will happen in the future. Could you give an example of where neuroscience has helped AI researchers give computers these sorts of skills?

Yeah, so this happens even just with basic, high-level ideas. Let’s take memory first, and then imagination. With memory, you’ve got multiple memory systems in the brain. There’s your short-term working memory, which you can use to remember things like telephone numbers — it’s thought to hold seven units of information, plus or minus two. And then you’ve got your episodic memory, a longer-term store that records your experiences and replays them back while you’re sleeping, so you can learn from those experiences even while you’re asleep.
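That replay mechanism has a direct machine learning analogue: experience replay, where an agent stores past transitions and learns from randomly re-sampled batches of them offline. A minimal sketch in Python — the class name and capacity are illustrative, not DeepMind’s implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Episodic-memory-style store: keep past experiences and
    'replay' random batches of them for offline learning."""

    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)  # oldest memories fade out

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def replay(self, batch_size: int = 32):
        # Sampling at random breaks correlations between consecutive
        # experiences, loosely analogous to interleaved replay in sleep.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```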

Neuroscience has already inspired new memory systems for AI

So just that idea, of having different types of memory systems, [has been really useful in AI]. Traditionally, neural networks don’t really have much memory. They’re kind of in-the-moment. And trying to really push that far is what made us come up with the Neural Turing Machine, where we introduced this idea of having a big external memory connected to the neural network, which the neural network can access and use. That’s a neuroscience-inspired idea.
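The heart of that idea — a network querying a large external memory — can be sketched as a differentiable, content-based read. This is a simplified illustration only; the actual Neural Turing Machine also has write heads and location-based addressing:

```python
import torch
import torch.nn.functional as F

def content_read(memory: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
    """Soft, differentiable lookup into an external memory matrix.
    memory: (N, D) -- N slots of D-dimensional stored vectors
    key:    (D,)   -- what the controller network wants to recall
    """
    # Cosine similarity between the query key and every memory slot.
    scores = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)
    weights = torch.softmax(scores, dim=0)  # attention over slots
    return weights @ memory                 # weighted blend of memories

memory = torch.randn(128, 20)  # 128 slots, 20 dims each (illustrative sizes)
key = torch.randn(20)
recalled = content_read(memory, key)
```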

Then, if you look at things like imagination, it’s the idea that humans and some other animals rely on generative models of the world they’ve built up. They use these models to generate new trajectories and scenarios — counter-factual scenarios — in order to plan and assess [what will happen] before they carry out actions in the real world, which may have consequences or be costly in some way.

Imagination is a hugely powerful planning tool [for this]. You need to build a model of the world; you need to be able to use that model for planning; and you need to be able to project forward in time. So when you start breaking down what’s involved in imagination, you start getting clues as to what kind of capabilities and functions you’re going to need in order to have that overall capability.
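Put into code, that breakdown yields a very small planning loop: roll a model of the world forward over imagined action sequences, score the imagined outcomes, and act on the best one. The random-shooting planner below is a toy sketch; the lambdas stand in for what would be learned model and reward components:

```python
import random

def plan_by_imagination(state, actions, world_model, reward_fn,
                        horizon=5, n_rollouts=100):
    """Imagine trajectories with a world model, score them, and act
    on the best one -- all without touching the real environment."""
    best_plan, best_return = None, float("-inf")
    for _ in range(n_rollouts):
        plan = [random.choice(actions) for _ in range(horizon)]
        s, total = state, 0.0
        for a in plan:                # counterfactual: simulated, not real
            s = world_model(s, a)     # imagined next state
            total += reward_fn(s)     # imagined consequence
        if total > best_return:
            best_plan, best_return = plan, total
    return best_plan[0]  # execute only the first action, then re-plan

# Toy usage: a 1-D world where the model happens to be known exactly.
action = plan_by_imagination(
    state=0,
    actions=[-1, +1],
    world_model=lambda s, a: s + a,
    reward_fn=lambda s: -abs(s - 3),  # goal: reach position 3
)
```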

If neuroscience and AI have so much to learn from one another, why do you think they grew apart in the first place?

Well, they actually started off pretty well-connected. Back in the day, a lot of neuroscientists and AI scientists had similar backgrounds. They talked to each other a lot at conferences and there was lots of collaboration. But then around the ’80s there was a big move in AI away from neural network systems, and people like [AI pioneer Marvin] Minsky proved things about those primitive neural network systems — that they weren’t able to do certain tasks.

But, it turns out, they were wrong, because they were looking at single-layer neural networks that were too simplistic. Now we work with deep learning systems, these very large networks, but back in the ’80s, they didn’t have the compute power or the data to build them. So at the time, the field diverged from neural-type systems to focus on logic systems. And logic systems really are quite far away from neuroscience. AI went down this expert systems route, where you had big stores of heuristics and rules, and you used those rules and heuristics to make decisions. And that’s more about databases than neuroscience.

“It’s difficult to be expert in one of those fields, let alone both.”

Meanwhile, neuroscience continued down its own direction and became a huge field itself. So now you’ve got two very, very large fields that are steeped in their own traditions. And it’s quite difficult to be expert in even one of those fields, let alone expert enough in both that you can translate and find connections between them.

If you’re an AI expert today and you have no neuroscience background at all and you try getting into it, it’s quite daunting. I think there’s something like 50,000 papers a year — I can’t remember the exact number — published in neuroscience. So there’s a huge body of work to try and make sense of, most of which is not going to be relevant to AI, meaning you’re looking for nuggets of crucial information in a huge haystack.

It’s a difficult thing to navigate, and for a long while there’s been less collaboration between the fields because of that. And it goes the other way, too: the AI field is hugely technical and has a lot of its own jargon, and that’s hard for a neuroscientist [to understand].

It’s hard to find people who are willing to put the effort in to bridge those two very different fields, and that’s what we try to do here at DeepMind — find people who are capable of doing that, of finding those connections and explaining them to the other field in a succinct way.

Today’s paper, “Neuroscience-Inspired Artificial Intelligence,” is published in the journal Neuron, where it can be read in full.