Few subsidiaries at Alphabet Inc. inspire as much curiosity as Google X, now called simply “X.” X is the company’s innovation lab, where ambitious but far-fetched tech ideas are pitched, tested, and either come to life or are ultimately killed. It’s where Google’s self-driving car concept was developed, where giant internet access balloons were conceived, where glucose-monitoring contact lenses were first experimented with, and where burrito-delivering drones are part of a beta test for bigger things.
And while more than 250 employees are behind these far-fetched projects, for the past five years the face of X has been Astro Teller, the so-called “Captain of Moonshots.”
Teller has the CV of a mad scientist: he has degrees in computer science, symbolic and heuristic computation, and a PhD in artificial intelligence from Carnegie Mellon University. After his time in academia, he founded the health-tracking company BodyMedia (acquired by Jawbone) before joining Google X. With his goatee, ponytail, and the Rollerblades he wears everywhere, all day, Teller has become one of Google’s most recognizable characters. He is a published author of both fiction and nonfiction books; he gives TED Talks.
But for someone so heavily involved in futuristic projects, Teller stubbornly refuses to predict the future. For him, X is not a lab that churns out immediately useable technology or marketable products, but a place where innovation is “systemized” — imagine Henry Ford’s assembly line, but for ideas. He’s less inclined to prognosticate about solutions than he is to talk about the problems that will need to be addressed in the future — whether it’s the “meta problem” of climate change; the threats and promises of artificial intelligence; or just how, exactly, society’s acceptance of new technology will match the rapid pace of innovation.
What does a day look like five to 10 years from now?
The real answer is I don’t know, and much more importantly, I don’t think anyone knows. Trying to prognosticate is a very dangerous business. It’s good for people who are on the speaking circuit, but there isn’t any evidence that anyone is any good at it.
The way I would like to function, the way I think most of us here at X function, is to focus instead on asking the questions for the kinds of futures that we imagine might be possible. How fast can we discover that we’re wrong, and either get rid of those ideas or evolve them from where they currently are into correctly pointed ideas?
You’ve been quoted as saying that your team doesn’t fall in love with solutions, you fall in love with problems. Five to 10 years from now, what are our biggest problems?
I’m a big believer in falling in love with the problems, not falling in love with the technologies. But falling in love with the problem doesn’t always teach you exactly what to do.
For example, you could observe that starting around 10,000 years ago humanity stopped primarily hunting for meat. We started domesticating animals instead. Yet we still get more than half of the fish that we eat in the world by hunting it. That’s just weird. Surely that won’t stay the way it currently is. In the future we will surely be farming in the sea and not just the standard fish farming that happens on shore and very near to the shore.
Climate change is a huge problem, almost a meta problem. It has a set of problems within it. Identifying a problem just means it’s on our radar screen. It doesn’t mean that we’ve found a solution to it yet, or that we’re even working on one.
Let’s talk a little bit about the fear of change — how do you prepare society at large for radically new technologies?
Historically, changes in our society, particularly those driven by technology, used to take a long time. One thousand years ago, when somebody came up with a new technology the time between when the technology was invented and when it was widespread in the world… was huge.
That gave us several generations during which people could come to terms with how society was being changed by that technology. One hundred years ago when the steam engine was introduced or the telegraph or the telephone, [and] somewhat later the television, those things spread through humanity much faster. They spread maybe on the order of 10 to 20 years.
Fast-forward to today, the time between when a new technology is introduced and when it’s completely changed the world has continued to shrink at a fast rate. It’s now probably five to seven years between when a new technology is introduced and when it really has changed society in a fundamental way. If the world is now changing faster than we can accommodate, it causes a huge incremental level of anxiety for society at large. That is our challenge.
What can be done about that? We can point to specific parts of society that we know we could make better. The patent system was built 130–140 years ago around the idea that you would be granted a temporary monopoly for your idea; it would last about 20 years, and you would harvest a lot of value during that time. Then it would be free to everybody afterwards.
Today that’s still true. It’s still a 20-year license that you’re getting. [But] now technology is changing so fast, by the time you get the patent it’s often not worth that much anymore because [it’s] old news…
The way we make laws and regulate technology is another good example, where the pace at which we understand the technology and then build laws around that technology is now significantly slower than the pace of the technology itself.
We need to fix that. We need to go faster on those fronts.
Whose responsibility is it to mitigate some of that anxiety around technology? The technologists? The regulators? Or is society supposed to adapt more quickly?
I think that the onus for how we help society to adjust to new technologies falls on all of us. Does it fall on technologists? Absolutely. Technologists should be making responsible technologies. They should be working hard to educate the world about the ramifications of the technologies such as they can foresee them. That doesn’t mean no one else has any responsibility.
The rate at which society copes with new technologies and their ramifications is partly rate-limited by how we educate both young people and adults in this country. If technology is continuing to change faster and faster and we don’t get better and better at educating our children to adapt to these changes, then we, the public sector and the education system, are failing our children.
I don’t mean to suggest that technologists don’t have a role to play in how society adapts to new technologies, but certainly other aspects of society also have to pitch in so that society can really elegantly and smoothly keep pace with the technology changes that are happening.
You’re part of a group called AI100 that’s doing a very long-term study on the impact of artificial intelligence. The researchers involved have said they don’t consider it likely that, in the near term, AI systems will autonomously choose to inflict harm on people. But they also noted that it is possible for people to use AI systems for harmful — as well as helpful — purposes. How do you see AI impacting society?
Artificial intelligence is going to turn out, I predict — it’s a dangerous thing as I’ve said to try to predict the future — [to be] technology that profoundly changes the world. We will come to see [it] the way we see electricity. It will be in everything. It will power almost everything, but we will rarely stop to think about it in very much the way that electricity has changed so much of our lives and yet we now take so much for granted.
I’m sure there will be abuses of artificial intelligence in some ways. I’m also confident that on balance, like electricity, artificial intelligence will be a lever for the human mind. It will make it so that the things around us make our lives better. There’s no strong evidence that I’ve seen that that won’t be the future that we end up in.
Cybersecurity is one of the world’s big problems. Surely the bad guys in the cybersecurity world will use smart, interesting ways of counting, [or] artificial intelligence by another name, to enable them to do bad things. Society probably has two choices. We can just let them get the upper hand, or we can have artificial intelligence participate in protecting us. I think that microcosm is a useful way for us to think about what our choice is in society. The hackers are going to do it either way. Do we want to get our software systems smarter and smarter about protecting us or not?
I think a lot of people have a vision of the future in which they wake up in the morning and shout to their virtual assistant to perform tasks for them — we already do this. Then a robot will drive them to work and fold their laundry at home at night. But the tradeoff is that we have to give up an immense amount of personal data in order to enable this. That makes some people very wary. Do you think this will still be an issue in the future?
I think that people will have a pretty wide spectrum of how much data they feel comfortable sharing, but I think it’s fair to say that even wealthy people who can afford a personal [human] assistant have a pretty wide spectrum on the kinds of stuff that they will share with their assistants. Some will tell their assistants everything, will give them the codes to their bank account and ask them to do a lot of things that other people are uncomfortable having a human assistant do.
If you don’t want to share your data, you shouldn’t have to share your data with a digital assistant. Simultaneously, I don’t think anyone should be made to feel bad if they want the benefits they can get from a digital assistant. Of course, the only way to get those benefits is for the digital assistant to have enough data to be able to help them. I would certainly hope that the future encompasses both of those perspectives and allows for both of those kinds of people to get what they want.
What does the job market look like once AI has started to displace some of the tasks that humans currently handle?
Technology has been displacing and creating jobs since there was technology. The lever, one of the first technologies that was ever created, allows one person to lift up something that before would have taken many people to lift up. That caused some people, from a narrow perspective, to lose their jobs.
It turns out that that hasn’t actually caused people to lose their jobs, because people spend their time making levers, and because instead of one person moving the boulder that used to cost you 10 people… you would use that one person to move that one boulder [and] have nine other people moving their own boulders. You now move 10 times as many boulders.
In other words, artificial intelligence is likely to cause some jobs to go away and is going to create a ton of new opportunity. In order to believe that all jobs are going to go away, which is a rather extreme view, but certainly one that some people are saying, I think you would have to believe that there’s an end to the problems in the world. That the problems are going to get all used up, taken up. That artificial intelligence will be so good we will run out of problems. I don’t believe that that’s going to happen.
It is a failure of imagination on our collective part if we can’t see how, when robots take [over] some of what we’re currently doing, that will just level us up to the next level.
Project Wing, which is X’s drone project, recently tested food delivery by drone. Have you had a burrito delivered to you by drone?
How was it?
It was great. It was actually slightly magical. I think people over-focus on drones plus burritos. I guess I understand why they can over-fixate on that, but here’s how I would describe it.
Every time we have, as a society, as a species, removed another big chunk of the friction in how physical things are moved around in the physical world — boats, planes, trains, horses and the pony express, the mail system — [we have] profoundly changed society. It’s easy for us to see those things looking backwards, because we’ve become used to not having the frictions that have been removed. We would never go back, but we’re very used to the remaining friction in how physical things are moved around in the physical world.
[Let’s say] you could just snap your fingers and have something magically appear in your hand whenever you wanted it [at] no cost, and it was instantaneous. You have a hammer in your home. You probably have a power drill. You use it one-10,000th of the time, maybe one-100,000th of the time. If that hammer was sitting in some central location, it could be shared by thousands of people, really safely, making everybody wealthier functionally because they would get the hammer when they need it without having to pay for the hammer and drain the world’s resources by making all of these hammers that go almost entirely unused.
You have a drawer full of batteries right now in your home, I guarantee you, that are discharging very slowly. Maybe you have a little ziplock bag full of them, because you never know for sure when you’re going to need one and what shape it’s going to need to be.
Because you don’t know and because it’s surprisingly inconvenient to go to CVS or Walgreens to get another battery, you just keep all of these batteries in your home that are slowly discharging, most of which will hit zero without you ever using them. You’re wasting the planet in a really dramatic way and the reason you’re doing it is because you can’t just snap your fingers and have that battery appear.
If we could move from an ownership society to an access society, where what mattered wasn’t having it now… [but] having it when you need it, it would really dramatically, magically, change the world.
So in the future, drones will be flying through the air overhead. We won’t own as many things because we’ll be sharing them. Unmanned aerial vehicles will essentially power the sharing economy and reduce our carbon footprint — all of this great stuff. What is the biggest challenge to achieving that right now?
I don’t want to minimize the challenges for the Wing project. We need to make things that can successfully move long distances completely autonomously, with very high levels of safety, and reasonably inexpensively. That is not a solved problem.
You want to make sure that you don’t hit a power line; that if something goes wrong with one of your motors, you can land elegantly instead of just crashing out of the sky; [and] that when you get where you’re going, the system can cope thoughtfully with where to put the package down and possibly take a package back or something else.
There are a lot of as yet unsolved problems.
Will residences in the future be designed with drone launch pads and landing pads? Will this actually factor into our design and the way we live?
I’ve actually already seen designs for skyscrapers that have little mini-heliports sticking out beside the windows, so that UAVs can drop packages just outside people’s windows in high-rises, or maybe land so that someone can take the package right there. I’ve seen those designs; that’s something that we’re working on. I think it’s maybe a little premature to be building buildings like that, but good for them for starting to imagine them.
With Project Loon, X is trying to solve problems around internet access, both for the underserved and for people in the developed world who have internet access but want to fill in the gaps. Do you see any potential downside to constant connectivity?
I sometimes wish my kids would get off their phones, but [I’m not] one of the people who believes that society is being damaged by constant connectivity. I happen, actually, not to be a heavy user of social media. Maybe I’m just missing the addiction, so it doesn’t seem that bad to me, but I believe that this is not the first time that people have panicked that technology or other kinds of innovations in society were going to ruin society.
Rock ’n’ roll was heavily billed as destroying the youth of America, and somehow the youth of America have survived rock ’n’ roll. I rather suspect they will survive Facebook as well.
What is the thing or the project that X is doing right now [that] you think will have the most impact on society in five to 10 years?
I’m not going to pick the favorite of my children. That’s not a winning proposition. But I’m going to give you my honest answer.
I hope that in the end, when we look back at X 10 years from now, 20 years from now, the process that I’ve described to you, our attempt to systematize innovation, to get that balance of crazy optimism and really hard skepticism married together and balanced, [will be] just right.
If we can get that right enough and demonstrate enough times that we have at least somewhat systematized innovation, I’m hopeful that that will turn out to be the thing that has the biggest impact rather than any one of the projects that comes out of X.
So it’s X’s process, rather than a specific product.
What I’m saying is, what do you think had a bigger impact on the world? Henry Ford’s observation about how to make stuff, or the Model T? I believe that Henry Ford’s bigger impact on the world was the systematization of interchangeable parts in a factory setting. That was even bigger than some of the car-specific stuff that the factory built.
In the same way, while I’m very proud of some of the things that have already graduated from X — the self-driving cars, the life sciences project, the deep learning work called Google Brain that went back to Google — and of a lot of the things that are currently being brewed here, I am hopeful that in the end the thing that… will have created the most value is the way we’re making them.
Will X be around in five to 10 years?
I don’t know, but you’re welcome to come back in five to 10 years and ask.
Will X last as long as Ford has? Ford’s been around for a long time.
X is certainly intended to go on into the future. Whether it does or not we’ll have to wait and see.
This interview has been edited and condensed.
Editorial Lead: Michael Zelenko; Design: Frank Bi, Yuri Victor, James Bareham, William Joel, Georgia Cowley; Photography: James Bareham; Development: Frank Bi, Yuri Victor; Illustrations: Slanted Studios; Director: Miriam Nielsen; Director of Photography: Ian McAlpin; Sound Recording: Paul Dorough; Gaffer: Keith Cheng; Design and Animation: Lunar North; Executive Producer: Tre Shallowhorn; Creative Director: James Bareham; Motion Graphics Director: William Joel; Color: Max Jeffrey; Sound Design and Mixing: Andrew Marino.