As Google has expanded throughout Mountain View over the past decade, its repurposed office buildings have developed a certain sameness: perfectly mundane exteriors, accented by the multicolored Google bikes scattered outside. Building RLS2 is different: it was previously known as the Mayfield Mall, and for that reason, its roof is an enormous multilevel parking lot. On this day, there isn’t a car in sight — and if you live in the Bay Area, where parking spaces are conserved as zealously as water in Mad Max: Fury Road, it looks a little like the promised land.
And then something comes into view from across the lot. Something small and purposeful, moving toward you at an eminently reasonable speed. Its movements are self-assured and weirdly dignified, like those of a show horse. It sidles up alongside you, and a person opens the door and invites you in. This is Google’s latest self-driving car prototype. It has no driver, no steering wheel, and no foot brakes. Instead it has a central console with a big black button. Press it, and it drives itself.
I traveled to Building RLS2 on Tuesday to attend the second annual media day for Google’s Self-Driving Car Project. Media day is part of Google’s carefully coordinated, multi-year public relations campaign on behalf of autonomous vehicles. It was led by John Krafcik, former CEO of Hyundai America, who took over as the project’s CEO earlier this month. Krafcik introduced a series of speakers including Chris Urmson, the project’s longtime director, and Jaime Waydo, the lead systems engineer.
Many of the biggest questions about Google’s autonomous vehicles remain unanswered. Who will manufacture them? When will they be ready for production? Are they ever going to figure out how to operate in the rain? But these are mostly questions not of technology but of time. The age of autonomous vehicles is dawning, and quickly. For the 1.2 million people who will die in auto accidents worldwide this year, it can’t come quickly enough.
"I've been really happy about the progress we've made."
In a surprise, Google co-founder Sergey Brin stopped by the day’s proceedings and took questions from reporters eager to understand how Google would overcome regulatory obstacles and bring its cars to market. "In the past year or so, I’ve been really happy about the progress we’ve made," said Brin, looking relaxed in black shorts, a long-sleeved blue shirt, and a beat-up pair of Crocs. "And I think that the potential for cars to change the ways communities work, to give access to a lot of people who are underserved by transportation today — I think that day is coming closer, and I’m super excited by it."
Speakers recounted the history of the program, explained the technology behind autonomous driving, and covered the safety features of its latest prototype in numbing detail. The prototype’s sensors allow it to see 200 meters in all directions, eliminating blind spots. It’s programmed to drive defensively, actively avoiding other drivers’ blind spots and easing away from big rigs and motorcycles that are splitting lanes. As in an airplane, its critical systems are redundant, providing backup for steering and braking. And the prototype also has safety baked into its materials: the windshield is flexible, and the front end is made out of custom foam.
But the main event at media day was a ride in the latest prototype, the charming little two-seater that has been compared to everything from a gumdrop to Flappy Bird. (OK technically I am the one who compared it to Flappy Bird, but I was right, and also those 10 retweets don’t lie.) The point is that the thing is cute, a word I am doing my best to use in a descriptive rather than critical sense. It is designed to be adorable, so as to appear more trustworthy.
After a Q&A with the team, we were escorted to the roof of the mall. What Google didn’t tell us ahead of time was that the prototype was about to navigate us through an obstacle course. During an all-too-brief five-minute demonstration, the car would have to stop for a pedestrian darting out into the street, slow for a car that cut suddenly in front of it, and allow a cyclist in its path to make a left turn without hitting him. But first, we had to start the car.
First, we had to start the car
My fellow passenger in the prototype was Alexis Madrigal, the editor in chief of Fusion.net and my sworn nemesis in the content wars. He sat in the left seat, and I took the right. (Google would not allow us to photograph the car’s interior.) The first thing I noticed, as a man who stands at 6’5", is how much legroom I had. The lack of a steering wheel was jarring at first, but I was happy to stretch my legs out.
Once you’ve buckled your seat belt, you can focus on the shimmering blue display in front of you. It tells you what the car is seeing in real time: other cars, pedestrians, cyclists, and hundreds of other objects and gestures. The display is the foundation of trust between you and the car: it constantly reassures you that it knows where it is, where it’s going, and what dangers it is avoiding.
Between Alexis and me sat a center console with a handful of control buttons. The most important of these is the big black "start" button, which I allowed Alexis to press because I am a gracious person. A countdown appeared on the display: 3 … 2 … 1! And then we were off, in the gentlest and least alarming way possible. "Alexis," I said, drawing on all I had learned that day, "this car is fucking driving itself."
"This car is fucking driving itself."
The prototype’s first test was a Google employee playing the role of a pedestrian. As we motored through the parking lot, he walked into the path of the oncoming vehicle. Instantly, the prototype’s display showed an animated white figure of a pedestrian, and the car slowed to let him pass. Both pedestrian and car were moving slowly enough that it didn’t quite feel like real-world conditions. But asking Google to make the demo more dangerous for our amusement seemed rather unfair to the pedestrian.
We cruised around to the other side of the parking garage, and a white sedan came up a ramp and merged in front of us. Once again, the display showed that the prototype had identified the car, and it slowed immediately. It waited until there was a safe distance between us and the sedan, and then returned to its normal trot. The final challenge was a cyclist who cut across the prototype’s path; once again, it spotted the cyclist, understood the hand gesture for "left turn," and slowed until he passed.
As our ride concluded, I still had plenty of questions. How does the car react when there are multiple pedestrians and cyclists all weaving in front of it? How will this feel when the car is moving at much higher speeds (the prototype tops out at 25 mph)? But my overwhelming wish was simply to take another ride. I got out and watched the car navigate the obstacle course three more times, always dodging the pedestrian without a hint of trouble. I imagined the day, hopefully not long from now, when I can order one of these cars from my phone, and take it to work or home from a bar.
Instead I got back in the car that I still have to drive myself, and drove back to San Francisco. I made it back intact, despite the fact that I was often traveling at more than 70 mph, and felt the need to switch Spotify playlists about 14 times during a 45-minute drive. Catching myself watching the road with only one eye, I thought to myself, not for the first time, that the most dangerous thing about my car was undoubtedly me.
Photography by Peter Prato for The Verge.