
Google’s bus crash is changing the conversation around self-driving cars


A more modest tone around autonomy emerges at SXSW 2016


Amid the nonstop parties, panels, activations, and tacos, Google was dealing with a pronounced shadow hanging over its presence at SXSW this week: the company’s ambitious self-driving car program was responsible for its first at-fault collision back in February. Now the fallout has found its way into nearly every transportation-focused panel discussion here in Austin.

Even with high-flying names like "Autonomous Vehicles Will Remake Cities" and "Autonomous Cars Will Make Us Better Humans," the tone at SXSW’s many forward-looking talks has been more subdued. Self-driving cars may be on the road today, in pilot programs in various sunny, fine-weathered locales. But even the most optimistic technologists are starting to acknowledge that the problem may well take decades to crack.

The tone at SXSW's many forward-looking talks has been more subdued

"If you read the papers, you’re going to see that it’s maybe three years, maybe 30 years [before self-driving cars arrive]," Chris Urmson, the director of Google’s self-driving car project, said in a high-profile talk on Friday at the Austin Convention Center. "I think it’s a bit of both. This technology is almost certainly going to come out incrementally." It’s a step below the rhetoric he’s used in the past; in September of last year, for instance, Urmson said he hoped his 11-year-old son doesn’t have to get a driver’s license.

The process has already started, with Tesla’s controversial Autopilot and other Advanced Driver Assistance Systems (ADAS) designed for highway driving making their way to Honda, GM, and Mercedes vehicles over the next couple of years. But Urmson, a strong advocate for the transformative power of autonomous vehicles, says there’s a long road ahead before the majority of cars can operate themselves completely.

Google’s software slipup will undoubtedly play a pivotal role in upcoming discussions about how to take the technology from relatively simple highway driving to full-blown autonomy. The company’s Lexus RX SUV struck a Mountain View public transit bus last month while traveling at just two miles per hour, due in part to incorrect assumptions on both the human and the software side. But Google’s car made the definitive choice to move, and the company has claimed responsibility for the hit. Though no one was injured, the collision has morphed into a symbolic moment that most in the industry saw coming: even Google, having notched 1.45 million miles without an at-fault crash, can’t claim its system is flawless quite yet.

Therein lies one of the core struggles with the self-driving car movement. Unlike smartphones, virtual reality, and basically any other big Silicon Valley-based push of the last decade, autonomous vehicles can’t be an ongoing experiment for consumers. We’ve become accustomed to beta tests and frequent software updates — using technology in its nascent stages and watching as it evolves over time. However, by the time everyday drivers begin taking their hands off the wheel and their feet off the pedals, the technology cannot afford to fail. If someone is killed because of an autonomous system before such systems have been adopted by the mainstream, it could destroy years of progress. (Notably, Tesla has beefed up the safety systems in its Autopilot technology, which it has explicitly billed as a beta.)

"These vehicles ... have to be virtually perfect."

"For there to be consumer acceptance of these vehicles, they have to be virtually perfect," David Strickland, a former Administrator of the National Highway Traffic Safety Administration (NHTSA), said in a self-driving panel discussion on Sunday.

Strickland oversaw the NHTSA’s first policy statement on autonomous testing on public roads in 2013, and he was speaking here at SXSW on a panel asking whether society is ready for autonomous vehicles — the answer, it turns out, is no. He cited a recent AAA study that found around 75 percent of drivers feel uneasy about using a fully self-driving car, while 60 percent feel comfortable with some form of limited self-driving assistance like automatic braking.

Achieving that "virtually perfect" performance is a tall order. As testing ramps up, there will also be an ever-changing mix of autonomous and human-operated vehicles on the road. Until a vast majority of cars in the US are governed by software — something that feels unfathomable for perhaps three to four more decades — there will be crashes between the predictable robots and their far less predictable human counterparts. Google has also made sure not to take its cars into situations they’re not ready for, like snow or rain. Earlier this year, Ford became the first US automaker to put its autonomous systems to the test on snowy roads, but it’s still early days for environments beyond sunny Mountain View or Austin.

Photo: A Ford self-driving test vehicle on snowy roads.

There are also more philosophical debates happening, including here at SXSW, that highlight the kind of head-scratching problems Silicon Valley and automakers will have to face down the road. Jennifer Haroon, the head of business operations for Google’s self-driving project, explained on the panel with Strickland how the Lexus that struck the bus did so in part because it was imitating human behavior.

"We had recently taught our vehicles to hug the right-hand side of that lane when it wanted to make a turn because that’s what a lot of people do," Haroon said. "It saw the bus, it saw the gap in the traffic, and it predicted the bus would allow it some room." The bus did not. If Google’s car had taken up more of the lane — as it would have done just a few weeks prior before the system update — it wouldn’t have been forced to maneuver its way back into traffic and risk hitting another driver.

Of course, if autonomous vehicles are not designed in the interim to imitate some human behavior, that causes trouble too. Humans tend to make assumptions about other drivers in a form of unspoken language, and those assumptions mean you can’t always follow the rules to a T. "It’s vital for us to develop advanced skills that respect not just the letter of the traffic code but the spirit of the road," the company wrote in its February self-driving report.

Autonomous vehicles may need to imitate human behavior

It’s a delicate balance, but the ultimate goal remains: reducing automobile-related deaths, which surpassed 38,000 last year, with millions more injured in traffic accidents. Doing that means developing a system that will have to change over time, taking into account both human behavior and how other autonomous systems react on the road. If a Ford self-driving car thinks the Honda self-driving car next to it is being driven by a human — and a third human driver doesn’t realize he’s driving among autonomous vehicles at all — the situation could become a thorny mess of mixed signals.

Although he described the crash on February 14th as a "tough day" for his team, Urmson says Google’s program will have to be prepared for more days just like it if it’s to succeed in creating a system every driver can trust. "We’re going to have another day like our Valentine’s Day and we’re going to have worse days than that," he said. The end result, however, is worth the trouble, he argues. "I know that the net benefit will be better for society."