Google's self-driving cars have been in 11 accidents, but none were the car's fault

Shortly after the release of an AP report asserting that Google's self-driving cars have been involved in four accidents since last September, the company published a post on Backchannel today — the Medium-based publication from former Wired writer Steven Levy — to dive into more detail on all the accidents it has experienced with the project since it first took to the streets six years ago.

First, the raw numbers: there have been 11 accidents in total, all minor, and Google asserts that none were the fault of the car. Seven involved another vehicle rear-ending the Google car, two were sideswipes, and one involved another car running a red light. In all, Google's post emphasizes two things: one, its sensors and algorithms are statistically far more attentive and less error-prone than a human driver is; and two, the error-prone behavior of the humans around it is feeding into better algorithms, making the Google car even safer than it already was.

An example of a dumb thing a human driver (in purple) did near a Google car, cutting it off to make a right turn.

All told, it's a pure PR play by Google to spin the accidents in a way that makes its self-driving cars look good — and indeed, assuming the statistics are correct, they do look good. It's an especially smart message to get out into the world right now, as city, state, and federal governments struggle to wrangle the nightmarish web of regulation that will be required to make truly autonomous vehicles road-legal. (Daimler drove that message home with its self-driving truck last week, too.)

So yes: Google and the other companies working on this technology have demonstrated that the technology exists to keep these cars from hitting people, other cars, and bicyclists, and to generally make sure they're not doing stupid things on the road.

The problem, though, is when they're too safe. Sometimes you need a human to be unsafe on the computer's behalf.

Ford first brought this issue to my attention in a recent conversation with Mike Tinskey, the company's head of electrification and infrastructure. Besides expanding the network of chargers and technologies needed to support the EVs that are inevitably coming down the pipeline, Tinskey shares responsibility for many of the projects in Ford's "Smart Mobility" initiative, a wide-ranging series of projects introduced at CES this year that involve using cars in unusual, non-traditional ways: ride-sharing, car-sharing, and so on.

One of these Smart Mobility projects is called "Remote Repositioning," which allows an individual seated at a computer to remotely drive a vehicle that's potentially thousands of miles away, using nothing more than an LTE connection and a few cameras and sensors. There are a variety of potential uses for it — remote valets, for instance — but another example Tinskey brought up was that of overcoming the excessively careful self-driving systems of the future:

So you're saying that from the driver's perspective, the car will be self-driving, but really there's someone else driving it from afar for them?

That's right. If you've ever had the pleasure to go to, for instance, China, if you're not aggressive to try to turn left, there will be people that will walk in front of you all day long. And an autonomous vehicle would end up sitting there forever. And a driver normally just has to kind of say, "Alright, I'm going," and the people will stop and the car heads through. So there are going to be situations where a remote driver can actually pilot a vehicle better than an autonomous in certain conditions. Or just because of policy, that might be the way that we have to deal with it.

Indeed, in Google's view of vehicle autonomy, as in the generally rational view, a car can never assume (or hope, at least) that a pedestrian will stop or jump out of the way, the way a human driver can. Sometimes, simply moving (particularly in the world's most congested cities) requires a degree of cowboyishness that a stupidity-proof autonomous car can never permit. There needs to be a way for the car to say, "Well, I can't make this potentially dumb decision, but I invite a human to make it for me."
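To make that idea concrete, here is a minimal, purely hypothetical sketch of what such an escalation could look like: a conservative planner that never forces its way through pedestrians and instead hands the decision to a remote operator once waiting stops being productive. None of this reflects Google's or Ford's actual software; every name in it (SceneEstimate, decide, Action.REQUEST_HUMAN, the thresholds) is an invented illustration.

```python
# Hypothetical sketch of a "defer to a human" escalation policy.
# Not Google's or Ford's real system; all names and thresholds are invented.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    PROCEED = auto()        # planner is confident the maneuver is safe
    WAIT = auto()           # planner holds position and re-evaluates
    REQUEST_HUMAN = auto()  # planner asks a remote operator to take over


@dataclass
class SceneEstimate:
    pedestrians_blocking: int  # pedestrians currently in the intended path
    seconds_waiting: float     # how long the car has been stopped
    gap_probability: float     # estimated chance a safe gap opens soon (0 to 1)


def decide(scene: SceneEstimate,
           max_wait_s: float = 30.0,
           min_gap_probability: float = 0.2) -> Action:
    """Choose an action under a deliberately conservative safety policy.

    The planner never nudges through pedestrians the way an assertive human
    driver might; once it has waited too long with little prospect of a safe
    gap, it escalates to a remote operator instead of acting on its own.
    """
    if scene.pedestrians_blocking == 0:
        return Action.PROCEED
    if scene.seconds_waiting < max_wait_s or scene.gap_probability >= min_gap_probability:
        return Action.WAIT
    # The safety policy forbids the assertive maneuver, so hand the call to a human.
    return Action.REQUEST_HUMAN


if __name__ == "__main__":
    stuck = SceneEstimate(pedestrians_blocking=3, seconds_waiting=45.0, gap_probability=0.05)
    print(decide(stuck))  # Action.REQUEST_HUMAN
```

The point of the sketch is the split in responsibility: the automated policy stays strictly within its safety rules, and the only "unsafe" thing it is allowed to do is ask a person to take the wheel remotely.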

Google's self-driving research, and the growing PR campaign that surrounds it, are hyper-focused on eliminating as many dangers as they possibly can. But in the process, they risk compromising the very human realities that allow cars to move in the first place. (Coincidentally, Google notes that a majority of its accidents have taken place in urban environments, not rural ones.) It's this last mile of research — the interaction between autonomous vehicles and the urban jungles that will increasingly surround them — that promises to be the most interesting in the years to come.

Verge Video archive: Why Google's new self-driving cars could be the safest on the road (2014)