
Google's self-driving cars would've hit something 13 times if not for humans

New report shows when and why test drivers have to take the wheel

In the future, self-driving cars may allow us to ignore the road completely, but in the present, humans still need to be ready to take the wheel at a moment's notice. Google reports that in California between September 2014 and November 2015, test drivers for its fleet of autonomous vehicles took over control from the computer 341 times — an event known as a "disengagement." Google and other companies testing self-driving cars in California have to report these figures annually to the state's Department of Motor Vehicles, and the results are fascinating, if a little confusing at times.

Google says these takeover events help improve the technology

Of the 341 total disengagements, 272 were due to the "failure of autonomous technology" — where the car's computers detected a fault of some sort, and handed over control to the human, signaling that a takeover was needed with a "distinct audio and visual signal." Google points out, though, that the company's objective is not necessarily to minimize the number of disengagements, but to "gather as much data as possible to enable us to improve our self-driving system." For this reason, the thresholds for this sort of takeover are "set conservatively," and include what might be relatively minor "anomalies in sensor readings" (as well as more serious problems).

The 69 other disengagements, though, are more critical. These are events where the "safe operation of the vehicle requires control by the driver" — that is, when the test driver decided they needed to grab the wheel for some reason. These aren't all potential crashes. Google notes that the reason for humans deciding to take over (rather than being prompted to) can include bad driving from other road-users or decisions "relating to comfort."

Google's simulations produced 13 potential crashes

Following these disengagements, Google takes the data from its cars' sensors and feeds it into a simulator to work out what would have happened if the driver hadn't taken control. Through these tests, the company calculated that 13 of the 69 manual disengagements would have resulted in contact of some sort with another road-user or, in two of the 13 cases, with a traffic cone. In 10 of these 13 events Google says its own technology was at fault; in the remaining three, other drivers were to blame.

These 341 disengagements took place during 424,331 miles of driving, with Google noting that its self-driving fleet drives as much in a month as a typical US adult does in two to four years (that's around 30,000 to 40,000 miles, says the company). It also notes that the number of miles driven per disengagement has been going up, from 785 miles per disengagement in the fourth quarter of 2014 to 5,318 miles per disengagement in the fourth quarter of 2015. (These figures cover only "failure of autonomous technology" events, though the rate has also gone up for driver-initiated takeovers.)
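As a quick sanity check on those figures, the overall average and the year-over-year improvement can be worked out directly from the numbers in Google's report (a back-of-the-envelope sketch, not Google's own methodology; the variable names are mine):

```python
# Figures reported in Google's filing with the California DMV.
total_miles = 424_331          # autonomous miles driven, Sep 2014 - Nov 2015
total_disengagements = 341     # all disengagements over that period

# Overall average across both kinds of disengagement.
miles_per_disengagement = total_miles / total_disengagements
print(f"{miles_per_disengagement:.0f} miles per disengagement overall")  # ~1,244

# Quarterly rates for technology-failure events, as reported.
q4_2014 = 785    # miles per disengagement, Q4 2014
q4_2015 = 5_318  # miles per disengagement, Q4 2015
print(f"{q4_2015 / q4_2014:.1f}x improvement year over year")  # ~6.8x
```

So across the whole period the fleet averaged roughly one disengagement every 1,200 miles or so, even as the technology-failure rate improved nearly sevenfold.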

Chart: Autonomous miles driven per disengagement for technology failures. (Google)

These figures will need to continue to improve if Google wants to meet its target of putting its self-driving cars in the hands of everyday people on public roads by 2020. If these 13 "simulated contacts" had all been real crashes over the 424,331 miles driven, Google's cars would have averaged around 30 crashes per million miles. By comparison, a preliminary study from the University of Michigan Transportation Research Institute estimated that conventional cars crash 1.9 times per million miles. The same study noted that crash severity was much lower for self-driving cars, and that autonomous vehicles have been involved in zero fatalities. Google noted last May that its self-driving cars had been in 11 real accidents (as opposed to simulated ones), but that these were all minor and all the fault of other drivers. (We should also note that crash-rate calculations tend to vary because of challenges like unreported crashes; self-driving programs report incidents far more scrupulously than ordinary drivers do.)
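The crash-rate comparison above follows directly from the reported numbers (a rough illustration with my own variable names; it deliberately ignores the under-reporting caveat just mentioned):

```python
# Simulated contacts vs. miles driven, from Google's report.
simulated_contacts = 13
autonomous_miles = 424_331

# Crashes per million miles, the unit used by the UMTRI study.
google_rate = simulated_contacts / autonomous_miles * 1_000_000
conventional_rate = 1.9  # UMTRI's preliminary estimate for conventional cars

print(f"Google (simulated): {google_rate:.1f} per million miles")         # ~30.6
print(f"Ratio vs. conventional: {google_rate / conventional_rate:.0f}x")  # ~16x
```

On these assumptions the simulated rate comes out around 16 times the conventional one, which is why the caveats about crash severity and reporting rates matter so much to the comparison.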

Other manufacturers' reports were less detailed

And how did other self-driving car manufacturers stack up? Well, their reports were less detailed, for a start. Nissan says its four self-driving vehicles racked up 106 disengagements over 1,485 miles of driving (the majority of which occurred when the autonomous driving system failed); Mercedes-Benz reported 1,031 disengagements over 1,738 miles of driving for two cars (the noted reason is mostly either "technology evaluation management" or "driver was uncomfortable"); while Bosch reported as many as 126 disengagements in a month's driving of 92.5 miles, but claimed that every instance was a "planned test of technology." Tesla's single-page report simply stated there were "Zero (0) autonomous mode disengagements," without mentioning how many miles were driven. The other companies' reports are available from the California DMV.

As this hodgepodge of figures suggests, it's impossible to draw direct comparisons between different companies' self-driving vehicles, or to work out whose technology is "best." (The lack of standardized reports, or indeed standardized driving conditions, rules that out. Which company stress-tested its cars, for example, and which went easy on them?) However, two things are clear from these and previous reports: first, when today's self-driving cars are involved in accidents, they tend to be minor; and second, the cars still need a human to step in when things go wrong. It's not time to fall asleep at the wheel just yet.
