
Tesla didn’t fix an Autopilot problem for three years, and now another person is dead

Sizing up two fatal Tesla crashes and the questions they raise about Autopilot

Image: NTSB

On May 7th, 2016, a 40-year-old man named Joshua Brown was killed when his Tesla Model S sedan collided with a tractor-trailer that was crossing his path on US Highway 27A, near Williston, Florida. Nearly three years later, another Tesla owner, 50-year-old Jeremy Beren Banner, was also killed on a Florida highway under eerily similar circumstances: his Model 3 collided with a tractor-trailer that was crossing his path, shearing the roof off in the process.

There was another major similarity: both drivers were found by investigators to have been using Tesla’s advanced driver assist system Autopilot at the time of their respective crashes.

Autopilot is a Level 2 semi-autonomous system, as described by the Society of Automotive Engineers, that combines adaptive cruise control, lane keeping assist, self-parking, and, most recently, the ability to automatically change lanes. Tesla bills it as one of the safest systems on the road today, but the deaths of Brown and Banner raise questions about those claims and suggest that Tesla has neglected to address a major weakness in its flagship technology.

There are some big differences between the two crashes

There are some big differences between the two crashes. For instance, Brown and Banner’s cars had completely different driver assistance technologies, although both are called Autopilot. The Autopilot in Brown’s Model S was based on technology supplied by Mobileye, an Israeli startup since acquired by Intel. Brown’s death was partly responsible for the two companies parting ways in 2016. Banner’s Model 3 was equipped with a second-generation version of Autopilot that Tesla developed in house. 

That suggests that Tesla had a chance to address this so-called “edge case,” or unusual circumstance, when redesigning Autopilot, but it has, so far, failed to do so. After Brown’s death, Tesla said its camera failed to recognize the white truck against a bright sky; the US National Highway Traffic Safety Administration (NHTSA) essentially found that Brown was not paying attention to the road and exonerated Tesla. It determined he set his car’s cruise control at 74 mph about two minutes before the crash, and he should have had at least seven seconds to notice the truck before crashing into it.

Tesla had a chance to address this so-called “edge case” when redesigning Autopilot, but it has failed to do so

Federal investigators have yet to make a determination in Banner’s death. In a preliminary report released May 15th, the National Transportation Safety Board (NTSB) said that Banner engaged Autopilot about 10 seconds before the collision. “From less than 8 seconds before the crash to the time of impact, the vehicle did not detect the driver’s hands on the steering wheel,” the NTSB said. The vehicle was traveling at 68 mph when it crashed.

In a statement, a Tesla spokesperson phrased it differently, changing the passive “the vehicle did not detect the driver’s hands on the steering wheel” to the more active “the driver immediately removed his hands from the wheel.” The spokesperson did not respond to follow-up questions about what the company has done to address this problem.

In the past, Tesla CEO Elon Musk has blamed crashes involving Autopilot on driver overconfidence. “When there is a serious accident it is almost always, in fact maybe always, the case that it is an experienced user, and the issue is more one of complacency,” Musk said last year.

The latest crash comes at a time when Musk is touting Tesla’s plans to deploy a fleet of autonomous taxis in 2020. “A year from now, we’ll have over a million cars with full self-driving, software, everything,” he said at a recent “Autonomy Day” event for investors.

Those plans will be futile if federal regulators decide to crack down on Autopilot. Consumer advocates are calling on the government to open up an investigation into the advanced driver assist system. “Either Autopilot can’t see the broad side of an 18-wheeler, or it can’t react safely to it,” David Friedman, vice president of advocacy for Consumer Reports, said in a statement. “This system can’t dependably navigate common road situations on its own and fails to keep the driver engaged exactly when needed most.”

“Either Autopilot can’t see the broad side of an 18-wheeler, or it can’t react safely to it”

Car safety experts note that adaptive cruise control systems like Autopilot rely mostly on radar to avoid hitting other vehicles on the road. Radar is good at detecting moving objects but not stationary ones, and a vehicle crossing the road, rather than traveling in the car’s direction, has almost no motion along the radar’s line of sight, so it can register much like a stationary object.
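
To see why that geometry matters, consider what a Doppler radar actually measures: speed along its line of sight. A truck sliding sideways across the road ahead produces almost no radial motion of its own, so once the car’s own speed is subtracted out, it looks much like a bridge or a road sign. The Python sketch below is a simplified illustration with invented numbers, not Tesla’s code.

```python
import math

# Illustrative sketch, not Tesla's code: why a crossing truck can look
# "stationary" to a radar pipeline that bins targets by their over-the-ground
# speed along the line of sight. All numbers are invented for the example.

def ground_radial_speed(target_vx, target_vy, bearing_deg):
    """Target's over-the-ground speed along the radar's line of sight.

    target_vx, target_vy: target velocity in the road frame (x = ego heading), m/s.
    bearing_deg: angle from the ego car's heading to the target (0 = dead ahead).
    """
    b = math.radians(bearing_deg)
    line_of_sight = (math.cos(b), math.sin(b))  # unit vector toward the target
    return target_vx * line_of_sight[0] + target_vy * line_of_sight[1]

def looks_stationary(target_vx, target_vy, bearing_deg, threshold=1.0):
    # A common clutter heuristic: targets whose radial ground speed is near
    # zero are treated as fixed roadside objects (signs, bridges, parked cars).
    return abs(ground_radial_speed(target_vx, target_vy, bearing_deg)) < threshold

# An overpass directly ahead: no motion at all.
print(looks_stationary(0.0, 0.0, 0.0))    # True

# A tractor-trailer crossing the road at 10 m/s, seen dead ahead: its motion
# is entirely perpendicular to the line of sight, so its radial ground speed
# is ~0 and it lands in the same bin as the overpass.
print(looks_stationary(0.0, 10.0, 0.0))   # True

# A car ahead traveling in the ego car's direction at 20 m/s: clearly moving.
print(looks_stationary(20.0, 0.0, 0.0))   # False
```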

The vehicle’s software sometimes ignores the radar’s detections of objects in order to suppress “false positives,” said Raj Rajkumar, an electrical and computer engineering professor at Carnegie Mellon University. Without that filtering, the radar would “see” an overpass and report it as an obstacle, causing the vehicle to slam on the brakes.
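
Here is a rough sketch of the kind of filtering Rajkumar describes, again not Tesla’s actual logic: if “stationary” radar returns are only acted on when the camera independently confirms an object, phantom braking for overpasses goes away, but so does an unrecognized trailer crossing the road.

```python
from dataclasses import dataclass

# Illustrative sketch, not Tesla's fusion logic: stationary radar returns are
# only trusted when the camera confirms them, which suppresses false positives
# (overpasses, signs) but also drops a crossing trailer the vision system
# fails to classify. All objects below are hypothetical.

@dataclass
class RadarReturn:
    range_m: float
    ground_radial_speed: float  # ~0 for overpasses AND for crossing vehicles
    camera_confirmed: bool      # did the vision system see an object here?

def is_braking_target(r: RadarReturn, speed_threshold: float = 1.0) -> bool:
    moving = abs(r.ground_radial_speed) > speed_threshold
    # Moving targets are trusted on radar alone; "stationary" ones need the
    # camera to agree, otherwise they are written off as clutter.
    return moving or r.camera_confirmed

overpass = RadarReturn(range_m=120.0, ground_radial_speed=0.0, camera_confirmed=False)
lead_car = RadarReturn(range_m=40.0, ground_radial_speed=15.0, camera_confirmed=True)
# A perpendicular trailer the camera was never trained to recognize:
crossing_trailer = RadarReturn(range_m=80.0, ground_radial_speed=0.2, camera_confirmed=False)

print(is_braking_target(overpass))          # False (correctly ignored)
print(is_braking_target(lead_car))          # True  (correctly tracked)
print(is_braking_target(crossing_trailer))  # False (silently dropped)
```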

On the computer vision side of the equation, the algorithms using the camera output need to be trained to detect trucks that are perpendicular to the vehicle’s direction of travel, he added. In most road situations, there are vehicles to the front, back, and sides, but a perpendicular vehicle is much less common.
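
The rarity problem is easy to put numbers on. The sketch below uses entirely hypothetical label counts, not real training data, to show how small a share the perpendicular-truck case might hold in a driving dataset, and how inverse-frequency weighting is one standard way to keep such a class from being ignored during training.

```python
from collections import Counter

# Hypothetical label counts for a driving dataset (invented for illustration,
# not real Tesla data).
labels = Counter({
    "vehicle_ahead": 950_000,
    "vehicle_oncoming": 40_000,
    "vehicle_adjacent_lane": 9_000,
    "vehicle_crossing_perpendicular": 1_000,  # the Brown/Banner scenario
})

total = sum(labels.values())
for name, count in labels.items():
    share = count / total
    # Inverse-frequency weight: one common way to keep a rare class from
    # contributing almost nothing to the training loss.
    weight = total / (len(labels) * count)
    print(f"{name:32s} {share:7.2%} of data, loss weight ~{weight:6.1f}")
```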

“Essentially, the same incident repeats after three years,” Rajkumar said. “This seems to indicate that these two problems have still not been addressed.” Machine learning and artificial intelligence have inherent limitations. If sensors “see” what they have never or seldom seen before, they do not know how to handle those situations. “Tesla is not handling the well-known limitations of AI,” he added.

Tesla has not yet explained in detail how it intends to fix this problem. The company releases a quarterly report on Autopilot safety, but it is short on details. That means experts in the research community don’t have the hard data they would need to compare the effectiveness of Autopilot to other systems. Only Tesla has full insight into Autopilot’s logic and source code, and it guards those secrets closely.

“We need detailed exposure data related to when, where, and what conditions drivers are leveraging Autopilot,” said Bryan Reimer, a research scientist in the MIT Center for Transportation and Logistics, in an email to The Verge, “so that we can begin to better quantify the risk with respect to other vehicles of a similar age and class.”
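
The comparison Reimer describes is simple arithmetic once the exposure data exists: crashes divided by miles driven, broken out by road type, weather, and whether the system was engaged. The sketch below uses made-up figures purely to show the calculation.

```python
# Illustrative only: the figures below are invented, not data from Tesla,
# NHTSA, or MIT.

def crashes_per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1_000_000)

autopilot_highway = crashes_per_million_miles(crashes=3, miles=900_000_000)
comparable_cars_highway = crashes_per_million_miles(crashes=40, miles=9_000_000_000)

print(f"Autopilot-engaged, divided highway: {autopilot_highway:.4f} crashes per million miles")
print(f"Similar age/class, same road type:  {comparable_cars_highway:.4f} crashes per million miles")
# Without the denominators broken out by condition, the two rates cannot be
# compared fairly, which is Reimer's point.
```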

Other Tesla owners have spoken out about Autopilot’s trouble perceiving trucks in the vehicle’s path. An anonymous Twitter user who goes by the handle @greentheonly has “hacked” a Model X and posts observations on Twitter and YouTube. They did this to “observe Autopilot from the inside,” they said in an email to The Verge. In March, their Model X encountered a tractor-trailer perpendicular to its path, much as Brown’s and Banner’s cars did. The vehicle would have tried to drive underneath the truck had the driver not intervened.

According to @greentheonly’s data, the semi was not marked as an obstacle. But they decided not to tempt fate: “I did not try to approach the trailer and see if any of the inputs would change (but I bet not).”