Self-driving cars need a common language to talk about safety, or they will fail

‘The success of autonomous vehicles requires public trust’

Illustration by William Joel / The Verge

There’s been a lot of talk lately about the need for a “common language” when it comes to self-driving cars. Ford recently came out in favor of standardized visual cues that autonomous vehicles could use to communicate intent to pedestrians, bicyclists, and other drivers. Meanwhile, critics continue to assail the six levels of driving automation (Levels 0 through 5) defined by the Society of Automotive Engineers, the de facto global standard for describing self-driving capability, as overly broad and possibly dangerous. Most experts agree: we need a better, more unified way to talk about self-driving cars.

Today, the RAND Corporation unveiled its own well-researched attempt to introduce a common language for autonomous vehicles. Titled “Measuring Automated Vehicle Safety: Forging a Framework,” the 91-page document seeks to answer the burning question: can fierce rivals find common ways to measure safety that would be helpful to the public?


After all, that is the core obstacle to any effort to standardize anything in the self-driving space. Companies like Waymo, Tesla, GM, Ford, and Uber would sooner sue the competition into oblivion than gather round the campfire and sing “Kumbaya.” These companies have invested billions of dollars in research and development ($80 billion, according to the Brookings Institution) in the hopes of reaping the rewards of a potential $7 trillion industry. Why should they agree to anything that could level the playing field for their competitors and eliminate their own advantages?

For Marjory Blumenthal, senior policy analyst at RAND and lead author of the report, the answer is pretty simple: there won’t be any self-driving cars if people don’t feel safe enough to ride in them. “There’s not the greatest degree of transparency,” Blumenthal told The Verge. “So it seems like it’s a good time to provide a way so that companies could be encouraged to find some commonality in the way they talk about how and why their vehicles are safe.”

The number of autonomous vehicles available to the public today is infinitesimal — there are only a handful of public trials going on in the US, Europe, Russia, and China — but the public is growing increasingly skeptical of this new technology. In March, a self-driving Uber vehicle struck and killed a pedestrian in Tempe, Arizona, while the backup safety driver was streaming a video on her phone, police said. Uber suspended testing in the aftermath, and some safety advocates said the crash showed the system was not yet safe enough to be tested on public roads.

“The success of autonomous vehicles requires public trust,” Blumenthal said. “And right now, autonomous vehicle development is happening along different paths, and so having a common reference point can help the development community move toward safer vehicles and promote that public trust.”


Ironically, RAND took on the task of creating a shared language for self-driving cars at the request of Uber’s Advanced Technology Group, which operates the ride-hailing giant’s AV fleet. The company approached RAND in the summer of 2017, almost a year before the fatal Tempe crash, with the request to develop a company-neutral framework for AV safety. Blumenthal and her team set out to talk to a wide array of stakeholders, including engineers at Tesla, Waymo, and Toyota, as well as researchers, public safety advocates, and government officials.

RAND starts out by defining the three stages in the life cycle of self-driving cars: development, demonstration, and deployment. It also considers safety measurements such as crashes, infractions (like running a red light), and a new measure called “roadmanship,” which measures if the vehicle is a “good citizen” of the roadway (e.g., plays well with others). A formal definition of roadmanship is needed before AVs are tested in public, RAND recommends.

Other considerations include where the safety measurements were taken — in simulation, on a closed course or proving ground, or out in the wild, on public roads with or without a safety driver. The “operational design domain” of a self-driving car can also take into account a variety of external conditions, such as geography, weather, lighting, road markings, and other factors.

Screen capture from footage of the Uber self-driving crash. Image: ABC 15

Throughout its report, RAND gently chides AV companies for the way they talk about self-driving cars in utopian terms. “Unrealistic claims of near perfection” can warp the public’s perception about what AVs can and cannot accomplish. Claims that mass adoption of AVs can lower the number of annual motor vehicle deaths can be undone by even a single crash. We saw this with the Uber crash in March, after which public support for AVs dropped precipitously.

The federal government is taking a backseat on self-driving cars, rewriting its own rules to incentivize their deployment and largely passing the buck to the states on regulation and enforcement. As such, RAND suggests that state DMVs may want to play a larger role in formalizing the demonstration process, much as California does by requiring licenses to test AVs on public roads.

RAND also recommends more data-sharing between companies and with government agencies — a suggestion that is sure to be met with silence from the private sector. Companies are reluctant to publicize their data for fear of exposing important trade secrets. But Blumenthal and her team are optimistic. “There is hope of more collective action among competitors,” the report concludes, “what some might call coopetition.”