This year has been full of big news about the progress of self-driving cars. They are currently street legal in three states, and Google says that on any given day it has a dozen autonomous cars on the road. This August, the fleet passed 300,000 miles of autonomous driving. In Spain this summer, Volvo drove a convoy of three cars through 200 kilometers of desert highway with just one driver and a police escort. Cadillac's newest models park themselves. The writing, one might think, is on the wall. But objects in the media may be farther off than they appear.
There's a wide gap between having a prototype and going to market, and it's particularly gaping for anything with a combustion engine. The law has a lot to say about cars, especially about who’s allowed to drive them, and answering all the legal questions could easily take the rest of the decade. For instance, when a self-driving car gets in a fender bender, who's liable for the damages? Should a computer choose to hit an animal or swerve off the road? How does the DMV give a robot an eye exam?
As the technical limitations fall away, these legal questions are becoming the self-driving car's biggest challenge. Unfortunately for Google, the solutions will have to come from lawyers and legislators rather than engineers.
Bryant Walker Smith teaches a class on autonomous vehicles at Stanford Law School. At a workshop this summer, he put forward this thought experiment: the year is 2020, and a number of companies offer "advanced driver assistance systems" with their high-end models. Over 100,000 units have been sold. The owner's manual states that the driver must remain alert at all times, but one night a driver — we'll call him "Paul" — falls asleep while driving over a foggy bridge. The car tries to rouse him with alarms and vibrations, but he's a deep sleeper, so it turns on the hazard lights and pulls over to the side of the road, where another driver (let's say Julie) rear-ends him. He's injured, angry, and prone to litigation. So is Julie.
That would be tricky enough by itself, but then Smith starts layering on complications. Another model of auto-driver would have driven to the end of the bridge before pulling over. If Paul had updated his software, it would have braced his seatbelt for the crash, mitigating his injuries, but he didn't. The company could have pushed the update automatically, but management chose not to. Now, Smith asks the workshop, who gets sued? Or for a shorter list, who doesn't?
This is the nitty-gritty of automotive law: not just the rules of who gets on the road, but the web of regulations and statutes that decide what happens once you're there. For automated drivers, most of these rules have yet to be written, and they'll need to be handled extremely delicately. If the liability laws are too punitive towards driver bots, letting Paul and Julie join in a suit against the self-driving-tech developer, then companies might avoid the sector entirely. On the other hand, if the laws leave car owners on the hook for anything the new gadgets do, consumers may be scared away from buying them. There's a balance to be struck, but it will have to be struck across multiple courts and stand up to countless civil challenges.
The financial stakes are high. According to the Insurance Research Council, auto liability claims pay out roughly $215 for each insured car, between bodily injury and property damage. With 250 million cars on the road, that's roughly $54 billion a year in liability. If even a tiny portion of those lawsuits were directed towards technologists, the business would become unprofitable fast.
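That figure holds up as back-of-the-envelope arithmetic on the article's own round numbers:

```python
# Back-of-the-envelope liability math, using the round numbers above.
claims_per_car = 215          # dollars paid out per insured car, per year
cars_on_road = 250_000_000    # approximate size of the US fleet

annual_liability = claims_per_car * cars_on_road
print(f"${annual_liability / 1e9:.1f} billion")  # -> $53.8 billion, i.e. roughly $54B
```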
Florida, Nevada and California have all passed laws to make the cars street legal, thanks in large part to big lobbying efforts by Google, but according to Professor Smith, those bills only scratch the surface. "They don't really resolve the human driver's obligations behind the wheel," Smith told The Verge. "They don't really provide standards of performance for these vehicles." That's fine if all you want is to test out a fleet of prototypes on public highways, but changing the way we drive is going to require a lot more compromise. That could mean a radically scaled-down vision of the project, or a legal struggle that keeps the Google Car in prototype limbo for the rest of the decade. Either way, they've got a long road ahead of them.
The good news is, Google isn't alone. In Europe, a simpler version of the tech is already going through the same growing pains, struggling to iron out the legal difficulties on the way from prototype to product. It's called the SARTRE Project, a joint research mission by Ricardo UK, Volvo, the European Union, and half a dozen smaller research groups. Instead of giving up the controls entirely to a computer, SARTRE works on the convoy model, with a single truck leading the way and up to five cars following behind, controlled by the lead truck. Because they're all taking the same orders, the cars can travel just a few meters apart, cutting down greatly on wind resistance. (Also known as "drafting," to racing fans.) If adopted, SARTRE would raise fuel efficiency by up to 20 percent and fit three times as many cars in a single lane.
The technical challenges are much simpler. All the cars are directed by a human driver in the lead truck, so SARTRE doesn't have to bother with Google's sensors or decision-making programs. Most of the hardware required already exists: each car would carry a small device (essentially a WiFi router) that passes directives from the lead truck through the car's CPU to the throttle, brake, and steering systems, a manageable task with today's fleet of computer-enabled vehicles. The necessary cameras and proximity sensors have also become standard on higher-end cars. What's left is just testing and nailing down the software protocols. This summer, SARTRE made proof-of-concept runs down Spanish highways. In September, the research funding stopped and the search began for a company that could take the project to market. They're still looking.
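SARTRE hasn't published its protocols, so take this only as a loose sketch of the loop described above, with every port, field, and interface name invented for illustration: a follower car listens for the lead truck's broadcast and relays each command to its own controls.

```python
import json
import socket

CONVOY_PORT = 9000  # hypothetical UDP port for the lead truck's broadcast

def follow_convoy(set_throttle, set_brake, set_steering):
    """Relay the lead truck's commands to this car's own controls.

    The three callbacks stand in for the car's actuator interfaces;
    SARTRE's real message format and transport aren't public.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", CONVOY_PORT))
    sock.settimeout(0.2)  # if the link goes quiet, fail safe quickly

    while True:
        try:
            packet, _ = sock.recvfrom(1024)
            cmd = json.loads(packet)
            set_throttle(cmd["throttle"])   # 0.0 to 1.0
            set_brake(cmd["brake"])         # 0.0 to 1.0
            set_steering(cmd["steering"])   # wheel angle in degrees
        except socket.timeout:
            # Lost contact with the lead truck: slow down and drop out.
            set_throttle(0.0)
            set_brake(0.3)
            break
```

Even this toy version has to decide what happens when the wireless link drops, which is exactly the kind of failure case regulators will want answered before cars travel a few meters apart.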
Anyone who gets into the convoy business will have a lot of legal work ahead of them. At the moment, the technology isn't legal in any market in the world, thanks to the close following distances involved. For a convoy of remote-controlled vehicles, those kinds of distances are safe, but our human-oriented laws still consider them out of bounds. Changing the laws in Europe would mean reopening the internationally ratified Vienna Convention on Road Traffic (passed in 1968), as well as pushing through a hodgepodge of national and regional laws. As Google proved, it's not impossible, but it leaves SARTRE facing an unusually tricky adoption problem. Lawmakers won't care about the project unless they think consumers really want it, but it's hard to get consumers excited about a product that doesn't exist yet. Projects like this usually rely on a core of early adopters to demonstrate their usefulness — a hard enough task, as most startups can tell you — but in this case, SARTRE has to bring auto regulators along for the ride. Optimistically, Volvo told us it expects the technology to be ready "towards the end of this decade," but that may depend entirely on how quickly the law moves.
The less optimistic prediction is that it never arrives at all. Steve Shladover is the program manager of mobility at California's PATH program, where they've been trying to make convoy technology happen for 25 years, lured by the prospect of fitting three times as many cars on the freeway. They were showing off a working version as early as 1997 (powered by a single Pentium processor), before falling into the same gap between prototype and final product. "It's a solvable problem once people can see the benefits," he told The Verge, "but I think a lot of the current activity is wildly optimistic in terms of what can be achieved."
When I asked him when we'd see a self-driving car, Shladover told me what he says at the many auto conferences he's been to: "I don't expect to see the fully-automated, autonomous vehicle out on the road in the lifetime of anyone in this room."
Many of Google's planned features may simply never be legal. One difficult feature is the "come pick me up" button that Larry Page has pushed as a solution to parking congestion. Instead of wasting energy and space on urban parking lots, why not have cars drop us off and then drive themselves to park somewhere more remote, like an automated valet?
It's a genuinely good idea, and one Google seems passionate about, but it's extremely difficult to square with most vehicle codes. The Geneva Convention on Road Traffic (1949) requires that drivers "shall at all times be able to control their vehicles," and provisions against reckless driving usually require "the conscious and intentional operation of a motor vehicle." Some of that is simple semantics, but other concerns are harder to dismiss. After a crash, drivers are legally obligated to stop and help the injured — a difficult task if there's no one in the car. As a result, most experts predict the law will require a person in the car at all times, ready to take over if the automatic system fails. If they're right, the self-parking car may never be legal.
It doesn't stop there. In November, Smith wrote a 100-page document detailing the legal challenges self-driving cars would face, aptly titled "Automated Vehicles Are Probably Legal in the United States." Many of his recommendations are small legislative tweaks, like adapting the term "driver" to include computers without conventional eyes or ears. But once the "driver" can be a computer, Smith puzzles over what to make of a phrase like "the driver shall exercise due care." The laws are full of terms like "prudent" and "reasonable" that make sense for humans but become frustratingly vague once you try to convert them to code. Even something as simple as a speed limit fluctuates according to road conditions and human assessments of risk: "65 mph" usually means 70, unless it's pouring rain, in which case it means 50. As long as there's a human at the wheel, we ask them to know the law and use good judgment — but it's hard to say what that means for an automated system.
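To see how awkward that conversion gets, here's a deliberately naive attempt to encode the speed-limit example above as a rule. Every threshold is invented, which is precisely the problem: the statutes never say where the lines are.

```python
# A deliberately naive attempt to codify "reasonable and prudent" speed
# on a 65 mph road. Every threshold here is an invented guess; the law
# specifies none of them.

def target_speed_mph(posted_limit, rain_inches_per_hour, visibility_feet):
    speed = posted_limit + 5              # the tolerance drivers assume
    if rain_inches_per_hour > 0.3:        # "pouring" -- but is 0.3 pouring?
        speed = min(speed, 50)
    if visibility_feet < 500:             # fog -- and why 500, not 400?
        speed = min(speed, 40)
    return speed

print(target_speed_mph(65, 0.0, 5000))   # 70: the customary dry-road speed
print(target_speed_mph(65, 0.5, 1000))   # 50: the rainy-day figure above
```

No legislature has blessed numbers like these, and that's the point Smith keeps circling: the judgment the statute asks for doesn't reduce to thresholds.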
However dangerous an automated car might be, human drivers are almost certainly worse. Last year, 32,367 people died in auto collisions, an average of 88 each day. Google's promise is to bring that number to zero. That's an ambitious, important goal, potentially the greatest leap in auto safety in 40 years, and it's easy to understand Google's frustration at seeing it tied up in 50-year-old laws and the international equivalent of the DMV.
In the meantime, Google is stuck in the same limbo as SARTRE, trying to show off a developing product without breaking any laws. Even the safety claims have come under fire recently, given how new the technology is. As Shladover points out, human drivers average one fatal crash for every three million hours of driving, which is not as bad as it sounds: building a computer that goes three million hours between failures is a difficult task. Google's driver-bots have logged only a tiny fraction of that, so it may be too early to brag about their safety record.
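The arithmetic behind that caution is simple. Converting Google's mileage into hours requires assuming an average speed (the figure below is a guess, not from the article), but the conclusion barely depends on it:

```python
# How many fatal crashes would the human baseline predict over
# Google's logged mileage? The average speed is my own assumption.
hours_per_fatal_crash = 3_000_000   # Shladover's figure for human drivers
miles_logged = 300_000              # Google's milestone this August
avg_speed_mph = 30                  # assumed mix of city and highway driving

hours_logged = miles_logged / avg_speed_mph      # 10,000 hours
print(hours_logged / hours_per_fatal_crash)      # ~0.003 expected fatal crashes
```

In other words, a spotless record over 300,000 miles is exactly what the human baseline would predict anyway; it doesn't yet demonstrate that the robots are safer.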
BMW is testing a "highly automated driving mode" on German highways.

In the meantime, what we get are incremental changes. Want a car that stays in its highway lane automatically? Easy. Want a car that knows how to parallel park? It's already here. But the minute you ask the car to drive for you, you're asking it to take over a set of moral and legal responsibilities that, for now, only a human being can shoulder. Without deep changes to the way we think about driving, that may be impossible.