
Self-driving cars are headed toward an AI roadblock

Skeptics say full autonomy could be farther away than the industry admits


If you believe the CEOs, a fully autonomous car could be only months away. In 2015, Elon Musk predicted a fully autonomous Tesla by 2018; so did Google. Delphi and Mobileye’s Level 4 system is currently slated for 2019, the same year nuTonomy plans to deploy thousands of driverless taxis on the streets of Singapore. GM will put a fully autonomous car into production in 2019, with no steering wheel or ability for drivers to intervene. There’s real money behind these predictions, bets made on the assumption that the software will be able to catch up to the hype.

On its face, full autonomy seems closer than ever. Waymo is already testing cars on limited-but-public roads in Arizona. Tesla and a host of other imitators already sell a limited form of Autopilot, counting on drivers to intervene if anything unexpected happens. There have been a few crashes, some deadly, but as long as the systems keep improving, the logic goes, we can’t be that far from not having to intervene at all.

But the dream of a fully autonomous car may be further away than we realize. There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents. As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.” That delay could have disastrous consequences for companies banking on self-driving technology, putting full autonomy out of reach for an entire generation.

“Driverless cars are like a scientific experiment where we don’t know the answer”

It’s easy to see why car companies are optimistic about autonomy. Over the past ten years, deep learning — a method that uses layered machine-learning algorithms to extract structured information from massive data sets — has driven almost unthinkable progress in AI and the tech industry. It powers Google Search, the Facebook News Feed, conversational speech-to-text algorithms, and champion Go-playing systems. Outside the internet, we use deep learning to detect earthquakes, predict heart disease, and flag suspicious behavior on a camera feed, along with countless other innovations that would have been impossible otherwise.

But deep learning requires massive amounts of training data to work properly, incorporating nearly every scenario the algorithm will encounter. Systems like Google Images, for instance, are great at recognizing animals as long as they have training data to show them what each animal looks like. Marcus describes this kind of task as “interpolation,” taking a survey of all the images labeled “ocelot” and deciding whether the new picture belongs in the group.

Engineers can get creative about where the data comes from and how it’s structured, but the need for exhaustive training data places a hard limit on how far a given algorithm can reach. The same algorithm can’t recognize an ocelot unless it’s seen thousands of pictures of an ocelot — even if it’s seen pictures of housecats and jaguars, and knows ocelots are somewhere in between. Making that kind of inference, called “generalization,” requires a different set of skills.
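The gap Marcus describes is easy to see in a toy setting. The sketch below uses made-up two-dimensional “embeddings” rather than real image data, so the numbers and labels are illustrative only: a classifier trained on housecats and jaguars has no way to answer “ocelot,” however neatly the new example falls between the two.

```python
# A toy illustration of "interpolation" versus generalization, using made-up
# embeddings rather than real images: the classifier can only choose among
# labels it has already seen, no matter how obvious the in-between category is.
import numpy as np

train = {
    "housecat": np.array([1.0, 0.0]),
    "jaguar":   np.array([0.0, 1.0]),
}

def classify(embedding):
    # Pick whichever known label is closest; there is no way to answer "ocelot"
    # unless "ocelot" examples were in the training data.
    return min(train, key=lambda label: np.linalg.norm(train[label] - embedding))

print(classify(np.array([0.5, 0.5])))  # an "ocelot"-like point still comes back as housecat or jaguar
```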

For a long time, researchers thought they could improve generalization skills with the right algorithms, but recent research has shown that conventional deep learning is even worse at generalizing than we thought. One study found that conventional deep learning systems have a hard time even generalizing across different frames of a video, labeling the same polar bear as a baboon, mongoose, or weasel depending on minor shifts in the background. With each classification based on hundreds of factors in aggregate, even small changes to pictures can completely change the system’s judgment, something other researchers have taken advantage of in adversarial data sets.
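Those adversarial data sets typically exploit exactly this sensitivity. The sketch below shows the fast gradient sign method, one widely published way of building such examples; the model, image, and label here are hypothetical stand-ins, and real attacks involve far more care than these few lines.

```python
# A minimal sketch of the fast gradient sign method (FGSM) for building an
# adversarial example; `model`, `image`, and `label` are hypothetical inputs.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A tiny, structured change to the input can completely flip the predicted class.
    return (image + epsilon * image.grad.sign()).detach()
```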

Marcus points to the chat bot craze as the most recent example of hype running up against the generalization problem. “We were promised chat bots in 2015,” he says, “but they’re not any good because it’s not just a matter of collecting data.” When you’re talking to a person online, you don’t just want them to rehash earlier conversations. You want them to respond to what you’re saying, drawing on broader conversational skills to produce a response that’s unique to you. Deep learning just couldn’t make that kind of chat bot. Once the initial hype faded, companies lost faith in their chat bot projects, and there are very few still in active development.

That leaves Tesla and other autonomy companies with a scary question: Will self-driving cars keep getting better, like image search, voice recognition, and the other AI success stories? Or will they run into the generalization problem like chat bots? Is autonomy an interpolation problem or a generalization problem? How unpredictable is driving, really?

It may be too early to know. “Driverless cars are like a scientific experiment where we don’t know the answer,” Marcus says. We’ve never been able to automate driving at this level before, so we don’t know what kind of task it is. To the extent that it’s about identifying familiar objects and following rules, existing technologies should be up to the task. But Marcus worries that driving well in accident-prone scenarios may be more complicated than the industry wants to admit. “To the extent that surprising new things happen, it’s not a good thing for deep learning.”

“Safety isn’t just about the quality of the AI technology”

The experimental data we have comes from public accident reports, each of which offers some unusual wrinkle. A fatal 2016 crash saw a Model S drive full speed into the side of a white tractor trailer, confused by the high ride height of the trailer and the bright reflection of the sun. In March, a self-driving Uber crash killed a woman pushing a bicycle, after she crossed the road outside a crosswalk. According to the NTSB report, Uber’s software misidentified the woman as an unknown object, then a vehicle, then finally as a bicycle, updating its projections each time. In a California crash, a Model X steered toward a barrier and sped up in the moments before impact, for reasons that remain unclear.

Each accident seems like an edge case, the kind of thing engineers couldn’t be expected to predict in advance. But nearly every car accident involves some sort of unforeseen circumstance, and without the power to generalize, self-driving cars will have to confront each of these scenarios as if for the first time. The result would be a string of fluky accidents that don’t get less common or less dangerous as time goes on. For skeptics, a look through the manual disengagement reports suggests that scenario is already under way, with progress reaching a plateau.

Andrew Ng — a former Baidu executive, Drive.AI board member, and one of the industry’s most prominent boosters — argues the problem is less about building a perfect driving system than training bystanders to anticipate self-driving behavior. In other words, we can make roads safe for the cars instead of the other way around. As an example of an unpredictable case, I asked him whether he thought modern systems could handle a pedestrian on a pogo stick, even if they had never seen one before. “I think many AV teams could handle a pogo stick user in pedestrian crosswalk,” Ng told me. “Having said that, bouncing on a pogo stick in the middle of a highway would be really dangerous.”

“Rather than building AI to solve the pogo stick problem, we should partner with the government to ask people to be lawful and considerate,” he said. “Safety isn’t just about the quality of the AI technology.”

“This is not an easily isolated problem”

Deep learning isn’t the only AI technique, and companies are already exploring alternatives. Though techniques are closely guarded within the industry (just look at Waymo’s recent lawsuit against Uber), many companies have shifted to rule-based AI, an older technique that lets engineers hard-code specific behaviors or logic into an otherwise self-directed system. It doesn’t have the same capacity to write its own behaviors just by studying data, which is what makes deep learning so exciting, but it lets companies avoid some of deep learning’s limitations. Still, with the basic tasks of perception profoundly shaped by deep learning techniques, it’s hard to say how successfully engineers can quarantine potential errors.
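In broad strokes, the rule-based layer sits on top of whatever the learned components propose. The sketch below is an illustrative simplification under assumed function names and thresholds, not any company’s actual stack: hand-written rules that engineers can audit override the data-driven plan in specific, pre-defined situations.

```python
# A minimal sketch of layering hard-coded rules over a learned planner, in the
# spirit of the rule-based approach described above. Function names, fields,
# and thresholds are illustrative assumptions, not any real system's API.
def plan_action(perception, learned_planner):
    action = learned_planner(perception)           # self-directed, data-driven proposal
    # Hard-coded safety rules that engineers can audit and reason about.
    if perception.get("pedestrian_ahead") and perception.get("distance_m", 1e9) < 15:
        return {"type": "brake", "strength": 1.0}   # always brake, regardless of the model
    if perception.get("confidence", 1.0) < 0.5:
        return {"type": "slow_and_alert_driver"}    # fall back when perception is ambiguous
    return action
```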

Ann Miura-Ko, a venture capitalist who sits on the board of Lyft, says she thinks part of the problem is the high expectations placed on autonomous cars themselves, with anything short of full autonomy classified as a failure. “To expect them to go from zero to level five is a mismatch in expectations more than a failure of technology,” Miura-Ko says. “I see all these micro-improvements as extraordinary features on the journey towards full autonomy.”

Still, it’s not clear how long self-driving cars can stay in their current limbo. Semi-autonomous products like Tesla’s Autopilot are smart enough to handle most situations, but require human intervention if anything too unpredictable happens. When something does go wrong, it’s hard to know whether the car or the driver is to blame. For some critics, that hybrid is arguably less safe than a human driver, even if the errors are hard to blame entirely on the machine. One study by the Rand Corporation estimated that self-driving cars would have to drive 275 million miles without a fatality to prove they were as safe as human drivers. The first death linked to Tesla’s Autopilot came roughly 130 million miles into the project, well short of the mark.
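The Rand figure can be sanity-checked with a back-of-the-envelope calculation. The sketch below assumes a human fatality rate of roughly 1.09 deaths per 100 million vehicle miles and a 95 percent confidence threshold, which is one common reading of the analysis rather than Rand’s published method.

```python
# A back-of-the-envelope version of the Rand-style estimate mentioned above.
# The exact rate and confidence level are assumptions for illustration.
import math

human_fatality_rate = 1.09 / 100_000_000   # fatalities per mile (approx. US average)
confidence = 0.95

# Miles of fatality-free driving needed to show, at this confidence level,
# that the fleet's fatality rate is no worse than the human rate.
miles_needed = -math.log(1 - confidence) / human_fatality_rate
print(f"{miles_needed / 1e6:.0f} million miles")   # ~275 million miles
```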

But with deep learning sitting at the heart of how cars perceive objects and decide how to respond, improving the accident rate may be harder than it looks. “This is not an easily isolated problem,” says Duke professor Mary Cummings, pointing to an Uber crash that killed a pedestrian earlier this year. “The perception-decision cycle is often linked, as in the case of the pedestrian death. A decision was made to do nothing based on ambiguity in perception, and the emergency braking was turned off because it got too many false alarms from the sensor.”
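One way to picture that coupling: if emergency braking is gated behind a confidence threshold to keep false alarms down, then the very ambiguity that confuses the classifier also suppresses the response. The sketch below is a deliberate simplification for illustration, not a description of Uber’s software.

```python
# An illustrative sketch of how a perception-decision pipeline can fail quietly:
# ambiguous detections never clear the confidence bar that triggers braking.
def decide(detection_confidence, threshold=0.9):
    if detection_confidence < threshold:
        return "do_nothing"        # object stays "unknown"; no class, no trajectory, no braking
    return "emergency_brake"

# The same ambiguity that confuses the classifier also suppresses the response.
print(decide(0.55))  # -> "do_nothing"
```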

That crash ended with Uber pausing its self-driving efforts for the summer, an ominous sign for other companies planning rollouts. Across the industry, companies are racing for more data to solve the problem, assuming the company with the most miles will build the strongest system. But where companies see a data problem, Marcus sees something much harder to solve. “They’re just using the techniques that they have in the hopes that it will work,” Marcus says. “They’re leaning on the big data because that’s the crutch that they have, but there’s no proof that ever gets you to the level of precision that we need.”

Correction: This piece originally described Andrew Ng as a founder of Drive.AI. In fact, he sits on the company’s board. The Verge regrets the error.
