Elon Musk says we need to regulate AI before it becomes a danger to humanity


Photo by Justin Sullivan/Getty Images

Elon Musk’s thoughts on artificial intelligence are pretty well known at this point. He famously compared work on AI to “summoning the demon,” and has warned time and time again that the technology poses an existential risk to humanity. At a gathering of US governors this weekend, he repeated these sentiments, but also stressed something he says is even more important: that governments need to start regulating AI now.

“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees at the National Governors Association summer meeting on Saturday. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

AI represents a “fundamental risk to the existence of civilization”

The solution, says Musk, is regulation: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.” He added that what he sees as the current model of regulation, in which governments step in only after “a whole bunch of bad things happen,” is inadequate for AI because the technology represents “a fundamental risk to the existence of civilization.”

As ever, Musk is not talking about the sort of artificial intelligence that companies like Google, Uber, and Microsoft currently use, but what is known as artificial general intelligence — some conscious, super-intelligent entity, like the sort you see in sci-fi movies. Musk, along with many AI researchers, believes that work on the former will eventually lead to the latter, but plenty of people in the scientific community doubt this will ever happen, let alone within our lifetimes.

What many researchers are worried about instead is how current forms of narrow and “stupid” artificial intelligence can be abused. David Ha, a researcher working with Google Brain, said on Twitter in response to Musk’s comments that he was “more concerned about” machine learning being used to “mask unethical human activities” than about the threat of super-intelligent AI.

François Chollet, the creator of the deep learning library Keras, replied that while artificial intelligence “makes a few existing threats worse,” it was unclear whether it created any new ones. “Arguably the greatest threat is mass population control via message targeting and propaganda bot armies. [Machine learning is] not a requirement though,” said Chollet.

These uses of AI are far less exciting than what Musk is discussing, but unlike the threat from Skynet, they pose real and immediate problems. Algorithms created by machine learning are already being deployed in a number of questionable areas in the US, including helping to sentence criminals. And researchers warn that the Trump administration’s lack of interest in AI (and science generally) is going to mean that many aspects of this emerging field won’t get the scrutiny they deserve.

In this light, Musk’s comments are at least bringing some attention to an under-examined topic. You can watch Musk’s interview in full below, with his remarks on AI starting at around 48 minutes in.