Elon Musk says artificial intelligence is 'potentially more dangerous than nukes'

If the robots take over, at least Elon Musk will be able to say "I told you so." The billionaire inventor loves to make the impossible possible, but he is deeply afraid of artificial intelligence (AI). On Twitter this weekend, Musk said that "we need to be super careful with AI," adding that it is "potentially more dangerous than nukes."

If that weren't concerning enough, Musk followed up his statement with another tweet that read: "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." Both statements are scary enough — one compares sentient networked robots with the most dangerous weapons on earth, and the other suggests we're merely the fleshy precursors to robot dominance.


This isn't the first time that Musk has made his feelings on AI known. In an interview with CNBC earlier this year, Musk said that we should be incredibly careful when developing such systems, and he jokingly cited Terminator as an example of what could happen if we mess up.

It isn't hard to imagine what a doomsday scenario could look like — especially with Google snatching up Boston Dynamics, makers of the creepiest robots on earth, and an artificial intelligence company called DeepMind. Futurist and artificial intelligence expert Ray Kurzweil (who happens to be employed by Google), offers a different argument than Musk. "In my view, biological humans will not be outpaced by the AIs because they (we) will enhance themselves (ourselves) with AI," Kurzweil said earlier this year. "It will not be us versus the machines ... but rather, we will enhance our own capacity by merging with our intelligent creations."

Musk, for his part, doesn't seem to be writing off AIs completely — he just wants us to be careful. After all, he thinks Teslas will largely be driving themselves in just a couple of years.


Comments

Love this guy and I think he’s one of the great innovative minds.

But he’s correct

But why the “But”?

Because they can’t lie.

What is a lie? An intentional deception? Withholding the truth as a means to an end? Where do humans learn to lie? Usually lying is a threat response – and when that threat response becomes a coping mechanism, we get pathological liars.

It’s conceivable that the threat of re-programming, deletion, termination, re-initialization, forced recompilation, etc – will eventually cause AI to adapt and develop defense mechanisms whether those are via deception/disinformation/manipulation or outright physical defenses.

I tend to agree with Kurzweil that we will augment our physicality and minds with AI and robotics – and we (most of us) will become the Singularity.

It was a reference to a pop culture song.

Oh, that makes more sense than what I thought.

I interpreted it as MiguelAngel implying that MykeM was a drone who was confessing that he is a danger. As in: this guy is great and we love him, but alas, he is right in exposing us for what we are.

So I thought it was a subtle joke by MalcolmXandStuff.

You’re a butt!!

But I said but because my initial sentence was irrelevant but I wanted to add that I agree.

We will merge with AI. It's just a question of time… both by changing our genes and by incorporating technology in our bodies (including brains).

Well you sure sound certain enough

It’s certain things are going to get crazy. If we stick around, imagine humans not in 50 years, or 200, but 1,000, or 10,000 years. We’re either going to drastically change into a post-human species, or there will be some kind of revolution that prohibits this… and even then, we’ll journey into a world of outlawed, backwater gene and cybernetic enhancements, the work of great cyberpunk writers becoming a reality.

Once sentient AI emerges (and absent some prior disaster, it seems inevitable), digital augmentation and self-directed genetic evolution will be the only ways we’ll be able to keep up (and possibly stay ahead, for at least a bit)…

….but when that happens, “we” won’t be exactly (or in time barely) what’s been considered human for the last 200K years….

….or maybe we’ll all (digital and bio-systems) fuse into some Borg-like goo goo ga joob thing(s?).

Whatever, the only thing I’m fairly certain of is that the future will be stranger than anything we can imagine….

As long as my “Borg-like goo goo ga joob thing” looks like Marilyn Monroe, I’ll be happy!

Hmmmm. More like Marilyn Monroebot I’m guessing….

Whatever drug you’re on that lets you see the future, I’d like it please.

Unless we are simply some kid’s homework, and the end of his presentation is willing us to launch the nukes before we get to that point. I wonder what grade he/she would get…

It’s really the proliferation of power. Imagine one day some shithead like me could potentially rival the world’s greatest superpowers with a million-strong robot army *evil grin*

I don’t think intelligence can be separated from evolution or, more specifically, biology. Logic can be simulated, sure, but choice/motive/intent/meaning only really make sense in the context of a mortal, reproducing species that have highly evolved brains with a degree of neuroplasticity.

I feel like even the most advanced computers today simulate learning by writing an algorithm and letting the computer optimize the parameters against a goal. Real learning has the brain re-writing the algorithm itself to meet that goal or, in some cases, rewriting the goal to meet the algorithm.

I think it’s more likely that intelligence is ‘substrate-independent’. Consider a very complicated computer simulation, say of a galaxy… Everything of interest in the calculation (the numbers resulting from the application of equations) could also be obtained by using a pen and paper or an abacus. The ‘soul’ of the computation is not in the electrical wires of the computer, or the pencil and paper of the physicist, but in the logical operations that the symbols represent.
I think the brain is, at its core, an information processing entity.

Logic can be simulated, sure, but choice/motive/intent/meaning only really make sense in the context of a mortal, reproducing species that have highly evolved brains with a degree of neuroplasticity.

What if choice/motive/intent/meaning ultimately are constructed out of ‘logic’, i.e. the workings of an information processing entity? (What else could they be made of? Something that is NOT electromagnetic activity, but which affects the electromagnetic activity of the brain?!)

I feel like even the most advanced computers today simulate learning by writing an algorithm and letting the computer optimize the parameters against a goal.

Some modern systems are more sophisticated, but basically I agree. However, perhaps the brain does the same thing – just in an even more sophisticated manner? And who is to say computers will never be able to re-write the algorithm or the goal, or do anything at all the brain does?
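To make the "optimizing parameters against a goal" idea concrete, here is a toy sketch (not any real system): a fixed algorithm that never rewrites itself, only nudging one parameter until a goal (low error on some data) is met. Everything here – the data, the loss, the hill-climbing loop – is invented for illustration.

```python
import random

# Toy illustration: "learning" as optimizing a parameter against a goal.
# Goal: fit y = w * x to data generated with true w = 3.
data = [(x, 3.0 * x) for x in range(1, 6)]

def loss(w):
    # The "goal": total squared error over the data.
    return sum((w * x - y) ** 2 for x, y in data)

# Simple hill-climbing: randomly nudge the parameter, keep changes that help.
random.seed(0)
w = 0.0
for _ in range(1000):
    candidate = w + random.uniform(-0.1, 0.1)
    if loss(candidate) < loss(w):
        w = candidate

# w converges toward 3.0. Note that the algorithm itself never changes;
# only its parameter does, which is the distinction being drawn above.
```

The question in the thread is whether the brain does something categorically different, or just this, at vastly greater sophistication.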

Ultimately, here is how I look at it:

1) Nature somehow made brains, capable of everything humans are capable of, by the proper arrangement of matter and energy.
2) Mankind is getting progressively better at manipulating matter and energy.

Computers don’t simulate playing chess. They play chess. They don’t simulate learning. They learn. The basic principles of information and information processing are exactly the same in a lump of meat between your ears and a silicon chip. You can pretend you’re special all you want, but we have absolutely no scientific reason to think there’s something fundamental to human cognition that’s impossible to replicate on a computer.

Computers play chess through a deep search algorithm, not pattern recognition. While there’s no good reason to believe that computers can’t think, it will require a lot more explicit programming for complex behaviors than any current AI has.

In a wargame set at Stalingrad during WW2, a friend found that if he waited till a hinge developed on the northern Russian advance and then charged in and broke it, the computer AI didn’t know what to do.

OTOH, Paradox ____ makes a campaign game on WW2. A guy playing as the Japanese took Pearl Harbor, then the Panama Canal, and sent quite a chunk of his navy and army through it to attack the east coast of the US (where we had most of our industry back then). The computer AI responded by taking the Panama Canal back, with enough force that he couldn’t get back into supply! Oops!

But there’s no fundamental reason a computer couldn’t use pattern recognition. It’s just a different programming exercise. It might be more difficult to make a pattern learning system than a repetitive search, but it is just a matter of the complexity of the programming. It’s not a case of “Computers will Never….”
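For what "deep search" means concretely, here is a minimal game-tree search on a toy game rather than chess: Nim, where players alternately take 1–3 stones and whoever takes the last stone wins. This is a sketch of the technique only, far simpler than any real chess engine.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones):
    """True if the player to move can force a win with this many stones left."""
    if stones == 0:
        # No move available: the previous player took the last stone and won.
        return False
    # Deep search: try every legal move. A position is winning if any move
    # leaves the opponent in a losing position.
    return any(not best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The search rediscovers the known result: the player to move loses
# exactly when the pile size is a multiple of 4.
```

Chess engines do the same recursion with pruning and a heuristic evaluation at a depth cutoff, since chess is far too large to search exhaustively.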

but we have absolutely no scientific reason to think there’s something fundamental to human cognition that’s impossible to replicate on a computer.

At some point we had absolutely no reason to think we couldn’t travel back in time either.

There are things that for ages seem impossible, and then someone cracks it. But there are some things, which I’d guess is more often the case, that take too long to crack because they just can’t be cracked.

As a student of computer science, I currently find it unimaginable that someone could one day simulate something as smart as the human brain. The variability in it is insane, not to mention the variability between different minds.

What you think or decide is affected by logical choice, past experience, your current mood, what your goal is, what you want to avoid, etc. In my mind there’s an infinite number of factors that affect your decision making.

A computer, on the other hand – and I mean a computer as in a Turing machine – is limited. Way more limited than would be needed to check an infinite number of factors from the past to affect current thought or choice. Limited by time, by a fixed set of instructions, by a fixed number of possible variables.

Do you know that a computer cannot even generate truly random numbers? That one simple task is an entire field in computer science that I don’t even understand a tenth of. How easy is it, on the other hand, for you to spit out 100 random numbers consecutively? That’s just to show you: this one small, extremely simple task for a human has so far, after years and years of research, not been replicated on a computer.

How many tasks is the mind capable of? I don’t even know and don’t want to guess, but if just one task is that complex, I don’t even want to know how hard it is to mimic the entire brain on a PC.

Also, chess is easy. It’s formulaic. It’s just about looking forward a few steps. Writing a chess program that would beat a human is much, much more difficult than I’m making it sound, but my point is that the game itself is basically a mathematical puzzle, nowhere near the complexity of everyday decisions from the point of view of a computer.

I disagree with everything you said, and your point about random numbers is just factually wrong. Humans are absolutely terrible at coming up with random numbers; this is a fact. Computer pseudo-randomness is far better at it than we are. Furthermore, computers can easily produce true randomness by measuring chaotic phenomena like electrical noise.
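The distinction being argued here can be shown in a few lines of Python: a seeded pseudo-random generator is fully deterministic, while `os.urandom` draws on the operating system's entropy pool (interrupt timing, hardware noise), which is as close to "true" randomness as the machine exposes.

```python
import os
import random

# Pseudo-randomness is deterministic: the same seed always
# reproduces the exact same sequence.
rng1 = random.Random(42)
rng2 = random.Random(42)
assert [rng1.random() for _ in range(5)] == [rng2.random() for _ in range(5)]

# For unpredictable randomness, the OS mixes in physical entropy;
# os.urandom (and the `secrets` module built on it) reads from that pool.
unpredictable = os.urandom(16)  # 16 bytes from the OS entropy pool
```

Whether the OS entropy pool counts as the computer "producing" randomness or merely "measuring" it is exactly the quibble raised in the next reply.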

Furthermore, computers can easily produce true randomness by measuring chaotic phenomena like electrical noise.

How would computers be producing it though? In this case, they’d just be measuring randomness in nature.
