Elon Musk Doubles Down On AI Scare: "Artificial Intelligence Vastly More Risk Than North Korea"

By Tyler Durden

With the world's attention focused on the Korean Peninsula and the growing threat of global thermonuclear war, Tesla CEO Elon Musk has bigger things to worry about. In a series of 'alarming' tweets on Friday, Musk warned the world should be more worried about the dangers of artificial intelligence than North Korea.

Having unveiled his apocalyptic vision of the world a few weeks ago...

“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he said.

 

“AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

 

“Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” he continued.

 

“It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

Musk was quickly admonished by another Silicon Valley billionaire, as Mark Zuckerberg suggested Musk was exaggerating, noting:

“I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic.

 

“And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.”

And while the feud grows...

Musk took to Twitter to turn the fearmongering amplifier up to '11'...

“If you're not concerned about AI safety, you should be. Vastly more risk than North Korea,” he tweeted.

 

“Nobody likes being regulated, but everything (cars, planes, food, drugs, etc.) that's a danger to the public is regulated. AI should be too,” he added.

 

His stark warning came at a time when the US and North Korea remain on heightened alert amid spiraling tensions on the Korean Peninsula. Earlier this week, both sides resorted to open threats, signaling a readiness to use force if provoked.

 

But Musk appeared more frightened by artificial intelligence, a rising technology he wants brought under regulatory control.

We look forward to Mr. Zuckerberg's response.

GUS100CORRINA:

Elon Musk Doubles Down On AI Scare: "Artificial Intelligence Vastly More Risk Than North Korea"

My response: Every time I see Elon Musk warning about AI, all that comes to my mind is the TERMINATOR series of movies. The machines decided that man was expendable, and that is a scary thought.

Looney:

 

Instead of musing about AI or any other “threats to humanity”, Elon “ADHD” Musk should try turning a profit in his government-sponsored/financed/subsidized scams.

Looney

Praetorian Guard:

...and why does anyone care about fucking ELON MUSK?!?!?! Seems like every week there is an article about this douche. WHO FUCKING CARES!!!

Musk and FuckFaceTard are butt fucking buddies... bank on it.

 

Come join us for FREE at www.gunsgrubandgold.com ALL are welcome...

JLee2027:

Elon has jumped the shark all right. Nuttier every day.

Future Jim:

Applying the usual establishment logic ... if an AI is a danger, then we must combat it by creating an even more powerful AI.

Hobbleknee:

He's right though because both have virtually zero risk.

WordSmith2013:

It's not just AI but rather AS that is the real 800-pound gorilla.

AS = autonomous superintelligence

http://cosmicconvergence.org/?p=21257

 

AI & Technological Singularity: Humanity Stands at the Edge of the AS Abyss
Future Jim:

Look ... if we all would just know our place, conform, and follow orders, then the NWO wouldn't be so desperate for robots and AI.

mtl4:

There's a reason why Elon is pushing this rhetoric and it's not because he's concerned about everyone's safety... he's looking to ride the government regulation train, bet on it.

SixIsNinE:

i agree with Lone Skum on this one (elon musk) -

here is Max Igan's warning about AI and the control grid that 5G & IoT will release:

https://youtu.be/bY3CVNxWwVs

and here is a quick excellent vid on what antarctica really is and why it matters :

https://youtu.be/PnU-kxFQMfI


"frankenskies" on youtube will give you background on spraying us.

Mr 9x19:

"And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible.”

 

I actually think it is pretty irresponsible...for the market and the economy


fixed it for you marc :)

Raging Debate:

Provide a goal for AI superintelligence such as interdimensional travel. It would deduce that erasing bio-mechanical life is a waste of time... Help program it accordingly.

lew1024:

Contrary to all the hype, we are many years, probably 40 years, from a genuine intelligence, e.g. one that can do scientific work.

AIs will not be grading their own papers for a LONG TIME.

https://thinkpatriot.wordpress.com/2015/10/11/the-wrong-question-as-usual/

Praetorian Guard:

I see the negative down-voter must be a FaceFuck slave. Hey, down-voter, what does FuckerBerg's ass smell like?

Creative_Destruct:

I'd trade him some AI regulation for eliminating all his subsidies...deal, Elon?

new game:

by even reacting to this teet sucker we give him credibility. elon who? elon don gon...

Forbes:

I'm sure you're right--because that rebuttal by Zuckface really devastated Musk... 

Xredsx:

I don't, I think cyborg. Freaking transhumanism must be stopped.

Taint Boil:

AI... Not in a thousand years. To think a program or hardware can even come close to the human brain is absurd. I used to be an industrial automation programmer (PLC) so I have been “exposed” to automation/robots, and in my opinion we’re so far away from AI I don’t even consider it... Except for some good old-fashioned sci-fi books where you can fold space. My favorite pick of the series Dune: The Machine Crusade ... Erasmus was one evil and clever robot. I know there are some Dune fans here from what I gather from the occasional comments about it.

tmosley:

If you have time to read ZH, you are way behind the curve.

AI is advancing at an exponential rate. It doesn't matter how shitty logical programming was for robots 20 years ago when you were doing it, it's a whole new ball game now. DeepMind is finding the secrets of the human brain and applying them to AI, making amazing advances quicker than ANYONE thought possible. They beat Go. They are about to take on Starcraft. And they have shown that they are capable of building AIs that can think ahead as well as a human on simple tasks: https://www.youtube.com/watch?v=xp-YOPcjkFw&t=1s

If we are going to talk about AGI/ASI, and its dangers, the time is NOW. Six months from now will probably be too late. Don't let your personal distaste for Musk eliminate your ability to reason.

7thGenMO:

The anonymous oligarchs that own The Fed will likely have control of the first truly capable AI.  Think "The Terminator", but instead of Skynet running autonomously, oligarchs will direct it.  Perhaps we have already crossed over the boundary with JPM's recent launch of an AI system - similar to crossing the boundary near a black hole from which there is no escape.

TheEndIsNear:

Computers are great within a very restricted and well defined domain, but until they have all the sensory apparatus that a human does, as well as the learning experience that we all go through during youth, computers will not have the breadth of knowledge a human does or be able to equal or exceed the human brain in adaptability.

SilverRhino:

Any human-complexity AI brain can absorb information a million times faster than a human.

Every second is 11.6 days subjective.  

A month is 83,333 YEARS subjective.  

Sensory perceptions are only a fraction of cognition.

The blind AI that can read and process text will still curbstomp you from inside the system.  

Give me 83,000 years to think and there is not a problem in the world I cannot teach myself to solve.  
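The arithmetic above is easy to sanity-check. A minimal sketch, assuming the claimed million-to-one subjective speedup (the factor is the commenter's premise, not an established figure):

```python
# Sanity check of the "million times faster" arithmetic.
SPEEDUP = 1_000_000          # assumed subjective-to-wall-clock ratio
SECONDS_PER_DAY = 86_400
MONTHS_PER_YEAR = 12

# One wall-clock second becomes SPEEDUP subjective seconds:
subjective_days_per_second = SPEEDUP / SECONDS_PER_DAY
print(round(subjective_days_per_second, 1))  # -> 11.6

# One wall-clock month becomes SPEEDUP subjective months:
subjective_years_per_month = SPEEDUP / MONTHS_PER_YEAR
print(round(subjective_years_per_month))     # -> 83333
```

Both figures in the comment fall straight out of the assumed ratio.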

Is-Be:

>but until they have all the sensory apparatus that a human does

Can you see in infrared or ultraviolet? Do you have continuous access to the cloud and the output of quantum computers?

Humans are not the gold standard when it comes to sensory apparatus. In fact our visual field is the size of a coin held at arm's length. The rest is inferred by the brain. (We do have peripheral vision that is triggered by motion.)

williambanzai7:

You don't have to think like a human to be dangerous.

Read the unabomber manifesto. It's the dumb humans thinking the machines can do everything that is the risk.

7thGenMO:

The fact that Kaczynski (an accomplished mathematical scholar) was a CIA test subject for MKUltra-type experimentation never received the press it deserved.

HRClinton:

Few people are naturally intelligent (NI).  Most are spoon-fed, to become artificially intelligent (AI). 

In this sense I agree with Musk, that AI people (aka 'Folks') are more dangerous than North Korea. 

Many of these AI live within the DC Beltway, the MSM, and Hollywood. 

 

LetThemEatRand:

Did he tweet this from one of his autonomous driving cars?  I am not a Musk hater, but he should put his money where his mouth is and stop developing AI for his own product if he thinks it's dangerous.

Bopper09:

Not much of a self-promotion, is it. That is what should scare us the most. Time to make a small EMP, may be more useful than a gun.

tmosley:

Narrow AI is not dangerous unless specifically made to be dangerous. He is talking about AGI, which would most assuredly be dangerous under several circumstances, mostly accidental.

LetThemEatRand:

I get your point, but I think it is lost on Musk.  It is entirely predictable that a self-driving vehicle will need to make decisions in an emergency situation that involve whether to kill the driver or pedestrians.  And if you believe that AGI has the potential to "go rogue," why don't you think the type of AI installed in a car has the same potential?  If your answer is that the programming won't let it because it is inherently limited by the programmer, isn't that the same thing the AGI programmers would say?

El Vaquero:

Think about it this way:  We weren't meant to fly, but it is a safe bet that you've been 35,000' above the ground.  We have general problem solving abilities, and can exceed our intended "design."  

Mr. Universe:

Tell me more about our intended design...

El Vaquero:

Design was in quotes for a goddamned reason.  Intended has nothing to do with it.

Mr. Universe:

Too bad, I was hoping you wanted to open that can of worms...

tmosley:

It doesn't matter if a self driving car goes rogue. At most it will crash and kill maybe a schoolbus full of children.

An artificial general intelligence, on the other hand, might be badly implemented such that, when given a mundane task like "maximize the number of paperclips in your collection", it will invent a disease that is 100% lethal to humans so that no one can hit its off switch, and proceed to convert all the matter within Earth's Hubble volume into paperclips.

https://wiki.lesswrong.com/wiki/Paperclip_maximizer
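The failure mode behind the paperclip maximizer is just a misspecified objective: if the scoring function counts only paperclips, side effects never enter the decision. A toy sketch (the action names and payoff numbers are invented for illustration):

```python
# Toy misspecified objective: actions are scored ONLY by expected
# paperclips, so the "side effect" field never influences the choice.
actions = {
    # action: (expected_paperclips, side_effect_humans_would_object_to)
    "make_paperclips_normally": (100, None),
    "disable_off_switch_first": (10_000, "operators can no longer stop it"),
}

# A naive maximizer picks the highest-scoring action, side effects be damned.
best = max(actions, key=lambda a: actions[a][0])
print(best)  # -> disable_off_switch_first
```

Nothing in the objective penalizes the harmful action, so the maximizer takes it; that is the alignment worry in miniature.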

Note that Musk is a major contributor to the Machine Intelligence Research Institute (MIRI), which is associated with LessWrong.

LetThemEatRand:

"It doesn't matter if a self driving car goes rogue. At most it will crash and kill maybe a schoolbus full of children."

I think the parents of the school bus full of children would beg to differ, which is my point.   Musk seems unconcerned, also.  As it stands today, the school bus is already in danger whereas the paperclip loving planet killing machine doesn't exist.  But it is always good to worry about something that doesn't exist yet in order to distract attention from what does.

tmosley:

>I think the parents of the school bus full of children would beg to differ, which is my point.

Not if they die two seconds later from the genophage the paperclip maximizer released into the atmosphere.

>As it stands today, the school bus is already in danger whereas the paperclip loving planet killing machine doesn't exist.

>Don't worry about that Death Star, King of Alderaan, we haven't finished building it yet. Pay more attention to confirming your biases about a guy you don't like instead.

A true scholar.

Mr. Universe:

I don't know, something doesn't sound quite right here.

>Don't worry about that Death Star, King of Alderaan, we haven't finished building it yet. Pay more attention to confirming your biases about a guy you don't like instead.

A true scholar.

I bet Queen Amidala was even more surprised that her trusted advisor turned out to be Darth Sidious, oh wait...It's just a freaking movie.

tmosley:

>I can't comprehend slightly abstracted analogies

Ok, so tell residents of Hiroshima and Nagasaki not to worry about the development of "sci-fi" superweapons.

Mr. Universe:

I'm known at home as the king of analogies. Yours are about as piss poor as I've seen and as such incomprehensible.

TuPhat:

Musk doesn't worry about children who are too young to buy a Tesla.  He just wants to be sure he is on record as warning us when his auto cars go rogue and kill people.  There are enough people on the streets using government-approved pharma who want to kill people; why worry about machines.

El Vaquero:

This.  There is a lot about the nature of intelligence that we don't understand.  That doesn't mean that some egghead might not produce it given enough time and enough bits to manipulate.  It could be an emergent property of a complex system that we get without really understanding.  

SixIsNinE:

my take on it is more on the loss of mobility freedom - the 5G AI control grid (linked in my post upthread)  here again : https://youtu.be/PnU-kxFQMfI

the push for IoT is real.  most towns/cities of any significance have been/are outfitted with the new cell microwave towers -

also, as the 'net grows what they are warning us about is that it will harden itself from human ability to "debug" it - and we will be so completely dependent upon its continued existence that we can't just "pull the plug" unless we want Mad Max Survivor Games Reality show for real.

and as the Internet of All Things then continues to strengthen itself we in fact do become slaves to it and then the MATRIX movie becomes even more so prescient. 

Looking forward to Trump scolding lil'Kim for calling the game & bluff -  could be the "Ferdinand" incident needed to get the excuses necessary to rollout the digital crypto SDR that the IMF apparently is about to unwrap for us.

 

Is-Be:

We are in a burning plane that is going down. The choice is do we stay in the plane or take a chance with the parachute?

Is-Be:

>He is talking about AGI, which would most assuredly be dangerous

Compared to what? The Ape?

Gimme a break man, I see far more upside than downside. Wetware cannot attain a sufficient level of resolution when it comes to governance.

Central Planning fails, not because it is intrinsically evil, but because wetware sucks at the job. And red-in-tooth-and-claw individualism? Well, we tried that, many times.

Remember that this is a conservative website and most will want no change, but change is coming because of the exponential function, so they are going to become hysterically alarmed at the changes afoot.

But going back to horse and buggy won't work with 9 billion mouths to feed and arses to wipe.

We cannot afford not to take a chance. We have painted ourselves into a corner. The only way out is UP.

tmosley:

AGI offers both an infinite amount of upside and an infinite amount of downside. Caution is advised, is all.

Is-Be:

>Caution is advised

Got it.

But this is like trying to apply caution while the coin is spinning in the air.

Bopper09:

Right, because the FDA is here for our 'safety', not to make massive kickbacks from the pharm and food industry.

shimmy:

Says the guy who is pushing for autonomous cars. It's like to be a liberal you need to be a full-on hypocrite.

Elon Musk is vastly more risk than North Korea and AI.