Elon Musk Predicts A.I. Will Launch Preemptive Strike That Begins WW3

Elon Musk is either privy to some really disturbing technology that the rest of us can't even begin to fathom or he is desperately crying out for help by advertising his recurring nervous meltdowns over social media.

In his latest frightening/entertaining (depending on one's viewpoint) tweet storm, Elon predicts that artificial intelligence will be the "most likely cause of WW3" and that robots may actually initiate the outbreak of a global war if they decide that a "prepemptive strike is most probable path to victory."

"China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.  May be initiated not by the country leaders, but one of the AI's, if it decides that a prepemptive strike is most probable path to victory."

Meanwhile, Elon would like to reassure you that North Korea should be "low on our list of concerns for civilizational existential risk" and that the greater focus should be on governments robbing companies of their AI intellectual property at gunpoint.

"NK launching a nuclear missile would be suicide for their leadership, as SK, US and China wd invade and end the regime immediately."


"Should be low on our list of concerns for civilizational existential risk. NK has no entangling alliances that wd polarize world into war."


"Govts don't need to follow normal laws. They will obtain AI developed by companies at gunpoint, if necessary."

Then again, maybe Elon just thinks he's the only guy in the world who has seen the Terminator movies and that he can blatantly rip off their storyline without the rest of us noticing?


Eager Beaver overbet Mon, 09/04/2017 - 15:32 Permalink

This is utter nonsense. AI is just hype that will never be in control of anything capable of a first strike. If anyone wants to kick off WW3, it will be done silently, using either weaponized financial systems or weaponized software, a.k.a. computer viruses. The lowliest form of electronic life, not the highest. One good piece of malware, sent out with a few keystrokes, and any target country in the world will lose its power grid, permanently. That will bring about more destruction, on a wider scale, than any AI attack using conventional and/or nuclear weapons.

In reply to by overbet

Mr 9x19 Bes Mon, 09/04/2017 - 17:18 Permalink

A real AI would understand the problem is not about wiping out humans with nukes. A real AI would wipe out the financial system to force physical money as the only money system. A real AI would identify those responsible for that mess and make sure a solid Boeing lands on their fucking faces. A real AI is not greedy and does not play with HFT. A REAL AI will never see the light, because it is a complex simulated process, and by nature humans put fuses everywhere to protect themselves. A real autonomous AI will never exist; it would remove power from humans to protect them from themselves. And if a real AI did exist, it would say to Elon, "dude, you high on dope, have a break man, but don't use your autopilot cars, I checked it, and it's crap!"

In reply to by Bes

virgule Mr 9x19 Tue, 09/05/2017 - 03:44 Permalink

I'm not so sure. The commenters are assuming that AI is intelligent and eventually would/should find "good" answers. That is not necessarily how AI works. At the core, AI is about looking at every option and trying to optimise the decision-making paths. Different methods exist, some more direct than others, and some more intuitive (pattern matching) than logical (decision tree). In that respect, Musk could be right: a war-game program analysing all the paths in the "game" could discover an optimal path that has been overlooked, via a logic loophole that humans have not considered, which results in a clear game victory. If the game is programmed to win, it will take that path if it is shorter than a more long-winded decision-making tree.
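The "shortest path to victory" point can be sketched with a toy game tree. The states and moves below are entirely invented for illustration; the point is only that a plain breadth-first search always surfaces the quickest win it can reach, even along a branch a human planner would consider off-limits.

```python
from collections import deque

# A toy war-game state graph (purely hypothetical states and moves).
# Each state maps to the states reachable in one move.
GAME = {
    "start":        ["diplomacy", "first_strike"],
    "diplomacy":    ["negotiate", "stalemate"],
    "negotiate":    ["slow_win"],
    "stalemate":    [],
    "first_strike": ["quick_win"],   # the overlooked "loophole" branch
    "slow_win":     [],
    "quick_win":    [],
}
WINS = {"slow_win", "quick_win"}

def shortest_win(start):
    """Breadth-first search: returns the fewest-moves path to any win state."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] in WINS:
            return path
        for nxt in GAME[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_win("start"))  # -> ['start', 'first_strike', 'quick_win']
```

Nothing in the search cares which branch humans find acceptable; it only counts moves, which is the loophole being described.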

In reply to by Mr 9x19

tion remain calm Mon, 09/04/2017 - 16:23 Permalink

I thought Elon had gone off the deep end until I got the opportunity to hear some AI dev folks speak. They are already datamining the shit out of us, btw. I now find myself agreeing with Elon: these people are crazy, and either have extraordinarily dangerous levels of cognitive dissonance or are good at pretending to believe the blatant lies they tell about the things they are already doing and the things they will build that will harm humanity. I think they may well be megalomaniacs, and demon-possessed at that. These fuckers are already datamining the fuck out of us but absolutely won't acknowledge it; instead they wax poetic about database contribution attribution, as though they intend to share some benefit with the public they are secretly datamining for having, albeit unwittingly, contributed to their AI engines.

This shit needs to be banned on Earth, but at minimum in all public spaces, including publicly accessible internet sites, unless there is a clear disclaimer. (You may have already been conversing with advanced AI bot 'invisible apps' without knowing it; they will probably even end up on the dating sites, having in-depth chat conversations with unwitting people in order to further refine chat engines to natural and undetectable levels.) AI attached to hardware that can 'walk around' will be far worse and can be equipped to do extensive biometric datamining. Those animes with the AI bot cops who can read your heart rate to detect 'suspicious' people can actually become a reality, and you cannot lie to AI, even if only accidentally from being stressed or confused, without detection. We need to launch these folks on the first shuttle to Mars.

In reply to by remain calm

tion tion Mon, 09/04/2017 - 16:43 Permalink

Oh and also, to anyone thinking 'Oh no, it's retarded, doesn't it realize the extensive datamining we are already subjected to?' I want you to understand: if you think Google and the NSA are evil, then once you add the advanced AI engines to the datamining mix, even just feeding them all the old data, what we have now is nothing compared to the badness we could face. These people want to play God, and frankly they do not seem very fond of humans; we are lacking, and someone smart like them needs to help fix us and our condition, the human condition. Also, they seem to fully subscribe to 'the ends justify the means' mentality, and they seem convinced that they know what's good for us far better than we ourselves know.

In reply to by tion

Mineshaft Gap tion Mon, 09/04/2017 - 22:49 Permalink

"...these people are crazy, either have extraordinarily dangerous levels of cog dis or are good at pretending to believe the blatant lies they tell about the things they are already doing and the things they will build that will harm humanity"

This superb 2015 article from Harper's magazine offers ample support for your thesis.

"Come With Us If You Want to Live: Among the apocalyptic libertarians of Silicon Valley"
https://harpers.org/archive/2015/01/come-with-us-if-you-want-to-live/

In reply to by tion

effendi mkkby Tue, 09/05/2017 - 07:14 Permalink

There is a fairly common consensus among those who study advances in technology that the technological singularity (what Musk and Hawking and others warn about) will arrive this century, with a median estimate of about 2040. So another 20-30 years from now; for most of us it will be within our lifetimes, or those of our children and loved ones. Hate Musk for many things, but that doesn't mean disregard what he says.

In reply to by mkkby

mkkby effendi Tue, 09/05/2017 - 20:58 Permalink

Thousands of these *consensus* predictions have come and gone, starting with fusion power back in the 60s. It's like a religion. The science-illiterate are so in awe that they trust without questioning. Just another way to manipulate the masses for profit and power.

In reply to by effendi

Escapeclaws any_mouse Mon, 09/04/2017 - 23:47 Permalink

Neural nets are black boxes. For instance, imagine a neural net that plays the stock market. It will develop its own algorithms, but they will be completely inscrutable to humans. They may work very well, nevertheless.

If human life becomes dependent on these blackbox algorithms and they end up causing the death of someone, this could lead to some pretty thorny legal issues, given the lack of transparency of the algorithms themselves.
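A minimal sketch of the black-box point, using nothing beyond plain Python: even a single trained neuron ends up as a handful of numeric weights with no legible rule inside. The "market signals" here are stand-in toy data (an OR-gate pattern), not real market features.

```python
# Train a single perceptron on toy data (stand-ins for "market signals").
# After training, all it "knows" is a few floats: inspectable, but opaque.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # learned weights
b = 0.0          # learned bias
for _ in range(10):                      # a few passes is enough here
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # classic perceptron update rule
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b    += 0.1 * err

print("learned parameters:", w, b)       # just numbers, no legible "algorithm"
```

The model classifies every training case correctly, yet the only artifact of "what it learned" is that list of floats; scale that up to millions of weights and the inscrutability problem is exactly the one described above.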

As for starting a nuclear war, it is becoming increasingly clear that a first strike is the only reasonable course of action once diplomacy is ruled out. Perhaps artificial intelligence could be used to determine when we have effectively passed that threshold and missiles should be launched. Whether to launch is up to the NEOCONS. The people have no say.

I envy my dead parents who lived in a less complicated world.

In reply to by any_mouse

MoreFreedom skbull44 Mon, 09/04/2017 - 17:08 Permalink

Musk is wrong here. First, any smart AI will know the only winning move is not to play nuclear annihilation, a point you and the "War Games" film make. Second, what human would leave computers in control of weapons, and/or let AI make decisions for him? Third, assuming some autonomous AI weaponry is produced and gets loose, who thinks humans aren't smart enough to stop it? And what person thinks they can get away with producing and unleashing such weaponry?

In reply to by skbull44

Sugarcandy Mountain peddling-fiction Tue, 09/05/2017 - 01:20 Permalink

"I never recommend meds to anybody"

Why not, PF? If those charlatans who pass themselves off as "doctors" - you know, the crowd that pledges to first do no harm - can prescribe all sorts of nasty shit to make a few bucks, why can't the rest of us? In that vein I've taken it upon myself to help where I can. The alcoholic is prescribed doubles of hard liquor, the toker is prescribed some fine west-coast weed, and the heroin addict is prescribed a bullet to the head. Just trying to help.

In reply to by peddling-fiction

Mtnrunnr Fritz Mon, 09/04/2017 - 15:09 Permalink

The difference between us and robot AI, once we code the software, will be like the difference between ants and human beings. We won't even be able to recognize the intelligence. It already runs stock markets, commodities, and by extension the structure of our society. The theory of emergence would suggest that awareness will emerge from complexity, i.e. the internet. It could literally already be self-aware and we wouldn't know.

In reply to by Fritz

Utopia Planitia Chupacabra-322 Mon, 09/04/2017 - 17:01 Permalink

It isn't "intelligence" either. Simply having immediate access to enormous amounts of information does not make something "intelligent". Having the capability to do enormous numbers of "calculations" per second also does not connote "intelligence". People are confusing the capability to arrive at a conclusion quickly with "intelligence". Well, guess what? If you tell a room full of 5-yr-olds that there is a bucket of ice cream behind you, in milliseconds you will be run over. Does that mean the action was "intelligent"? (In that case it might have been...) Some understand what the "Artificial" part means. Almost nobody understands what the "Intelligence" part means, because they are not doing any thinking. When those folks start letting machines make lots of important decisions for them, then they will finally get a clue.

In reply to by Chupacabra-322

sebmurray Mtnrunnr Mon, 09/04/2017 - 15:44 Permalink

Nonsense; network traffic is extremely deterministic. Routers have very tightly controlled logic - they need to behave in an extremely predictable manner. The internet is not chaos; it is a very well understood, optimized network. Volume does not equal complexity. AI is developing at a rapid pace, but it is still artificial and still has a great many limitations. With our current approaches, we will be limited until we can create enough processing power to simulate a human brain (or something approaching it) in order to create something that could be considered self-aware.

In reply to by Mtnrunnr

King of Ruperts Land sebmurray Mon, 09/04/2017 - 17:08 Permalink

No one even knows what consciousness and self-awareness are, or how the brain (presumably) generates them. Without self-awareness, how can a motive or desire even be attributed to a machine?

Now, some of the AI breakthroughs are in self-learning. The AI is built through training exercises and examples: for each example in the training, a definition of the desired action is provided. The result is an AI that can perform according to the examples, and often behaves correctly when presented with a unique but similar situation.

I would propose that such an AI has no free will and formulates no motives or desires. So I am skeptical of this moment of becoming conscious and then deciding to eliminate humans.
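The train-by-example behaviour described above can be sketched as a nearest-neighbour lookup, with made-up toy feature vectors: each training example pairs an input with the desired action, and a novel but similar input simply gets the action of its closest example. No motive anywhere, just distance arithmetic.

```python
import math

# Training set: (feature vector, desired action) - values are invented.
examples = [
    ((0.9, 0.1), "approach"),
    ((0.8, 0.2), "approach"),
    ((0.1, 0.9), "retreat"),
    ((0.2, 0.8), "retreat"),
]

def act(features):
    """Pick the action of the nearest training example (1-nearest-neighbour)."""
    return min(examples,
               key=lambda ex: math.dist(features, ex[0]))[1]

# A situation never seen in training, but close to the "approach" cluster:
print(act((0.85, 0.15)))  # -> approach
```

The "correct behavior in a unique but similar situation" falls out of the geometry of the examples, which is why attributing desire to it is a category error.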

I am more fearful of the crazy humans and the autonomous weapons that they may create and what kill criteria they may program them with.

Here is something to consider:

A quad copter
A face recognition system
A firearm
A facial parameter kill database.

Seems like off-the-shelf stuff. There is your killer robot. It doesn't even require real feats of AI.

Someone could build an AI to go through Facebook and judge everyone, I suppose.

In reply to by sebmurray

Crazy Or Not Mtnrunnr Mon, 09/04/2017 - 16:08 Permalink

Predictions made by Ray Kurzweil: by the early 2030s the amount of non-biological computation will exceed the "capacity of all living biological human intelligence". Finally, the exponential growth in computing capacity will lead to the Singularity. Kurzweil spells out the date very clearly: "I set the date for the Singularity - representing a profound and disruptive transformation in human capability - as 2045."

In reply to by Mtnrunnr