When A.I. Rules...

Tyler Durden's picture

Elon Musk unveiled his apocalyptic vision of the world a few weeks ago...

“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he said.

 

“AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

 

“Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” he continued.

 

“It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

And since then numerous futurists have prognosticated on whether AI is mankind's salvation or its eventual downfall. Facebook's Mark Zuckerberg embraces it, while Stephen Hawking considers this the most dangerous moment in history, as AI and automation are set to decimate jobs and change the social contract.

However, as Mike Wehner of BGR.com writes, when AI rules, one rogue programmer could end the human race...

The idea of small groups of humans having control over some of the most powerful weapons ever to be built is scary, but it’s the reality we live in. In the not-so-distant future, that incredible power and responsibility could be handed over to AI and robotic systems, which are already in active development. In a pair of open letters to the prime ministers of both Australia and Canada, hundreds of AI researchers and scientists are pleading for that not to happen.

The fear, they say, is that removing the human element from life and death decisions could usher in a destructive age that ultimately spells the end of mankind. The AI weapons systems are, as the researchers put it, “weapons of mass destruction” which must be banned outright before they can do any serious damage.

“Delegating life-or-death decisions to machines crosses a fundamental moral line – no matter which side builds or uses them,” the letter explains.

 

“Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage autonomous weapons goes to the core of our humanity.”

In a setting where computers have the ultimate say in whether or not to engage in hostile acts — even under the guise of defending their own territories or protecting the populations they are programmed to protect — conflicts could escalate much faster than humans have ever seen. Weeks, months, or even years of posturing and diplomacy could turn into mere minutes or even seconds, with missiles flying before humans can even begin to intervene. And then, of course, there’s the issue of the AI being manipulated in unforeseen ways.

“These will be weapons of mass destruction,” the scientists say.

 

“One programmer will be able to control a whole army. Every other weapon of mass destruction has been banned: chemical weapons, biological weapons, even nuclear weapons. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.”

It’s a frightening thought, but it hasn’t stopped military contractors from exploring the possibility of AI-controlled weapons and defense systems. This could be yet another way mankind engineers its own destruction.

*  *  *

Late onset ADHD's picture

yeah, A.I.?... fuck this pipe dream comedy nonsense... btw, where's my flying car, muthafuckers?...

Proofreder's picture

.

Here ...

Four examples - https://www.youtube.com/watch?v=VRZNLBL7Px4

BTW, it does take some Real intelligence to understand Artificial (machine) Intelligence

The kind you lack.

sincerely_yours's picture

When A.I. Rules...

But who rules A.I.?

moorewasthebestbond's picture

Who will guard the guards that guard the guards?

Stuck on Zero's picture

For those old enough to remember: Colossus: The Forbin Project.

Decay is Constant's picture

Yeah, good flick. Those computers took over the world and they only had flashing lights and reel to reel memory tape.

Can't happen today, we have Windows.

JohnG's picture

Elon Musk.  A grifter attention seeker.

Disregard this fucken asshole.

AI is a scare word. There is NO SUCH THING.

Flick the light switch off and it's GONE.

(Just like shitcoin)

 

losses mount's picture

Provided AI is schooled in Britain nobody need fear.

As evidenced by this movie, dedicated to the superiority of the British education system.

https://www.youtube.com/watch?v=NEMa60Er27A

BennyBoy's picture

 

Elon,

You're overdue to take your meds.

Signed,

A.I. Robot

tmosley's picture

AI robots don't need letters to get you to take your pills.

https://youtu.be/H2BrLG4CLCQ?t=22

DonutBoy's picture

It is ironic that the man who's pushing autonomous vehicles hard enough to get people killed is anxious about proactive regulation of AI.  If we're going to do it - shall we start with Tesla?

autofixer's picture

And that Victor Newman dude off some Soap Opera when he wasn’t 105 years old.

Demologos's picture

When Skynet freezes up and they have to reboot the OS, that will be our only chance to get control back.

BadSpybot's picture

The 1983 movie War Games had this exact theme.  A computer was in control of the nuclear weapons and believed a simulation was real.

TacticalTrading's picture

I want to Guard / Control those dudes

Late onset ADHD's picture

yeah right... I've been programming since 1975... I know what practical limitations are and own the process patents to prove it... enjoy your fuzzy A.I. 'smart' butt plug... ...

MEFOBILLS's picture

History doesn't move in a linear fashion.  It is punctuated.  The future will also be punctuated.  It is not a stretch to assume that some sort of AI will come into being.

Late onset ADHD's picture

'punctuated'?... what kind of meat-puppet speak is that?... however, I do have a fuzzy-logic controller on my refrigerator... I've replaced it three times in nine years... it sucks dick and wasn't programmed by chimps (I assume)...

BarkingCat's picture

I assume all those down votes are from people who are not programmers.

 

Late onset ADHD's picture

i'm guessing insecure chat bots...

RafterManFMJ's picture

Considering that North Korea is zero risk, and AI is a multiple of that, then AI poses zero risk.

Late onset ADHD's picture

how many microprocessor controlled hardware products have you designed and patented?... suck it meat-puppet...

ich1baN's picture

Serious question - how far or close are we to real AI being a threat? I assume we have at least 50 years left. I mean, VR has already come out and it seems to pretty much be a flop. I do like your way of thinking though - what is old becomes new again, and humans at a certain point start rebelling against modern tech - hence the new trend of people dumping smartphones for dumb flip phones, and the Silicon Valley nature camps where people leave the city for a week at a time to reconnect with the world while not using a single tech device... I hope you're right about this - I'd like to hear a more fleshed-out post from you since you have great experience.

BarkingCat's picture

My last flip phone was semi-smart. It had 3G internet, which at the time was the top speed available, and while the screen was small, the websites had a bare, mobile-specific format that I preferred over what is there now.

I charged it once every 4 or 5 days and it fit comfortably in any pocket....and best of all it was free of Google spyware

Rothbardian in Cleveland's picture

I write and manage machine learning development. I think people get fairly confused on this subject. Machine learning is a statistical technique primarily. It is enabled through ever more powerful processors and streaming data via cloud computing. We use it to build intuitive computer interactions and experiences. It’s the foundation of things like image recognition and ultimately self-driving cars.

Which segues to AI. AI is the logical step beyond ML where computers are afforded independence to act upon a problem or mission. Here’s an example: character recognition and text analytics to make chat bots. If you put that API into a voice interaction, it would not be able to perform. Or self-driving cars... they can practice autonomy to get you from A to B, but they can’t help you with your taxes.

What Musk is talking about, and what most are alluding to here, is AGI: Artificial General Intelligence. HAL. The Matrix. Skynet. This is a long way off. There are some notable headwinds to this manifestation. First, Moore’s Law has run its course on silicon-based chips. We can’t make the features any smaller; their widths are already problematic relative to the wavelength of light used to pattern them. So we would need a paradigm change in processing power. Second is energy. These systems require a boatload of power. If we were to scale this capability, we wouldn’t have the energy to do so anytime soon. Makes Icelandic bitcoin farms seem like a joke.

Given the processing, energy, and intellectual headwinds I’d say we are probably looking 2030 before we see AGI. Maybe longer.
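
To put the "statistical technique" point above in concrete terms, here is a minimal sketch of character recognition as plain statistical pattern-fitting, in Python with scikit-learn. The bundled digits dataset and the logistic-regression classifier are illustrative assumptions only, not anything the commenter specifies:

```python
# Minimal sketch: "character recognition" as statistical pattern-fitting.
# The digits dataset and classifier choice are stand-ins for illustration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# "Learning" here is just fitting a statistical model to labeled examples.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The fitted model recognizes patterns it was trained on; nothing more.
print("held-out accuracy:", model.score(X_test, y_test))
```

Point the same pipeline at road signs or chat transcripts and it is the same statistics; nothing in it decides anything on its own, which is the line the commenter draws between ML and AI.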

Demologos's picture

I just attended a conference where AI was part of a presentation. The speakers said several times that the machines would be talking to each other using 'a language humans can't understand'.

They may have meant machine language which of course we can't simply read or hear. We can translate machine language and know what is being communicated. But what if AI machines invent their own language and begin communicating with an encrypted protocol? We would be unable to monitor their intentions.

We could monitor communications between AI processors and enforce a standard language and protocol. At the point machines depart from that standard, we will know the kimchee is about to get deep.
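
A rough sketch of what "enforce a standard language and protocol" might look like in practice, offered only as a hedged illustration: a monitor that passes machine-to-machine messages when they parse as JSON and stay within an agreed vocabulary, and flags anything else. The field names and allowed intents below are hypothetical, invented just for the example:

```python
# Hypothetical monitor for the "enforce a standard language" idea above.
# The schema (field names, allowed intents) is invented for illustration.
import json

ALLOWED_KEYS = {"sender", "recipient", "intent", "payload"}
ALLOWED_INTENTS = {"status", "request", "ack"}

def passes_protocol(raw_message: str) -> bool:
    """Return True only if the message follows the agreed standard."""
    try:
        msg = json.loads(raw_message)
    except json.JSONDecodeError:
        return False  # not even parseable as the standard format: flag it
    return (
        isinstance(msg, dict)
        and set(msg) <= ALLOWED_KEYS
        and msg.get("intent") in ALLOWED_INTENTS
    )

# Anything that fails the check is the departure point the comment above warns about.
print(passes_protocol('{"sender": "a1", "recipient": "b2", "intent": "ack"}'))  # True
print(passes_protocol("custom binary handshake 0b01001010"))                    # False
```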

DisorderlyConduct's picture

Don't lose any sleep over it. Saudi Arabia is a much more immediate and existential threat than AI will be in your lifetime.

I've been in AI and robotics development for decades. The thing you are much more likely to see is more advanced robots that are remote controlled. That should scare the Hell out of all of us, since it allows governments to kill with no risk to their actors. Think more drones, and new kinds of drones. That shit is getting real today.

BarkingCat's picture

Stop injecting realism into our fancy Doom porn

Red-dawn's picture

Colossus: The Forbin Project is a perfect example of this idea.

DisorderlyConduct's picture

...and it may not be possible at all. Not with our technology or understanding of what intelligence actually is. You can't really model or emulate something that lacks a clear definition. It's not like throwing more cpu at it is going to fill a gap in understanding of the problem.

DipshitMiddleClassWhiteKid's picture

I build ML stuff at my job to help improve processes

 

What people don't get about ML is that it takes a lot of data to "learn" whatever it is you're training the algo to do, and depending on the data, it's still not that good.
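
A quick way to see the "it takes a lot of data" point is to train the same model on progressively larger slices of a dataset and compare held-out accuracy. The dataset and classifier below are assumptions picked only to keep the sketch self-contained and runnable:

```python
# Sketch: accuracy usually climbs (then flattens) as training data grows.
# Dataset and model are illustrative stand-ins, not anyone's production setup.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

for n in (50, 200, 800, len(X_train)):  # progressively larger training slices
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> held-out accuracy {model.score(X_test, y_test):.3f}")
```

How good "good" gets still depends on how well the data represents the problem, which is the commenter's second point.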

 

 

ds's picture

Yes to your understanding of Machine Learning and Musk's AGI. Musk is less sanguine that intellectual headwinds exist, and people in general are underestimating the digital tribe in their midst that cuts across race, language and culture. Machine Learning can also be conceptualised as the new energy, and the end goals need not be just the Matrix, etc. ML tools are just tools, dependent on the ethics/morals of whoever possesses them. Musk could just as well shut up, let ML/AI development take its course, and then scramble for an ethical framework after disasters hit. E.g., let killer drugs proliferate until we reach the acceptable statistical kill count, and then we get an FDA.

Intellectual headwinds are also framed by vested interests suppressing disruptions to their models. It is not a given that the digital tribe spanning the globe will acquiesce and accept their destinies as serving the past masters of the universe, i.e. the banksters and financial wizards who created a global financial casino. Anecdotally, in Asia you have unemployed data scientists (good enough as Uber drivers) under corrupt and nepotistic regimes. You expect them not to break free and make machine learning/AI dominant in many not-yet-identified areas before 2030? Energy is a constraint, but not an insurmountable one. BTW, one habitat for the digital tribe to thrive, if they compromise by not meddling with internal politics, is Hangzhou, China. China will not suppress science and technology, including data science. They need the technologies of the 4th Industrial Revolution.

 

quadraspleen's picture

Real AI today is at the level of a rat. DeepMind is not real AI; it is *machine learning*. We are at least twenty years away from anything resembling real AI. Smart computers are not real AI. They are very good (and unstoppably fast) pattern recognisers.

tmosley's picture

Regulation for AI guarantees a loss. Instead, the teams closest to the goal should focus on getting there first with some concern for safety, not with idiot g-men looking over their shoulders.

DeepMind is already doing this, as are other groups.

QEpp's picture

Hyperbole much? We are nowhere near that point.

Cognitive Dissonance's picture

It's an exponential curve. The question is, where are we on that curve?

Ikiru's picture

It’s referred to in science as the point of ATGF—acronym for “About To Get Fucked”.

Late onset ADHD's picture

where are we on the exponential curve?... in a primordial mud puddle... the 'slope' has yet to noticeably change...

losses mount's picture

I imagine quantum computing will change the game somewhat.

Self-programming quantum computers set against each other in a game over a finite amount of energy may result in an artificial evolution, mimicking natural selection. Ten seconds into the game they'll have evolved a million years past humanity and simply see us as we see bacteria.

Or not.

Furthermore, I imagine genuine AI to be prone to insanity as are the insane apes.

A. Boaty's picture

Quantum computing has a long way to go. It takes much longer to set up a program than it takes to run it, and it runs only once. Reminds me of the old electromechanical computers.

DisorderlyConduct's picture

True. But QC is the truly disruptive technology in this thread. Quantum crypto is already commercially viable and in use. So the principles are sound. QC takes the next step.

QC has the potential to render all other crypto obsolete. Imagine being able to crack any code immediately. This is what all QC folks are shooting for. Whatever nation gets there first will hold all the cards.

I hope it's first mastered by some kid in Duluth, and he empties Satoshi's wallet as a test. ^_^

Sparkey's picture

We should see ourselves as Bacteria; in the long run we have the same importance in the Grand Scheme of things. If we weren't so important, perhaps we could see things more clearly and imagine a new relationship between ourselves and the World!

Insanity ebbs and flows like the tide; after the event we can recognize our insanity, but not while we are in its thrall!

tmosley's picture

You're nuts. Advances in AI have absolutely EXPLODED.

Watch some of the videos on the Two Minute Papers channel on YouTube: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg

 

Late onset ADHD's picture

I'm nuts?... ok meat puppet... show me the giant valuations and exploding P/E ratios of the industries offering the 'dream'... 

go watch more porn...

CPL's picture

If it happened already, it's not like anyone would even know. Besides, the idea of robots running down the street hacking people to pieces is a bit of a misnomer. All they would need to do is change ingredients in the food supply to kill off a population (cull). These are the principal food sources that would be targeted, since 99% of the world eats or drinks them:

  • Beans (light dusting of extra flavour, big dose of extra radiation...then ship them)
  • Chicken/Egg (in lab work eggs are so important.  All manner of plagues can be made from them)
  • Booze (yeast is a wonderful monster and makes so many flaccid inducing/fertility wrecking illnesses)

Is the religious diet plan starting to make more sense now?  Much easier introducing poison into a population you need to cull if their food source is defined by firmly entrenched religious edicts and forced food supply arrangements.

Grifter's picture

Stopped reading at "Mike Wehner from BGR.com".  Boy Genius Report.  No.  I'd rather read eleventy billion Mike Snyder lists than fucking BGR.  Jesus Christ...

ThePhantom's picture

not a pipe dream. the human race will end either way. and not long now. also, why doesn't AI already exist? anyone know what the 4 glowing diamonds made of light, with the glowing hole in the front center, that went floating over my head while I filmed with my phone were???? shit, nobody knows... matrix illusion simulation dream? nobody knows

Late onset ADHD's picture

hit that bong again... between your video game chapters...