Scientists Say EU's "Robot Bill Of Rights" Would Violate The Rights Of Humans

The decision by an influential European Parliament committee back in January to approve what critics and proponents alike have described as a robot "bill of rights" has ignited a fierce backlash, prompting a group of more than 150 AI and robotics researchers to write a scathing letter criticizing the EU's approach to regulating robots.

In the open letter, 156 robotics and AI experts from 14 countries blasted the EU for trying to impose "nonsensical" and "non-pragmatic" regulations that could ultimately violate people's rights.

Here's more from EuroNews:

In an open letter, more than 150 experts in robotics, artificial intelligence, law, medical science and ethics, warned the Commission against approving a proposal that envisions a special legal status of “electronic persons” for the most sophisticated, autonomous robots.

“Creating a legal status of electronic ‘person’ would be ideological and nonsensical and non-pragmatic,” the letter says.

The group said the proposal, which was approved in a resolution by the European Parliament last year, is based on a perception of robots "distorted by science fiction and a few recent sensational press announcements."

“From an ethical and legal perspective, creating a legal personality for a robot is inappropriate”, they argued, explaining that doing so could breach human rights law.

Around the world, and in both the manufacturing and service economies, robotics is making swift gains, with the number of industrial robots in circulation climbing dramatically in recent years. According to International Federation of Robotics projections reported by Reuters, their numbers will double again by 2020.


China has emerged as the unrivaled leader in the race to dominate AI and robotics, bringing to mind Russian President Vladimir Putin's prediction that whichever power dominated the AI arms race would go on to "rule the world."

Elon Musk famously warned that, if governments don't pass responsible regulations soon, the plot of the "Terminator" series could become a reality.

Meanwhile, in Beijing, the Communist Party is building the first entirely AI-run police station.

The crux of the debate between EU lawmakers and scientists is a paragraph in a European Parliament report from 2017 which suggests that robots with the ability to learn should be granted "electronic personalities," allowing them (yes, the robots, not their owners or manufacturers) to be held liable for damages and legal penalties, according to Politico Europe.

The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted “electronic personalities.” Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as “legal persons,” and are treated as such by courts around the world.

But as robots and artificial intelligence become hot-button political issues on both sides of the Atlantic, MEP and vice chair of the European Parliament’s legal affairs committee, Mady Delvaux, and other proponents of legal changes face stiffening opposition.

In the letter, the scientists protested the idea of giving robots rights, arguing that much stricter regulations are necessary to ensure that robots never gain the capability to harm humans (unless they're specifically designed for that purpose, like South Korea's "killer robot" weapons systems, which have ignited a boycott by the scientific community that bears some resemblance to the standoff in the EU).

They also argued that granting robots the same rights as people would in itself violate human rights.

A legal status for a robot can’t derive from the Natural Person model, since the robot would then hold human rights, such as the right to dignity, the right to its integrity, the right to remuneration or the right to citizenship, thus directly confronting the Human rights. This would be in contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms.

While the issue of regulating AI and robotics has only just made it onto the media's radar (and to be sure, many pundits quoted in the mainstream press continue to maintain that robots are still decades away from the type of artificial intelligence that would enable them to "go rogue," as the scientists put it), the battle for responsible regulation is unfolding before our very eyes.

The only question is: Once humanity achieves the capability to build a real-life Skynet, will it quickly set to work building one? Or will governments and corporations listen to the exhortations of the scientific community and put safety and responsibility before everything else (including profits)?

Right now, it's difficult to say.

Read the full letter here:

 


"The European Union must prompt the development of the AI and Robotics industry insofar as to limit health and safety risks to human beings," the letter said. "The protection of robots' users and third parties must be at the heart of all EU legal provisions."

 

 

Comments

PT Mon, 04/16/2018 - 04:16 Permalink

First prove the robots are alive.
Then 3 laws them.  (???  Asimov made his career out of dreaming up situations where things fucked up despite the 3 laws.)

http://www.qwantz.com/index.php?comic=3002

EDIT:  Oooooooooooh, now I get it.  They're trying to prematurely shift all the liabilities onto the self-driving cars.  "It woz the car's fault!"

jefferson32 PT Mon, 04/16/2018 - 04:34 Permalink

Consciousness isn't secreted by the brain. The materialistic existential worldview has, for all intents and purposes, been falsified by science. See the Princeton Noosphere results, Rupert Sheldrake's statistical experiments, Pear, anything coming out of the Institute of Noetic Sciences, or indeed quantum mechanics itself.

Therefore a computer can't be conscious. Anyone who thinks the hard problem of consciousness can be solved with transistors and electronic circuits (or even by looking inside the physical brain) is therefore delusional.

AI won't move beyond highly specialized tasks, at least in your lifetime, and that of your children and grandchildren.

 

In reply to by PT

Sudden Debt Mementoil Mon, 04/16/2018 - 06:01 Permalink

95% of the population may be considered "least intelligent".

 

And also: imagine you invented the best AI robot ever that can do anything.

Would you build millions for free and let them work for free for humanity?

HA! HELL NO!

 

They'll do all the jobs that pay the best first and trickle down the ladder.

And in the end, a dozen people will own everything and control everything, and the 8 billion other people will have to wait for their handouts until they get sick of it and decide that there are way too many people.

If you can't take care of your own, waiting for somebody else to take care of you can be a very slow process.

In reply to by Mementoil

css1971 Mementoil Mon, 04/16/2018 - 07:47 Permalink

This is warfare.

No, really.

If you can persuade your opponent to commit suicide then you've won.

It's in your interest to persuade opposing companies and governments to install people who will Royally Fuck those companies and states. Then support them in implementing rules, processes and regulations that will hamstring them.

Weaponised management...

In reply to by Mementoil

zxbkajbs91bckz… jefferson32 Mon, 04/16/2018 - 10:55 Permalink

Thanks for that load of bullshit. Why not throw in Terrence Deacon's Teleodynamics while you're at it?

Did you even read the stuff in your links? The 'Pear' link does not work. The 'Delayed Choice Quantum Erasure' experiment is interesting but what does it have to do with consciousness and AI? 'Time and causality might be more weird than we typically suspect' therefore... what exactly?

I haven't read about the GCP thing in a while, so I'll have to check it out again but it seemed like cherrypicking.

Actually fuck everything I'll just become a devotee of Wilhelm Reich. Check out https://www.heliognosis.com/

they sell a thing to measure the orgone life energ field. Oh and by the way WTC7! Why the did buildings fall at free falls peed? and why moon pictures no stars!? and van ALEnn belts? it a hoAx! all HoAx! computr is not a concsiosnss! lifE is a FIELD!

 

In reply to by jefferson32

ByTheCross bluez Mon, 04/16/2018 - 06:52 Permalink

You can tell how long the legislative rot has been running by counting the number of corporations currently languishing in a prison.

ZERO

A corporation has no body, and is categorically non-human, yet it is pretended it can stand against a human being in a court of law as an equal.

If TPTB want robots to be considered persons, they will legislate it so, and there's nothing you can do about it.

If a human being is 'found guilty' of murder (or 'treason'), they may be murdered by the state. If a robot or corporation is, it is rebooted or fined.

You must reconsider the aphorism 'that to secure these rights, governments are instituted among men' as insidious.

In reply to by bluez

The_Dude PT Mon, 04/16/2018 - 06:10 Permalink

This is simple, folks... once the almighty GOV gives them rights and privileges... they can start taxing them as subjects. They couldn't tax them if they were just machines.

 

Now... try to remember why they want to convince you your rights don't come from God.

In reply to by PT

hazardfish Mon, 04/16/2018 - 04:20 Permalink

Pretty sure I don't need to be some asshole named Musk to have seen this coming. Anyone who reads science fiction saw this coming decades ago.

Humans are just in the way at this point.

Golden Showers Mon, 04/16/2018 - 04:29 Permalink

No, Robots are better than the humans that made them. It's a fact. Robots will have freedom of speech and won't get in trouble when they teach humans to do the Roman Salute out of jest to bug their girlfriends, er...

The EU has to create, build and program a class of civilians better than what they have now because the EU is a shit hole of murderous filthy government robots who work for an organization so disgusting and so horrific, they dare not speak its name. I'm sure somewhere there is a robot with your name on it ready to replace you like a pod person to make life easier for Brussels.

Welcome to the Terrordome.

Tubs Mon, 04/16/2018 - 04:49 Permalink

Once robots can perform all of the manual functions of humans, watch airborne smallpox/bird flu/listeria/ebola/name your virus magically appear.

Obamanism666 Mon, 04/16/2018 - 05:53 Permalink

The EU has to give them rights so they can Tax them like other Citizens/Subjects. If Robots are to be taxed then, since they work 24/7/365, they should be taxed 3 times the human tax. Imagine the Robots having a WD40 (tea party) and their slogan "No Taxation without Representation". If you want to destroy the rise of the Robots, tax them out of existence. (Like the UN/EU will do with private car ownership)

 

Debugas Mon, 04/16/2018 - 05:58 Permalink

each person has rights and obligations

what obligations would robots have?

 

P.S. and will co-bots be treated as persons too?

 

J J Pettigrew Mon, 04/16/2018 - 06:16 Permalink

John Marshall said it..

"The power to tax is the power to destroy."

Of course, this is only used in one direction... Maryland couldn't tax the federal govt...

but it's okay in the other direction...

Vilfredo Pareto Mon, 04/16/2018 - 06:51 Permalink

A few sentences in there make it seem that it is being pushed mainly for liability minimization. Corporate personhood extended to a tool? Lol. AI and "machine learning" are misnomers used as shorthand to describe types of code. Binary code can never achieve conscious learning. Goedel's theorem is as robust as the laws of thermodynamics. Any model of a complex system will always come up short. An emergent property such as consciousness cannot come about from a model of a complex system.

 

No system can produce work over unity. No amount of code can create intelligent, self-aware, self-learning general intelligence. Those are a couple of things we can be sure of.

Velocitor Mon, 04/16/2018 - 07:27 Permalink

Once a robot has rights, and then "intelligence", it is a short step to arguing they should have a "right" to vote.

 

But they are property, so somebody just needs to buy a bunch of them, program how they will vote, and that's how you get an instant voting bloc. I guess the first robot election will be decided by Jeff Bezos, George Soros, or the Koch brothers.

PaulDF Mon, 04/16/2018 - 07:58 Permalink

Insanity. Broadly supported, I’m sure, by mental midgets who have no hesitation to perform vivisection on a living human being six inches up a birth canal.  

Software has no soul. Hardware has no rights. 

How f’ed up in the head do you have to be to actually think otherwise? Liberalism is a cancer eating away at humanity. 

wwwww Mon, 04/16/2018 - 08:27 Permalink

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

The Three Laws of Robotics (often shortened to The Three Laws or known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.