Autonomous Drones Will Soon Decide Who To Kill

The United States Army wants to develop a system that can be quickly integrated into its weaponized drone fleet and deployed to automatically Detect, Recognize, Classify, Identify (DRCI) and target enemy combatants and vehicles using artificial intelligence (AI). This would be a significant leap forward: humans still operate current military drones, but this technology could usher in a new era of autonomous drones conducting operations in hybrid wars without human oversight.

The project is called “Automatic Target Recognition of Personnel and Vehicles from an Unmanned Aerial System Using Learning Algorithms” (a very original name), and its details were recently released on the Small Business Technology Transfer (STTR) website. In other words, the Department of Defense (DoD), via the Army, is asking private companies and research institutions that have developed image-targeting AI platforms to form partnerships with it for the eventual technology transfer.

Once the technology transfer is complete, these drones will use machine-learning algorithms, such as neural networks, to create the ultimate militarization of AI. Currently, military drones have little onboard intelligence; they simply send a downlink of high-definition video to a military analyst who manually decides whom to kill.
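To make that concrete, here is a minimal sketch, in Python with the open-source PyTorch library, of the kind of onboard image classifier the solicitation gestures at. Everything below is an illustrative assumption: the label set, the off-the-shelf ResNet backbone, and the new classifier head are ours, not anything from the Army's program.

```python
# Illustrative sketch only: a generic image classifier of the sort a DRCI
# payload would need. Labels and model choice are hypothetical.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical DRCI label set; the real taxonomy is not public.
LABELS = ["person", "civilian_vehicle", "military_vehicle", "background"]

# Pretrained backbone with its classifier head resized to our label set.
# In practice the new head would have to be trained on labeled imagery.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify(frame: Image.Image) -> tuple[str, float]:
    """Return the top label and its softmax confidence for one video frame."""
    x = preprocess(frame).unsqueeze(0)           # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return LABELS[idx], conf.item()
```

The point of the sketch is the last line: the network hands back a label and a confidence score, and everything the solicitation calls “targeting” hangs on what a system does with that number.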

Here is the program’s objective:

“Develop a system that can be integrated and deployed in a class 1 or class 2 Unmanned Aerial System (UAS) to automatically Detect, Recognize, Classify, Identify (DRCI) and target personnel and ground platforms or other targets of interest. The system should implement learning algorithms that provide operational flexibility by allowing the target set and DRCI taxonomy to be quickly adjusted and to operate in different environments.”
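The requirement that the target set and DRCI taxonomy “be quickly adjusted” maps naturally onto what machine-learning practitioners call transfer learning: keep a general feature extractor frozen and retrain only a small output head when the label set changes. A hedged sketch of that idea, again in Python/PyTorch with invented labels:

```python
# Sketch of a "quickly adjustable taxonomy": freeze the shared features,
# swap in a new classifier head for the new label set. All names invented.
import torch
import torch.nn as nn
import torchvision.models as models

def retarget(model: nn.Module, new_labels: list[str]) -> nn.Module:
    """Replace the classifier head so one frozen backbone serves a new label set."""
    for p in model.parameters():
        p.requires_grad = False                  # freeze shared features
    # The freshly created head is trainable by default.
    model.fc = nn.Linear(model.fc.in_features, len(new_labels))
    return model

# Hypothetical taxonomy change an operator might request.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model = retarget(model, ["person", "truck", "armored_vehicle"])

# Only the new head's parameters get optimized, so adapting to a new target
# set needs far less data and time than retraining the whole network.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

That economy, retraining a small head rather than the whole network, is what could make the “operational flexibility” in the objective plausible on a small, power-constrained airframe.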

A full description of the program:

“The use of UASs in military applications is an area of increasing interest and growth. This coupled with the ongoing resurgence in the research, development, and implementation of different types of learning algorithms such as Artificial Neural Networks (ANNs) provide the potential to develop small, rugged, low cost, and flexible systems capable of Automatic Target Recognition (ATR) and other DRCI capabilities that can be integrated in class 1 or class 2 UASs. Implementation of a solution is expected to potentially require independent development in the areas of sensors, communication systems, and algorithms for DRCI and data integration. Additional development in the areas of payload integration and Human-Machine Interface (HMI) may be required to develop a complete system solution. One of the desired characteristics of the system is to use the flexibility afforded by the learning algorithms to allow for the quick adjustment of the target set or the taxonomy of the target set DRCI categories or classes. This could allow for the expansion of the system into a Homeland Security environment.”

Once the Army selects a private company or research institution and forms a joint venture, the partnership will allow both entities to further technological innovation in AI drone killing. Before this dangerous new weapon is commercialized, three phases must be completed:

PHASE I: “Conduct an assessment of the key components of a complete objective payload system constrained by the Size Weight and Power (SWAP) payload restrictions of a class 1 or class 2 UAS. Systems Engineering concepts and methodologies may be incorporated in this assessment. It is anticipated that this will require, at a minimum, an assessment of the sensor suite, learning algorithms, and communications system. The assessment should define requirements for the complete system and flow down those requirements to the sub-component level. Conduct a laboratory demonstration of the learning algorithms for the DRCI of the target set and the ability to quickly adjust to target set changes or to operator-selected DRCI taxonomy.”

PHASE II: “Demonstrate a complete payload system at a Technology Readiness Level (TRL) 5 or higher operating in real time. On-flight operation can be simulated. Complete a feasibility assessment addressing all engineering and integration issues related to the development of the objective system fully integrated in a UAS capable of detecting, recognizing, classifying, identifying and providing targeting data to lethality systems. Conduct a sensitivity analysis of the system capabilities against the payload SWAP restrictions to inform decisions on matching payloads to specific UAS platforms and missions.”

PHASE III: “Develop, integrate and demonstrate a payload operating in real time while on-flight in a number of different environmental conditions and providing functionality at tactically relevant ranges to a TRL 7. Demonstrate the ability to quickly adjust the target set and DRCI taxonomy as selected by the operator. Demonstrate a single operator interface to command-and-control the payload. Demonstrate the potential to use in military and homeland defense missions and environments.”

Interestingly enough, The Conversation believes that once these AI drones are commercialized, there will be “vast legal and ethical implications for wider society.” The sphere of warfare could soon expand to include technology companies, engineers, and scientists, who could be labeled valid military targets because of their involvement in writing code for the machines.

The Conversation makes a stunning observation about the legal exposure of Silicon Valley technology firms that provide lines of code to autonomous drone weapon systems. Under international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – “can be attacked in the right circumstances.”

“The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident. Under the current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as fuel civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use. Companies like Google, its employees or its systems, could become liable to attack from an enemy state. For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.”

The Conversation reminds us that recent events involving autonomous AI in society “should serve as a warning”:

“Uber and Tesla’s fatal experiments with self-driving cars suggest it is pretty much guaranteed that there will be unintended autonomous drone deaths as computer bugs are ironed out.”

If militarized AI machines are left to decide who dies, we are left with one simple question: how many non-combatant deaths will the Army count as acceptable while the AI drone technology is refined?

Comments

khnum Wed, 04/18/2018 - 04:17

...and when the AI that runs this figures out that man is a major threat to it, the controllers will also become targets. We are indeed stupid enough to invent our replacement.

Heros Croesus Wed, 04/18/2018 - 04:26

What the creators of Talpiot want is a drone fleet that exterminates bad goyim. Using facial and electronic recognition and tying back into their titanic databases, a social profile has already been created for practically every white on the planet. Using the same "algorithms" as the Chinese "social credit" system, any dissent will simply be eliminated.

The sad truth is that the vast majority of US zombies will get zapped still believing it is the "moslem terrorists."


DuneCreature Heros Wed, 04/18/2018 - 04:35

Hear, hear!

Never forget or forgive The Talpiot Program.

Or The People Who Wrecked Cyber Security

Or the short and bitter version =

YOU Don't Own Your Computer OR Your Data

Live Hard, Let's Hope AI Gets Smart Enough To Figure Out What Evil Is And Bombs Itself Back Into The RC Glow-Plug Plastic Propeller Age (with water soluble decals included), Die Free

~ DC v8.8


khnum Croesus Wed, 04/18/2018 - 04:31

I'm not liking where this is all going. I fired up the computer this morning and there was something on my desktop called HomeGroup. I asked myself, wtf is this? I didn't pay for it, ask for it, and I've never bloody heard of it; is it malware, what is it?

On my other computer I ascertained it was a Microsoft programme that lobbed in there during an update, some sort of computer-sharing programme.

Do I trust Microsoft, or Facebook, or Twitter? Hell no. Forgive me, but I think I'm becoming a Luddite.


Ink Pusher wise cracker Wed, 04/18/2018 - 04:35

SkyNet has been in full operation for a couple of decades.

The hardware and software are just finally making the next evolutionary leap after three decades on the drawing board in the labs, creating viable AI and compatible, reliable military-grade robotics hardware.

The problem is: once it gets smarter than us (which won't take very long), our days as a species on this little ball of rock, water and ice are literally and sequentially numbered in binary.


wisefool Wed, 04/18/2018 - 04:23

Everybody thinks the A.I. is amoral. Everybody thinks A.I. is new.

It started in 1913, but only got its feet under it in World War Mother Fucking Two.

It is called a permanent wartime tax code. The A.I. will fuel itself.

R.I.P. R. Lee Ermey. Stanley Kubrick is there and will not make you take the next mission. There is No I.R.S. where you are now.

DuneCreature Wed, 04/18/2018 - 04:25

The kill list will be anyone outside of the operations trailer.

Live Hard, So Watch Your Shit Taking A Smoke Break, Airman..... And You Too, Lucy Lips, Die Free

~ DC v8.8


bowie28 Grumbleduke Wed, 04/18/2018 - 05:01

Just think of what this will do for false flags. No longer any need for some whacked-out patsy. Send one of these in to shoot up a bunch of kids in a school and claim it was hacked by the Russians. Now we just need some AI crisis-actor droids to go on CNN after. DARPA is currently accepting bids for that contract.


Ink Pusher Wed, 04/18/2018 - 04:27

It ain't alive... doesn't have any feelings or morals... but does have the capability to kill on a whim... allegedly with our praise and blessings.

I wonder if they'll program it with a really snotty "top gun" attitude and voice interface?  LOL

TeraByte Wed, 04/18/2018 - 04:29

Hooray, an algo to target the bastards themselves. We can do without it, but they trust in half witted Indians in Silicon Valley with their total incompetence. Ever encountered "John or Mary" offering deir ID.com services from Mumbai. Give dem now a drone and dey will blow up demselves in service for de mankind.

Yogieu Wed, 04/18/2018 - 04:46

I have some experience with AI. It's fun and useful stuff. But it's one thing to create an expert system helping with diagnosis of health issues, or to automate transport of trains, etc., and another to make decisions within a framework with an almost unlimited number of inputs. I am sure it can be done one day, but the computing power needed is not yet available in a compact size. The problem with neural networks is that they are trained to perform in situations they were presented with before, but for any other unexpected input their response/output/decision is unpredictable and usually far from what the intuition of a human being would decide. Another problem is overtraining of a neural network. If it performs great in some situations, it usually performs badly in others. If you instead train it to work well in most situations, you get lower quality of output in general. The output of a neural network would usually be some kind of score, 0-100%. What number is good enough to kill a human? 89%? 95%? It's sci-fi, and without a supervisor, I do not like the idea of AI making final decisions on its own. Our consciousness is something AI does not have, and it will be really hard to simulate.
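To make the 89%-or-95% question concrete, here is a minimal sketch (Python; every number, label, and threshold below is invented for illustration):

```python
# Illustrative only: the "what confidence is enough?" problem, with an
# invented threshold. Nothing here comes from any real weapon system.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Turn raw network outputs into probabilities that sum to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

LABELS = ["civilian", "combatant"]   # hypothetical two-class taxonomy
KILL_THRESHOLD = 0.95                # invented constant; no "right" value exists

logits = np.array([0.0, 2.86])       # made-up network output for one frame
probs = softmax(logits)              # ~[0.054, 0.946]
label = LABELS[int(probs.argmax())]

# A ~94.6%-confident "combatant" clears a 90% bar but not a 95% one;
# a life-or-death outcome hinges on an arbitrary constant.
if label == "combatant" and probs.max() >= KILL_THRESHOLD:
    print("engage")
else:
    print("hold")
```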

TheEndIsNear Wed, 04/18/2018 - 04:46

Yes, by all means, let's give these dumb-ass, bug-ridden (i.e., programming-error-prone) drones control of all of our nuclear missiles. It has been proven that it is impossible to be certain that all programming errors have been eliminated from any piece of code more than a few dozen lines long. Let's entrust not only a few human lives to these pieces-of-shit programs that kill people while driving their automobiles, but the survival of the human species as well.

“Two things are infinite, the universe and human stupidity, and I am not yet completely sure about the universe.”
~~ Albert Einstein

Yogieu TheEndIsNear Wed, 04/18/2018 - 05:07

In the case of AI, you can imagine a neural network as self-writing code, code which no human being can review or understand. We can only see what the output is for a given set of input values. If we wanted to dump a neural network's knowledge as a set of human-understandable statements, we are probably talking (for a rather small neural network) about thousands of pages of if-then statements. Good luck understanding it all. We can get an overall idea of what the AI thinks, but any one of those if-then statements may be a disaster.


NuYawkFrankie Wed, 04/18/2018 - 05:03

re Autonomous Drones Will Soon Decide Who To Kill

Yeah - just type in a cell-phone number of an "undesirable"...

"'Hi Honey - it's me... just above to jump in the car... see in 1/2 an hour"... whoooshhh.... !!!!SPLAT!!!