Dozens of autonomous war machines capable of deadly force conducted a field training exercise south of Seattle last August. No human operators were involved; the robots, powered by artificial intelligence, hunted mock enemy combatants on their own.
The exercise, organized by the Defense Advanced Research Projects Agency, the Pentagon's blue-sky research division, armed the robots with radio transmitters designed to simulate weapons fire. The drill expanded the Pentagon's understanding of how automated military systems can work together on the modern battlefield to eliminate enemy combatants.
"The demonstrations also reflect a subtle shift in the Pentagon's thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed," according to WIRED.
It's undeniable that artificial intelligence will shape warfare for years to come, and military planners are moving ahead with incorporating autonomous weapons systems onto the modern battlefield.
General John Murray of the US Army Futures Command told an audience at the US Military Academy in April that swarms of robots will likely force the military to decide if a human needs to intervene before a robot engages the enemy.
Murray asked: "Is it within a human's ability to pick out which ones have to be engaged" and then make 100 individual decisions? "Is it even necessary to have a human in the loop?" he added.
Michael Kanaan, director of operations for the Air Force Artificial Intelligence Accelerator at MIT and a leading voice on artificial intelligence in the military, told a crowd at an Air Force conference last week that computers are rapidly getting better at identifying and distinguishing potential targets, while humans still make the decision to engage.
Lieutenant General Clinton Hinote, deputy chief of staff for strategy, integration, and requirements at the Pentagon, speaking at the same event, said the great debate of the early 2020s is whether a soldier can be removed from the decision-making loop of an autonomous weapon.
Timothy Chung, the Darpa program manager in charge of the swarming project, told WIRED that last year's exercise was designed to explore when a human should be involved in the decision-making of autonomous systems. When facing complex attacks, Chung said, the robots can perform the mission better than humans because people cannot react quickly enough.
"Actually, the systems can do better from not having someone intervene," Chung added.
Even as artificial intelligence rapidly gains capability, keeping a person in the loop may remain necessary for now, since algorithms still need to improve before they can identify enemies reliably enough.
It all comes down to whether the algorithms can achieve a high level of accuracy when identifying and engaging the enemy. For now, Defense Department policy on autonomous weapons states that these systems must have human oversight.
As for forecasts of when machines will surpass human intelligence, Ray Kurzweil, Google's director of engineering, predicted a few years ago that it could happen around 2029.
By the end of the decade, or even earlier, the Pentagon may allow autonomous weapons to take the kill shot without any human intervention.