print-icon
print-icon

As Pentagon Races to Deploy AI, Operational Challenges Highlight Risks

Tyler Durden's Photo
by Tyler Durden
Authored...

Authored by Autumn Spredemann via The Epoch Times (emphasis ours),

Artificial intelligence (AI) is often framed as a force multiplier that can accelerate decision-making and produce valuable information. Meanwhile, AI deployment exercises have yielded mixed results, highlighting challenges such as systems stalling and unpredictable software outside controlled environments.

A U.S. soldier holds a drone in the Pentagon parking lot in Arlington, Va., on June 14, 2025. Samuel Corum/Getty Images

Some defense insiders believe that AI tools also introduce new safety and escalation risks if not developed, evaluated, and trained correctly.

Over the past year, U.S. military testing has demonstrated that some AI systems are failing in the field. In May 2025, Anduril Industries worked with the U.S. Navy on the launch of 30 AI drone boats, all of which ended up stuck idling in the water after the systems rejected their inputs.

A similar setback occurred in August 2025 during the company’s test of its Anvil counterdrone system. The resultant mechanical failure caused a 22-acre fire in Oregon, according to a Wall Street Journal report.

Anduril responded to the reported AI test failures, calling them “a small handful of alleged setbacks at government experimentation, testing, and integration events.”

Modern defense technology emerges through relentless testing, rapid iteration, and disciplined risk-taking,” Anduril stated on its website. “Systems break. Software crashes. Hardware fails under stress. Finding these failures in controlled environments is the entire point.”

But some say the challenges AI faces in the national security landscape should not be taken lightly. Problems such as brittle AI models and building on the wrong kind of training data can create systems that do not perform as expected in a battlefield scenario.

This is why military-grade AI, purpose-built for national security use cases and the warfighter, is critical,” Tyler Saltsman, founder of EdgeRunner AI, told The Epoch Times.

Saltsman’s company has active research and development contracts with the U.S. military. He said AI systems are not typically designed for warfighting.

[AI models] may choose to refuse or deflect certain questions or tasks if those requests do not comply with the AI system’s own rules,” Saltsman said. “A model refusing to provide guidance to a soldier in combat or giving biased responses rather than operationally relevant responses can have life-or-death implications.”

Scenarios such as the one Saltsman described can start with the wrong kind of training data.

A U.S. Army staff sergeant operates an Anduril Ghost X unmanned aircraft system during Exercise Balikatan 25 in Itbayat, Philippines, on April 22, 2025. While artificial intelligence is often framed as a force multiplier, deployment exercises have produced mixed results, including system stalls and unpredictable software performance outside controlled environments. Pfc. Peter Bannister/U.S. Army

Data Dilemma

Jeff Stollman, who has worked with defense contractors as an independent consultant and is familiar with a range of products and services used by the military and intelligence communities, said much of “the data needed has not been collected historically.”

“And because internet data is typically of limited value and internet-based models can’t be run on isolated classified networks, military and intelligence users will need to collect their own new data,” Stollman told The Epoch Times.

He said there are three categories of training data used by the defense and armed forces communities, all of which have different hurdles.

Offering an example of a sustainment—or maintenance—data challenge, Stollman said that collecting this type of information typically requires adding sensors that can record the data needed to predict malfunctions and failures.

“This includes measuring temperature, vibration, friction, the amount of wear on various parts,” he said. “This is an expensive undertaking. Sensors aren’t free. They add weight and volume to space and weight-constrained platforms such as aircraft and spacecraft.”

This type of data collection is offloaded to a database because of limited onboard computer resources. Although that sounds logical at first, the problem is the time it can take.

“For platforms like ships and submarines, windows for transmission of such data, which might give away the position of the platform, are limited,” Stollman said. “As a result, data may not be accessible for months at a time.”

A drone of an AI-based drone system is pictured during a presentation in Eberswalde, Germany, on March 27, 2025. Ralf Hirschberger/AFP via Getty Images

Another challenge of AI integration is reliability. Issues such as AI “hallucinations” and poor decisions can be amplified in adversarial environments.

“The most dangerous assumption is that AI can distinguish between legitimate inputs and adversarial manipulation,” Christopher Trocola, founder of ARC Defense Systems, told The Epoch Times.

He cited the July 2025 experiment in which AI-powered, cloud-based platform Replit’s “vibe coding” ended with an AI assistant panicking and trying to cover its tracks. The AI coding assistant reportedly deleted a live production database, fabricated thousands of fake records, and created misleading status messages.

Military applications amplify these vulnerabilities catastrophically,” Trocola said.

He explained that three critical AI assumptions can fail under adversarial pressure: prompt injection resistance, hallucination control, and intent recognition.

This is when adversaries can manipulate AI through carefully crafted inputs designed to override instructions, generate false information, or indicate that malicious inputs are benign.

This represents what’s known as distribution shift: AI trained in controlled environments failing catastrophically when deployed in real-world adversarial contexts,” Trocola said.

Saltsman said this highlights the importance of building AI models with military applications in mind.

“Most commercial AI systems are black boxes,” he said. “We don’t know what data trained the models. We don’t know what guardrails or biases were baked into the models. And we don’t know if our data is truly secure. All of this is highly problematic in national security settings.”

Risk Evaluation

Stollman noted that generative AI—which is already used in U.S. intelligence and defense—is “plagued” with problems such as hallucinations. However, it is also the most practical kind of AI for military operations.

“Generative AI is useful in areas such as reconnaissance, where it is necessary to identify installations and activities from data collected by various sensors: photos, radar, sonar, etc.,” Stollman said. “It can also be used to support decision-making.”

A consultant instructs the Advanced Artificial Intelligence Command Course at Marine Corps Base Camp Lejeune in North Carolina on Dec. 12, 2025. Lance Cpl. Payton Walley/U.S. Marine Corps

“For example, drones or missiles could be given autonomy of action to overcome signal jamming that prevents their being controlled remotely by humans,” he said. “But before such autonomy can be deployed, it is necessary to anticipate all the failure modes that could lead to undesirable consequences.”

Saltsman said he agrees that AI development and deployment must be carefully balanced with long-term risk evaluation.

But make no mistake, we are in an AI war against China, and we must win the race,” he said.

He noted that if China’s AI models and hardware dominate the market, the United States could become dependent on the Asian nation for critical technologies.

“Therefore, it is a national security imperative that we accelerate the pace of AI development while also balancing the risks,” Saltsman said.

In 2025, the United Nations said that the use of AI in warfighting was no longer a hypothetical future scenario. The U.N. also stressed the risks and consequences of AI system failures in this capacity.

“Without rigorous safeguards, it risks undermining international humanitarian law,” the agency stated.

“Complex battlefields already test human judgment in distinguishing between combatants and civilians; for machines, the challenge is even greater, particularly in urban settings where civilians and fighters often intermingle.”

Xpeng’s next-gen Iron humanoid robot speaks to media during a showroom tour at its headquarters in Guangzhou, Guangdong Province, China, on Nov. 5, 2025. Tyler Saltsman said that if China’s AI models and hardware dominate the market, the United States could become dependent on the Asian nation for critical technologies. Jade Gao/AFP via Getty Images

Trocola said he shares concerns that AI deployment in the military and defense sectors is outpacing risk assessment.

“Documented patterns suggest this creates systematic vulnerabilities,” he said. “Industry data shows [70 percent to 80 percent] of AI projects fail due to organizational readiness gaps.”

The Department of War AI Acceleration Strategy launched in January, which emphasizes rapid deployment to counter strategic competitors.

Read the rest here...

Loading recommendations...