ALGORITHMIC WARFARE
ARMY NEWS
Army Building Ecosystem to Train, Verify AI Models
By Josh Luckenbaugh
WASHINGTON, D.C. — The Army is building an environment in which it can verify the functionality of artificial intelligence and machine learning models before they are deployed, service officials said at a recent industry conference.
Lt. Gen. Anthony Hale, the Army's deputy chief of staff for intelligence, G-2, said the service operates in a “VUCA world,” meaning one that is volatile, uncertain, complex and ambiguous.
“We could be in a fight tonight in three of our six combatant commands: Indo-Pacom, Centcom and Eucom,” Hale said during a panel discussion at the Association of the United States Army’s annual meeting.
“With this growing complexity, we also see an increase in available data — actually, we are drowning in data,” he said. AI can help the Army swim out of this deluge, he added.
“We must learn to leverage AI to organize the world’s information, reduce manpower requirements … and position our people for speed and accuracy in delivering information to the commander for decision dominance,” he said. “Information advantage over the adversary is foundational to battlefield success.”
Brig. Gen. Ed Barker, program executive officer for intelligence, electronic warfare and sensors, said programs across the Army contain “specified or implied requirements when it comes to AI and ML … and what we realized is that there’s going to be a lot of opportunities where we’re going to have to leverage each other’s work.”
The service is constructing an AI and machine learning ecosystem that will provide a trusted, safe environment where Army program managers and elements can bring in AI models and test them against curated and trusted data. Through this ecosystem, the service can train and verify the models and “understand if there’s any types of drift or things getting out of tolerance from those models, and then [redeploy] them back into their environment,” Barker said.
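The article does not describe how such drift checks would be built, but as a rough sketch, monitoring of this kind often amounts to re-scoring a fielded model against a curated, trusted benchmark and flagging it for retraining when its accuracy or output distribution moves outside tolerance. The scikit-learn-style model interface, thresholds and function names below are illustrative assumptions, not the Army's actual tooling.

```python
# Minimal sketch of the kind of drift check such an ecosystem might run.
# All interfaces and thresholds here are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def check_model_drift(model, curated_inputs, curated_labels,
                      baseline_scores, accuracy_floor=0.90, ks_alpha=0.05):
    """Verify a fielded binary classifier against a curated, trusted dataset.

    Flags the model as out of tolerance if its accuracy falls below the floor
    or if its score distribution has drifted from the baseline recorded at the
    previous verification pass.
    """
    # Assumes a scikit-learn-style classifier with predict_proba().
    scores = model.predict_proba(curated_inputs)[:, 1]
    predictions = (scores >= 0.5).astype(int)
    accuracy = float(np.mean(predictions == np.asarray(curated_labels)))

    # Two-sample Kolmogorov-Smirnov test: has the score distribution shifted
    # relative to the scores captured when the model was last verified?
    _, p_value = ks_2samp(scores, baseline_scores)

    out_of_tolerance = accuracy < accuracy_floor or p_value < ks_alpha
    return {
        "accuracy": accuracy,
        "distribution_p_value": float(p_value),
        "needs_retraining": out_of_tolerance,
    }
```

In practice a result like `needs_retraining = True` would trigger the retrain-and-redeploy loop Barker describes, rather than any automatic action on its own.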
This environment could test out AI models with a variety of applications, from automatic target recognition systems for the XM30 Mechanized Infantry Combat Vehicle to models that would allow electronic warfare officers to characterize a new signal of interest, he said.
The initial use cases for the ecosystem will involve helping U.S. Army Pacific with intelligence missions in partnership with the National Reconnaissance Office, National Geospatial-Intelligence Agency and the Chief Digital and Artificial Intelligence Office, he said.
Brig. Gen. Roy Crooks, director of Army Futures Command’s long-range precision fires cross-functional team, said the processing, exploitation and dissemination, or PED, of intelligence data is seen as a potential limiting factor “to closing joint kill webs at [the] speeds and scale necessary for” large-scale combat operations.
To have effective fires targeting, “you need PED on the back end of target engagement to assess as well as define or to detect your target,” and “we have challenges in trying to scale this up” with human operators alone, Crooks said.
AI and machine learning can expand the Army’s processing, exploitation and dissemination capabilities, but the service must ensure it is playing to the “intrinsic strengths” of both the technology and the human operator, he said.
“When you look at AI, its intrinsic strength is its ability to recognize patterns and the ability to take in masses of data to recognize patterns in ways that really aren’t encumbered by human evolution biases,” he said.
“If pattern recognition is AI’s strength, then the human strength is the ability to … contextualize all those patterns,” he continued. “All those things that AI will offer up as potential targets, the humans will be able to contextualize that and apply intuition, apply empathy if it is to understand and decipher human-driven behaviors, and those two together will allow us to expand PED as a limited resource both on the front end of our target engagement and on the back end of it.
“Once we master this, then I think we can start to scale up to what’s necessary for large-scale combat operations to close our joint kill webs,” he said.
Barker said that’s exactly what the AI and machine learning ecosystem will help U.S. Army Pacific and its partners do: “get at the PED problem” and “not place that burden on our analyst and really allow that analyst to get at the high-level analysis that we want them to do, to provide that context … that really only the human can do.”
Additionally, the ecosystem will have the necessary security protocols in place “to address any type of counter-AI when it comes to contaminating data,” Barker said. China is investing heavily in counter-AI capabilities, so as the Army builds the environment, it has a “laser focus … when it comes to ensuring that everything that comes out of there is trusted, and we have the means to monitor it and make sure that it remains trusted.”
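Barker did not specify which protocols the ecosystem will use, but one common building block for guarding training and test data against tampering is verifying every dataset file against a previously recorded manifest of cryptographic hashes before it is used. The file layout and function names below are hypothetical, offered only to illustrate the idea.

```python
# Illustrative sketch: detect data contamination by checking files against a
# trusted hash manifest before they feed model training or verification.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_against_manifest(data_dir: str, manifest_path: str) -> list[str]:
    """Return the names of files that are missing or no longer match the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    tampered = []
    for name, expected_hash in manifest.items():
        file_path = Path(data_dir) / name
        if not file_path.exists() or sha256_of(file_path) != expected_hash:
            tampered.append(name)
    return tampered
```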
The AI ecosystem program is scheduled to be formally established and resourced in 2026, and the Army is currently doing internal risk reduction to ensure the service is “ready to hit the ground running,” he said.
“AI presents opportunities for progress more than any other technology we have seen in the last few decades,” Hale said. “It’s going to help maximize our most precious resource for the Army, and that’s our people.
“The VUCA environment is ripe for exploitation, and AI is in a position to help our Army to succeed on … the battlefield and to ensure the warfighter has more information to defeat the enemy,” he said. ND