Contemplating the AI-enabled future of war raises practical and philosophical questions with few easy answers. What kinds of devices and weapon systems would most benefit from AI (artificial intelligence)? What types of oversight should apply to its use? What moral and ethical questions are in play? How will humans and AI coexist in the battlespace?
In chapter three of the National Security Commission on Artificial Intelligence’s final report, the commission postulates that “AI-enabled warfare will not hinge on a single new weapon, technology, or operational concept; rather, it will center on the application and integration of AI-enabled technologies into every facet of warfighting.”
This framing is important to understand: “every facet of warfighting.” AI is a transformative technology that will fundamentally change how the Department of Defense (DOD) organizes and operates our military in wartime and peacetime alike. It will harness the ever-expanding flow of information produced in the battlespace to enable a new way of fighting wars.
Now is the time to prepare our battle network architecture for this future. Recognizing this, the DOD initiated Joint All-Domain Command and Control (JADC2) as its overarching modernization concept for connecting sensors and systems into a unified network across the Air Force, Army, Marine Corps, Navy, and Space Force. The end goal of JADC2 is the holistic integration of the military’s sensor data, processing power, and artificial intelligence into one agile and resilient network.
JADC2 will enable better decision support for commanders and optimize the effects of their decisions in the battlespace. As more platforms, sensors, and communication networks join the unified network and combine with dynamic AI learning capabilities—producing an exponential ‘network of networks’ effect—the DOD envisions a warfighting force that will outperform and outmaneuver its adversaries in the data-driven 21st century.
Pivotal Issues for the Current Moment
JADC2 is an undeniably critical initiative, but questions about how each service’s respective JADC2-inspired programs—the Air Force’s Advanced Battle Management System, the Army’s Project Convergence, and the Navy’s Project Overmatch—can effectively join together still need to be addressed.
If each service modernizes its networks to its own requirements but fails to coordinate with the others to achieve interoperability and unified network resilience, the DOD’s ability to counter AI-enabled adversaries in the 21st-century battlespace will be jeopardized. Likewise, the DOD would be well-served to avoid hardware-dependent and proprietary-source-code solutions that create vendor lock-in, favoring instead technology-agnostic, software-based solutions that allow for continuous optimization of battle networks.
AI’s ability to act as a force multiplier on and off the battlefield comes from its capacity to identify patterns and detect items at scale that would otherwise be obscured within large datasets. AI will make it easier to see, assess, and connect objects and information across the battlespace. Amid great-power competition, the advantage will go to whichever side uses data better: before, during, and between engagements. Who will collect, integrate, and operationalize data well enough to make AI in the battlespace reliable when it matters most?
China is already investing heavily in AI innovation—at least in part to help reinforce its ability to retain power and quell social unrest, but certainly also to continue expanding its global military capabilities. China’s AI trajectory is formidable and will challenge the United States’ leadership in the field in the 21st century. To ensure that the AI future reinforces the values of our free and fair society, we will need to partner with fellow democracies and the private sector to build privacy-protecting standards into AI technologies.
The goal for the Department of Defense is to win the marathon, not just the first few miles. To ensure that the United States continues to lead the world with democratic values and respect for human rights in the AI era, the NSCAI final report urges the DOD to “act now to integrate AI into critical functions, existing systems, exercises, and wargames to become an AI-ready force by 2025.”
How AI Will Be Used on the Battlefield
In many ways, the impact of AI on the battlespace will be felt indirectly, well before any fighting begins. Looking at solutions already proven in commercial settings, we can predict how the DOD will leverage them in the future.
For example, AI will allow military decision-makers to make better and faster decisions across a wide range of applications. AI-enabled analysis can help departments use time and money more efficiently, removing friction from innovation, development, and testing cycles.
AI will improve the military’s training ability by collecting and analyzing data that helps leaders run wargames and assess the preparedness of fighting systems under dynamic ‘what-if’ conditions, including countering red-force AI threats.
AI will reduce manual data processing through natural language processing for many essential applications, including translation, decryption, intelligence gathering, and equipment maintenance. It will speed up the dissemination of intelligence, pre-process raw intelligence from the field to filter it by importance and type, and identify connections and correlations exponentially faster and more accurately than human analysts can. These benefits will relieve much of the cognitive burden, information overload, and repetitive taskwork that overwhelms personnel across the services today, allowing them to focus on the signal instead of the noise.
Strategic impacts will come from decision support on the battlefield as planners and commanders are assisted by AI modeling using real-time data sets. AI will help units coordinate the movement of troops into and out of conflict zones and analyze any downstream impacts. Process automation will streamline maintenance and supply chain actions as well.
AI-optimized sensing and anomalous behavior recognition will instantaneously pass the most relevant information to warfighters during engagements with military targets, helping them make faster and more informed decisions in crucial seconds.
Finally, AI-enabled systems will change how we target the enemy, with intelligent weaponry attached to autonomous and semi-autonomous platforms to attack the enemy precisely while safeguarding blue forces and minimizing civilian casualties inside combat zones.
The NSCAI final report articulates clear guidelines for using AI-enabled and autonomous weapon systems in combat to be consistent with our values. They should only be used if authorized by a human commander or operator, properly designed and tested, and used in ways consistent with international humanitarian law. At all times, human judgment must be layered into and operationalized within any AI-enabled or autonomous system for our military. It is of utmost importance that human operators be the final arbiters of kinetic effects at the end of the sensor-to-shooter kill chain.
AI and Humans: Stronger Together
As AI adoption continues in the commercial sector, data scientists and software developers will find new ways to make it more effective, explainable, and trustworthy in use cases that carry over to DOD implementations. Over time, AI-enabled solutions will surpass human capacities to perform specific tasks on the battlefield that speed up decision-making and automate processes at tactical and operational levels.
This does not mean we will one day hand over national security to machines entirely. Far from it: humans’ capacity for adaptive learning and judgment is not replaceable. The best-of-both-worlds use case is to pair humans with AI technology that complements, rather than replaces, them. Again from the NSCAI final report: “AI tools will improve the way service members perceive, understand, decide, adapt, and act in the course of their missions. However, new concepts for military operations will also need to account for the changing ways in which humans will be able to delegate increasingly complex tasks to AI-enabled systems.”
An instructive example comes from the world of chess, a game commonly linked to military strategy and tactics. In 1997, world champion grandmaster Garry Kasparov famously lost a match against Deep Blue, IBM’s chess supercomputer—the first time a reigning world champion had lost a match to a computer. Afterwards, Kasparov helped popularize a new form of “advanced chess” in which chess masters teamed with computers to play matches against other chess masters. This evolved into a ‘freestyle’ chess tournament in 2005, where humans, computers, and human-computer teams called ‘centaurs’ all competed together. The surprise result was that the tournament’s winner was not one of the several grandmasters playing, nor Hydra, the world’s best chess supercomputer at the time. The winner was one of the ‘centaur’ teams, made up of two amateur chess players and three ordinary computers.
The lesson of this story is not to fear AI or to put it on a pedestal as the answer for all problems. Instead, we should be ready for AI’s inevitable presence on the global chessboard and use our own human-plus-AI centaur teams to execute a winning strategy for the AI era.
This article was written by Logan Jones, President & General Manager, SparkCognition Government Systems (Austin, TX).