Build Allied AI or Risk Fighting Alone

It’s 2029. From wildfires in California to catastrophic flooding in Pakistan, natural disasters are more common than ever—and hit harder. But amid fire and flood, advances in artificial intelligence enable the United States and other countries to deploy self-flying drones to find and rescue survivors, use machine-learning algorithms to streamline the delivery of lifesaving aid, and automate translation in real time to aid multinational coordination. This future vision of more efficient, collaborative, and effective military cooperation is attainable, but only if we act now.

The United States and its allies are increasingly incorporating rapidly advancing AI-enabled technology into their militaries to solve key operational problems, speed up responses, save lives, and even deter threats. But each nation is developing its own capabilities; incorporating these systems into military activities at different paces; and creating its own policies to dictate when, where, and how military AI can be employed.

Washington and its allies must build a shared framework for the collective use of military AI. Failing to do so will jeopardize the United States’ ability to operate alongside other nations against future threats ranging from natural disasters to great-power conflicts.

Military AI is already being developed and deployed. The United States is building more than a thousand collaborative combat aircraft, which it describes as self-flying “loyal wingman” planes meant to support crewed fighter jets. Both Ukraine and Israel have claimed to use AI to analyze open-source data to identify targets for military strikes, while Bloomberg reported in 2024 that the United States has used AI to enable target selection in the Middle East. Prospective use cases abound in military medicine, logistics, maintenance, and personnel management.

But all these countries are branching off in their own directions. As with military hardware such as fighter jets, AI systems developed by different governments are not necessarily compatible. There is a risk of countries adopting different development paths and creating siloed systems.

The United States rarely fights wars alone, preferring collective action that reduces the burden on U.S. forces. For military coalitions to even be possible in the future, Washington must bring its allies and friends along in its vision of AI-enabled, interoperable military forces.

The importance of interoperability—the ability of countries to conduct military operations together—was abundantly clear during a recent tabletop exercise that we ran for the U.S. Defense Department-led AI Partnership for Defense. This fictional exercise brought together government officials from more than a dozen nations to explore how AI could be employed in future military operations in 2029, and it represented a level of cooperation in AI employment that does not exist today.

The exercise demonstrated that military AI has enormous potential to improve coalition military operations and save lives—but only if the United States and other countries take steps now to build interoperability by adopting, integrating, and employing these capabilities together.

In future high-speed conflicts, the United States and allied militaries will need to share large volumes of data parsed by AI to identify targets and help connect one military’s sensors to another’s shooters, which may be autonomous uncrewed platforms carrying out strike missions. But this more efficient vision of future warfare will not be possible if the United States and its allies fail to align their military AI investments, strategies, and employment.

Interoperability has long been a major challenge for military coalitions—one that even the most advanced forces have struggled to achieve. Multinational military operations are complex, requiring the integration of diverse equipment, rules of engagement, and skills. In past operations, these differences have slowed down decision-making, shifting the weight of effort to a select few countries and resulting in less effective outcomes.

Researchers from the RAND Corp. found that in Operation Inherent Resolve, the pace of airstrikes against the Islamic State was slowed by coalition nations’ differing interpretations of acceptable targets, ultimately pushing the bulk of strike missions onto the United States.

The advent of AI presents new challenges for interoperability. Across nations, there is varied understanding of this technology and no consensus among officials on how to develop and employ AI and autonomous systems. Various frameworks, guidance, and standards abound as each nation grapples with its own interpretation of its legal and ethical obligations.

Key to interoperability is having shared capabilities that can communicate, interact, and work together. But right now, nations are going down their own paths, developing unique systems that can’t easily communicate or share data—a process that may also be hampered by long-standing security restrictions.

For example, computer vision applications, which enable the autonomous identification of potential threats, may struggle in a coalition operation. AI systems developed by different nations may contradict each other and thus slow down the targeting cycle rather than speeding it up. Even systems developed using the same training data—the fuel that computer vision software relies on to learn about the world—are not guaranteed to perform identically due to other design parameters. If multiple countries employ autonomous systems during an operation, what should be done to ensure that they identify targets in a consistent way?
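To make that divergence concrete, consider a minimal sketch in Python using scikit-learn. It is purely illustrative: the data, models, and parameters are our own assumptions, not any fielded military system. It shows how two models trained on identical data but with different design choices can still disagree on the same inputs:

```python
# Illustrative only: two classifiers trained on the SAME data but with
# different design parameters (architecture, regularization, random seed)
# will not necessarily make the same call on the same input.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in for training data shared between two hypothetical coalition partners.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Same data, different design choices by each (hypothetical) nation.
model_a = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)
model_b = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=2).fit(X, y)

# Count inputs on which the two systems disagree.
disagreements = (model_a.predict(X) != model_b.predict(X)).sum()
print(f"The two models disagree on {disagreements} of {len(X)} inputs")
```

Even a toy example like this can produce disagreement; at the scale and stakes of coalition targeting, such divergence would have to be measured, bounded, and agreed on in advance.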

Autonomy also presents new challenges. Unlike existing military platforms—in which human operators can communicate, resolve disputes, and coordinate their actions to avoid accidents—autonomous systems will need to be programmed to operate together without relying on the common sense or relationships developed between human operators. Automating this historically human behavior and problem-solving is a new process, and it may not be possible when conflicts between autonomous systems emerge during fast-paced operations.
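As a purely hypothetical sketch of what programming in such coordination might look like, consider a pre-agreed tie-breaking rule for two autonomous platforms that claim the same target. All callsigns, fields, and rules below are invented for illustration:

```python
# Hypothetical sketch: autonomous platforms cannot fall back on human
# common sense, so conflicts must be resolved by rules agreed before the
# operation begins. Every name and rule here is illustrative, not a real protocol.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Platform:
    callsign: str
    nation: str
    priority: int     # pre-agreed coalition priority ranking (lower wins)
    target_id: str    # target the platform intends to engage

def deconflict(a: Platform, b: Platform) -> Optional[Platform]:
    """Return the platform that keeps the target when both claim it."""
    if a.target_id != b.target_id:
        return None  # no conflict to resolve
    # A real protocol would need far richer logic (timing, geometry,
    # rules of engagement); here a simple priority ranking breaks the tie.
    return a if a.priority < b.priority else b

us_drone = Platform("HAWK-1", "USA", priority=1, target_id="T-42")
uk_drone = Platform("KESTREL-2", "GBR", priority=2, target_id="T-42")

winner = deconflict(us_drone, uk_drone)
if winner is not None:
    print(f"{winner.callsign} engages; the other platform re-tasks")
```

The hard part is not writing such a rule but getting every coalition partner to agree on it, in advance, for every situation that might arise.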

There are also critical issues that countries have not yet sufficiently grappled with, such as whether they are comfortable with another country’s autonomous capabilities operating alongside platforms crewed by their own military personnel, or with AI creating mission plans for their forces to execute. The answers are undoubtedly context-dependent, but if national capitals had to review every operation, they would lose valuable time and slow down operations during a crisis, when speed is essential, ceding the very benefit that autonomy is meant to provide.

In the face of the perceived risks of AI and autonomy, it may seem easiest to throw up our hands and abandon these emerging capabilities, but nations must understand the opportunity cost of failing to employ them.

We saw this in our tabletop exercise, where participants had to weigh whether to employ a crewed capability over a more effective AI-enabled system. The trade-off was stark: accept risk in hopes of moving faster and saving more lives, or avoid risk by relying on familiar platforms and scarce, well-trained operators, knowing that doing so could result in dramatically fewer lives saved.

The United States and its allies will need to make these cost calculations and determine where and when they are willing to take risks. As we are keenly aware, adversaries such as China are already pursuing these technologies, and the United States and its allies risk losing their military technological edge if they cannot safely harness AI.

In Washington, D.C., and capitals around the globe, leveraging military AI is and will remain a critically important challenge. The pace and scale of AI development are growing rapidly and, with them, the difficulty of interoperability. The Trump administration should prioritize military AI interoperability, particularly if it wants to promote efficiency and encourage other nations to take on greater responsibility in future crises. The administration must work with other governments to smooth out differences in how defense AI systems are developed, maintained, and deployed in order to reap their benefits.

Washington and its allies must reconcile their differing policy perspectives, guidance, and risk assessments for employing AI and autonomous systems well before a conflict begins. Efforts such as the AI Partnership for Defense have laid critical groundwork for collaboration, but more work must be done.

Creating military AI interoperability between the United States and other nations is a tall order. Without it, nations will struggle to harness the benefits of AI in future coalition operations and may not be able to effectively respond to crises or deter threats. By prioritizing AI interoperability, the United States and allied nations can lay the groundwork for effective military operations and a more secure future.