Procurement of AI capabilities for military use

In a tense geopolitical environment and amid an apparent artificial intelligence (AI) ‘arms race’, militaries seeking to integrate AI capabilities are grappling with two important questions. The first is how to expedite the deployment and scaling up of these novel and rapidly developing technologies, which are now seen as essential for strategic dominance. The second is how to implement commitments to responsible use of AI in the military domain.

The procurement of AI capabilities is challenging for a number of reasons, not least the shortage of AI-literate personnel in today’s militaries and the fact that suppliers of AI capabilities are diverse and include non-traditional actors such as technology start-ups. This complicates efforts to adapt procurement processes for AI and in some cases increases risk. For example, decentralizing some AI procurement decisions to individual units or commands could reduce red tape, but it could also mean the decisions are not reviewed by AI-literate officials able to see through industry hype. Opting for industry-led off-the-shelf solutions may be much quicker than having suppliers develop new systems to meet the military’s specifications. But having less control over the design, development and testing of new AI capabilities increases the risk of deploying systems that do not meet forces’ needs or are even substandard or unsafe. 

Principles of responsible behaviour are relevant not only to the use of AI capabilities but also to the processes by which those capabilities are procured. How then can militaries’ efforts towards streamlining procurement be kept in step with their efforts to develop and implement principles of responsible behaviour in relation to AI in the military domain? 

This essay examines why and how states must align steps to adapt procurement processes for military AI with principles of responsible behaviour. 

Commitments to responsible use of military AI

States have long viewed harnessing the military benefits of AI as a strategic imperative. In 2017 Russian President Vladimir Putin said that ‘the one who becomes the leader in this sphere will be the ruler of the world’. The United States Department of Defense’s 2018 Artificial Intelligence Strategy noted that incorporating AI capabilities into weapon systems was essential to national defence against near-peer adversaries and that other states’ investments in AI ‘threaten to erode our technological and operational advantages’. That threat perception has only heightened in recent years, and states are now rushing to adopt military AI applications, from the ‘back office’ to the front line. Areas where AI is expected to generate important opportunities in the military domain include intelligence, surveillance and reconnaissance (ISR); maintenance and logistics; command and control (including targeting); information and electronic warfare; and autonomous systems.  

The governments of France, the USA, the United Kingdom and Japan, as well as the European Parliament and NATO, have set out the principles that are meant to guide their responsible (sometimes termed ‘ethical’) behaviour in relation to military AI. Canada committed in 2024 to develop ‘AI ethics principles’. 

Many more states have endorsed international sets of principles of responsible behaviour concerning military AI: the USA-led political declaration on responsible military use of AI and autonomy of 2023; the Blueprint for Action, led by the Netherlands and South Korea, adopted at the Responsible AI in the Military Domain (REAIM) Summit in 2024; and the Paris Declaration on Maintaining Human Control in AI-enabled Weapon Systems adopted at the Artificial Intelligence Action Summit held in Paris in February 2025. 

These various documents range from high-level political statements to strategic documents to technical guidance, but they share some features. For example, when referring to the norms that states should adhere to when using military applications of AI, many of the documents refer to the need to act ‘responsibly’, ‘lawfully’ or ‘ethically’, to ensure that AI capabilities are safe and reliable, to minimize and mitigate the effects of bias, and to ensure that personnel have sufficient understanding of the technology (for example, by making the technology appropriately transparent, explainable or auditable). They also underline the importance of being able to hold personnel responsible and accountable for the consequences of AI use. 

Adherence to these principles of responsible behaviour requires aligning military doctrine, tactics, techniques and procedures relevant to AI capabilities with those principles, to ensure that they are implemented in practice. States must ensure not only that capabilities are used in a way that meets the requirements set out in the principles (for example, that a military AI capability is used in accordance with international law), but also that the procurement process itself satisfies these requirements (for example, that a military AI capability is subject to a legal review prior to employment). 

The tension between ‘streamlined’ and ‘responsible’ procurement 

Adapting military procurement processes for AI capabilities is one area where there is the potential for tension. Streamlining procurement pathways to facilitate the adoption of AI may sit at odds with the oversight and deliberation demanded by principles of responsible behaviour.

When it comes to adapting military procurement processes for AI, there are three main reasons why principles of responsible behaviour should remain front-of-mind for military procurement actors. First, design decisions that affect how military AI capabilities can be used (including whether they can be used responsibly) are often made as part of the procurement process. Second, suppliers of military AI capabilities—whether they are established defence industry companies or tech start-ups—need clarity and certainty about what military clients expect from the AI capabilities they procure. Third, measures critical to implementing principles of responsible behaviour may coincide with, or form part of, the procurement process. This third reason, however, requires further examination to determine exactly what principles of responsible behaviour require of militaries at the procurement stage. 

Understanding responsible behaviour in the procurement of military AI capabilities

The US Department of Defense encapsulated the tension between the imperatives of streamlined and responsible AI procurement in its 2022 Responsible Artificial Intelligence Strategy and Implementation Pathway, where it recognized the need to ‘exercise appropriate care in the AI product and acquisition lifecycle to ensure potential AI risks are considered from the outset of an AI project . . . while enabling AI development at the pace the Department needs to meet the National Defense Strategy’.

Some implications of principles of responsible behaviour in the procurement process are readily identifiable. One that relates to the principle of lawfulness (meaning that military AI capabilities will be developed and used in accordance with national and international law) is the need to conduct a legal review of the AI capability under consideration, to ascertain whether it can be used in accordance with the state’s legal obligations. And when it comes to principles of safety and reliability, rigorous and independent testing and evaluation of AI capabilities are needed at the procurement stage. 

Fostering a relationship with suppliers that ensures appropriate information sharing in the design and improvement of AI capabilities, including through contracts, is also important for implementing principles related to safety and reliability at the procurement stage, especially in view of the non-linear and iterative nature of AI capability development. One implementation measure that is relevant to the principle of bias minimization or mitigation is careful management of data that is used in the development and testing of AI capabilities. Finally, the engagement of the intended user of an AI capability in the procurement process can facilitate the user’s understanding of those AI capabilities (not to mention support responsible use of those capabilities if procured). 

Other aspects of principles of responsible behaviour are more difficult to connect to the procurement process. One is that the technical performance of an AI capability can be opaque, and testing and evaluating it can be resource-intensive and difficult. This limits a procurement actor’s ability to fulfil principles of safety and reliability. Another is that the development of AI capabilities is often iterative and compressed: cutting-edge AI capabilities are developed through a rapid cycle of design, testing and refinement based on feedback. This challenges traditional procurement processes, which are often linear and time-consuming, as well as principles of responsible behaviour that are premised on an expectation of risk avoidance, mitigation and management.

An opportunity to act

Some militaries and ministries of defence are already adapting or planning to adapt their procurement practices to respond to the need for rapid adoption of AI capabilities. Ukraine, for example, has implemented widespread procurement reforms, including in relation to the procurement of advanced technologies. These reforms are aimed at streamlining and accelerating procedures, from planning to fulfilment (although officials have acknowledged that the rapid pace of procurement based on front-line requirements can complicate long-term procurement planning). Other states are watching how Ukraine adapts its technology procurement practices to extreme operational needs. Sweden, for instance, has authorized its defence procurement agency to cooperate with Ukrainian authorities on procurement matters. 

The US Department of Defense’s Responsible Artificial Intelligence Strategy and Implementation Pathway includes actions relating to acquisition. A special report adopted by the NATO Parliamentary Assembly Science and Technology Committee last year acknowledges that procurement processes need to be adjusted in order to facilitate the integration of AI in armed forces. China’s military–civil fusion strategy seeks seamless integration of technologies from the civilian realm into the military realm, which entails experimentation with new AI capabilities. 

The focus on modifying procurement processes to facilitate adoption of AI capabilities provides an opportunity for states to implement principles of responsible behaviour across the different aspects of procurement: in process design, in setting criteria for assessing procurement options, in deciding who should be involved in procurement decisions, in the selection of the suppliers involved, and in the time frames.

States can seize this opportunity by furthering interstate exchanges on the relationship between responsible military AI and procurement processes, including in relevant forums such as future REAIM summits or the United Nations General Assembly (following the UN Secretary-General’s recommendation in August 2025 that states establish a ‘dedicated and inclusive process to comprehensively tackle the issue of AI in the military domain and its implications for international peace and security’). 

States can also learn from experience of AI-related procurement in the civilian sector. Standards developed for state procurement of AI in the civilian domain can often provide guidance, particularly where a state does not have military-specific guidelines for AI procurement. Examples include the World Economic Forum AI Government Procurement Guidelines and the IEEE Standard for the Procurement of Artificial Intelligence and Automated Decision Systems, as well as similar documents at the national level. Finally, states can work with suppliers to develop a common understanding of responsible procurement practices. High-level political and strategic documents that set expectations related to responsible behaviour need to be translated for suppliers, as part of the procurement process.

Accelerating the adoption of military AI capabilities should not be at the cost of compromising commitments to responsible behaviour. Aligning procurement processes with principles of responsible behaviour is essential for assuring and maintaining trust in military AI among government leaders and armed forces personnel as well as the populations they protect.

Source: https://www.sipri.org