Lethal Autonomous Weapons Systems: A Game-Changer That Demands Regulation

In recent years, countries like the United States, the United Kingdom, India, Israel, Iran, South Korea, Russia, and Türkiye have invested heavily in integrating Artificial Intelligence (AI) into their weapons platforms. The deployment of a Turkish-made Kargu-2 in Libya in 2020 marked the dawn of Lethal Autonomous Weapons Systems (LAWS) on the battlefield. The use of LAWS has raised serious concerns, as there is no existing international regulatory mechanism or legal framework to govern the development, deployment, and employment of such weapon systems.

The rise of AI in the military domain is rapidly changing the face of warfare, as AI-enabled weapon systems potentially diminish the meaningful role of human decision-making. As defined by Nils Adler (2023) in an article published by Al Jazeera English, “autonomous weapon systems can identify their targets and decide to launch an attack on their own, without a human directing or controlling the trigger.” There is a global consensus that “cutting-edge AI systems herald strategic advantages, but also risk unforeseen disruptions in global regulatory and norms-based regimes governing armed conflicts.”

Experts and scholars believe that AI-enabled weapon systems will have a major impact on warfare, as the full autonomy of weapon systems would negate battlefield norms established over the course of centuries. According to the European Research Council (ERC), “militaries around the world currently use more than 130 weapon systems which can autonomously track and engage with their targets.”

Despite advancements in this domain, there is no globally agreed definition of what constitutes a lethal autonomous weapon system; the question of autonomy on the battlefield remains subject to interpretation. The US Department of Defense (DOD) defines LAWS as “weapon systems that once activated, can select and engage targets without further intervention by a human operator.” Such a concept of autonomy in weapon systems is also known as ‘human out of the loop’ or ‘full autonomy.’ In a fully autonomous weapon system, targets are selected by the machine on the basis of input from AI, facial recognition, and big data analytics, without any human crew.

Another category of autonomy in weapon systems is semi-autonomous, or ‘human in the loop,’ weapon systems. Examples include self-guided bombs and missile defence systems, which have existed for decades.

The rapid advancement in the use of LAWS has created the need to develop a regulatory framework for the governance of these new weapon systems. Accordingly, various states have agreed to enter into negotiations to regulate and possibly prohibit LAWS. The United Nations Convention on Certain Conventional Weapons (UN CCW) has made several efforts in this direction, initiating an international dialogue on LAWS in 2014. The Sixth Review Conference of the UN CCW in December 2021 concluded without agreement on a legal mechanism or on international norms governing the use of LAWS. Despite the stalemate, there was consensus that talks should continue.

In 2016, a Group of Governmental Experts (UN GGE) was also established with the mandate to discuss and regulate LAWS. But the participating countries have yet to make headway on a legal framework to regulate and proscribe the development, deployment, and use of LAWS.

As the world advances in the use of AI in the military domain, states have taken divergent positions in the UN CCW on the question of the development and use of LAWS. Notably, certain countries would only find it in their interest to sit down for arms control measures once they have achieved a certain degree of technological advancement in this domain.

It should be noted that major powers such as the United States, China, Russia, and the European Union (EU) have either declined to prohibit autonomous weapons outright or sought to maintain ambiguity on the matter. A US Congressional Research Service report, updated in February 2024, highlighted that “the United States currently does not have LAWS in its inventory, but the country may be compelled to develop LAWS in the future if its competitors choose to do so.”

China is the only P-5 country in the UN CCW calling for a ban on LAWS, stressing the need for a binding protocol to govern these weapon systems. In UN CCW debates, China has maintained that “the characteristics of LAWS are not in accordance with the principles of international humanitarian law (IHL), as these weapon systems promote the fear of an arms race and the threat of an uncontrollable warfare.”

Russia remains an active participant in discussions at the UN CCW, opposing legally binding instruments prohibiting the development and use of LAWS.

The EU has adopted a position grounded in IHL, which applies to all weapon systems. The EU statement at the GGE’s meeting in March 2019 stressed the centrality of human control over weapon systems. The EU maintains that human control over the decision to employ lethal force should always be retained.

World leaders, researchers, and technology leaders have raised concerns that the development and use of LAWS will adversely impact international peace and security. In March 2023, leaders in various high-tech fields signed an open letter calling for a six-month halt in the development of advanced AI systems. The letter warned of the potential dangers to society and humanity as tech giants race to develop ever more autonomous programs.

History reminds us that a virtual monopoly on technological development can never be maintained for long. In the 1940s, for example, when the United States developed a nuclear bomb under the Manhattan Project, other powers caught up and built their own bombs. However, it took more than two decades for the global community to formalize an agreement to prohibit the proliferation of nuclear weapons, known as the Nuclear Non-Proliferation Treaty (NPT).

The unregulated growth and proliferation of LAWS threatens to unleash a new era of warfare fueled by autonomous platforms, compromising human dignity, civilian protection, and the safety of non-combatants. Hence, there is a need for states to find common ground to regulate LAWS and formalize an understanding of human control over the use of force. The global values, ethics, and rules of warfare that have guided humanity over the last two thousand years remain imperative for upholding international peace and security.

Source: https://www.geopoliticalmonitor.com