Operational Risks of AI in Biological Attacks

The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including concern regarding the potential development of biological weapons. This potential application of AI raises particular concerns because it is accessible to nonstate actors and individuals. The speed at which AI technologies are evolving often surpasses the capacity of government regulatory oversight, leading to a potential gap in existing policies and regulations.

In this report, the authors share the final results of a study of the potential risks of using large language models (LLMs) in the context of biological weapon attacks. They conducted an expert exercise in which teams of researchers, role-playing as malign nonstate actors, were assigned realistic scenarios and tasked with planning a biological attack; some teams had access to an LLM in addition to the internet, while others had access to the internet alone. The authors sought to identify the potential risks posed by LLM misuse, generate policy insights to mitigate those risks, and contribute to responsible LLM development. The findings indicate that using the existing generation of LLMs did not measurably change the operational risk of such an attack.

Key Findings

  • This research involving multiple LLMs indicates that biological weapon attack planning currently lies beyond the capability frontier of LLMs as assistive tools. The authors found no statistically significant difference in the viability of plans generated with or without LLM assistance (a sketch of this type of two-group comparison follows this list).
  • This research did not measure the distance between the existing LLM capability frontier and the knowledge needed for biological weapon attack planning. Given the rapid evolution of AI, it is prudent to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning.
  • Although the authors identified what they term unfortunate outputs from LLMs (in the form of problematic responses to prompts), these outputs generally mirror information readily available on the internet, suggesting that LLMs do not substantially increase the risks associated with biological weapon attack planning.
  • To enhance possible future research, the authors would aim to increase the sensitivity of these tests by expanding the number of LLMs tested, involving more researchers, and removing unhelpful sources of variability in the testing process. Those efforts will help ensure a more accurate assessment of potential risks and offer a proactive way to manage the evolving measure-countermeasure dynamic.
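As a hypothetical illustration of the two-group comparison behind the first finding, the following Python sketch applies a Mann-Whitney U test to made-up plan-viability scores for LLM-assisted and internet-only teams. The scores, the 1-9 scale, and the choice of test are illustrative assumptions for exposition, not the authors' published methodology or data.

```python
# Hypothetical sketch of the two-cell comparison described above: plan
# viability for LLM-assisted teams versus internet-only teams. All values
# and the test choice are illustrative assumptions, not the study's data.
from scipy.stats import mannwhitneyu

# Made-up plan-viability scores on an assumed 1-9 scale (one per team).
internet_only = [2.0, 3.5, 1.5, 2.5, 3.0]
llm_assisted = [2.5, 3.0, 2.0, 3.5, 1.5]

# Two-sided nonparametric test of whether the score distributions differ.
stat, p_value = mannwhitneyu(llm_assisted, internet_only,
                             alternative="two-sided")

print(f"U = {stat:.1f}, p = {p_value:.3f}")
# A p-value above the chosen significance level (e.g., 0.05) would be
# consistent with the study's null result: no measurable difference in
# plan viability attributable to LLM access.
```

A nonparametric test is used here only because ordinal viability ratings from small team samples rarely justify normality assumptions; the sensitivity improvements the authors propose (more LLMs, more researchers, less extraneous variability) would raise the statistical power of whatever test is applied.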

Source: https://www.rand.org