The SAFRON (Safe and Assured Foundation Robots for Open Environments) program, run by DARPA, the U.S. Defense Advanced Research Projects Agency, is calling on industry to submit proposals on ways to incorporate artificial intelligence (AI) and machine learning (ML) safely into systems for military use. SAFRON aims to find alternatives that maximize the trustworthiness of AI/ML applied to military robots. The proposals should enable experts to develop programs that incorporate AI and ML quickly and flexibly, at the lowest cost, and, above all, with the greatest assurance of safe behavior by autonomous systems of every kind.
ARLINGTON, Va. – U.S. military researchers are asking industry for ways of assuring trust in complex military artificial intelligence (AI) programming, so that military leaders could use sophisticated robotics for crucial jobs without concern for erroneous and dangerous results.
Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., issued an advanced research concepts opportunity (DARPA-EA-24-01-0) last week for the Safe and Assured Foundation Robots for Open Environments (SAFRON) project.
SAFRON seeks to enhance trust in AI-based robots that use sophisticated foundation models, which can help computer experts develop AI and machine learning for military applications quickly and cost-effectively.
Foundation models are large deep-learning neural networks trained on extremely large datasets; they can serve as the foundation of new AI applications without requiring scientists to develop AI programs from scratch.
The term describes machine learning trained on a broad spectrum of generalized, unlabeled data that can perform a wide variety of general tasks such as understanding language, generating text and images, and conversing in natural language.
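In practical terms, reusing a pretrained foundation model can take only a few lines of code. The sketch below is illustrative only, assuming the open-source Hugging Face transformers library and the small public gpt2 checkpoint rather than anything DARPA-specific:

```python
from transformers import pipeline

# Load a pretrained foundation model instead of training one from scratch.
generator = pipeline("text-generation", model="gpt2")

# The same pretrained network can be pointed at a new task immediately.
result = generator("The robot's next step is", max_new_tokens=20)
print(result[0]["generated_text"])
```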
Foundation models, for example, can enable robots to parse natural-language directions for complex tasks and then execute those tasks in real-world conditions. Although this represents a dramatic break from existing autonomous systems, natural-language direction for open-world autonomy presents a safety and assurance challenge.
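As a rough illustration of that break, a foundation model can turn a free-form order into a machine-readable task plan. The sketch below is hypothetical; the prompt format and the `llm` callable (any function mapping a prompt string to a text completion) are assumptions for illustration, not part of SAFRON:

```python
import json

def plan_from_language(llm, instruction: str) -> list[dict]:
    """Ask a foundation model to translate a natural-language order
    into a JSON list of executable steps for a robot."""
    prompt = (
        "Rewrite the instruction as a JSON list of steps, each of the form "
        '{"action": "...", "target": "..."}. Reply with JSON only.\n'
        f"Instruction: {instruction}\nJSON:"
    )
    return json.loads(llm(prompt))  # fails loudly if the model drifts off-format

# Example: plan_from_language(llm, "Search the east wing and report anything unusual")
# might yield [{"action": "move",   "target": "east wing"},
#              {"action": "scan",   "target": "east wing"},
#              {"action": "report", "target": "operator"}]
```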
Current methods to assure learning-enabled systems are inadequate to address AI foundation models, DARPA researchers say. Formal neural-network verifiers to date have been effective only in narrow scenarios and do not scale to large foundation models.
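To make the scaling problem concrete, here is a minimal sketch of interval bound propagation, one textbook family of formal neural-network verification. It provably bounds every output a tiny ReLU network can produce over a small input region; the same arithmetic grows far too loose and expensive at the scale of a foundation model with billions of parameters. The weights here are random placeholders:

```python
import numpy as np

def ibp_layer(lo, hi, W, b):
    """Bound W @ x + b over all x with lo <= x <= hi (elementwise)."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # tiny 2-4-1 ReLU network
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])   # input region
lo, hi = ibp_layer(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)       # ReLU is monotone
lo, hi = ibp_layer(lo, hi, W2, b2)
print(f"network output provably lies in [{lo[0]:.3f}, {hi[0]:.3f}]")
```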
In addition, existing training and alignment methodologies are not very robust and do not account for the complex behaviors that AI-enabled robots may encounter in an unconstrained open-world environment. Even worse, foundation models are known to exhibit errant behavior such as hallucination and false confidence in reasoning.
Assurances are crucial to deploying foundation model-enabled robots; a robot controlled by a hallucinating foundation model could fail to execute a critical task, researchers point out.
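One simple and widely studied hedge against hallucination and false confidence, offered here as an illustration rather than anything the solicitation prescribes, is to sample the model several times and treat disagreement as a warning sign. `llm` is again an assumed stochastic prompt-to-text callable:

```python
from collections import Counter

def answer_with_agreement(llm, prompt: str, n: int = 5):
    """Query the model n times; return the majority answer and how often
    it won. Low agreement suggests the robot should not act on it."""
    votes = Counter(llm(prompt) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    return answer, count / n
```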
Instead, the SAFRON project seeks to answer the core question: how, and to what extent, can we assure that foundation model-enabled robots will behave only as directed and intended?
SAFRON seeks to explore approaches that will lead to assurances about the behavior of robots that use foundation models, with particular emphasis on robots that receive commands in natural language; operate in unstructured open-world environments; and incorporate the foundation models in closed-loop decision making. Approaches that provide assurances subject to minimal additional supervision from a human operator in real time also are of interest.
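One shape such an approach could take, sketched here purely to illustrate "minimal additional supervision" and not as SAFRON's method, is a runtime monitor inside the closed loop that vets each action the foundation model proposes and escalates only out-of-policy cases to a human operator. The action whitelist and function names are assumptions:

```python
ALLOWED_ACTIONS = {"move", "scan", "report", "halt"}   # assumed policy whitelist

def supervised_step(proposal: dict, ask_operator) -> dict:
    """Pass vetted actions through; escalate anything else to a human,
    falling back to a safe halt if the operator rejects it."""
    if proposal.get("action") in ALLOWED_ACTIONS:
        return proposal                   # in policy: execute unsupervised
    if ask_operator(proposal):            # rare out-of-policy case: ask a human
        return proposal
    return {"action": "halt"}             # default to a safe state
```

Because the operator is consulted only on the rare out-of-policy proposal, human attention stays minimal while every executed action remains inside an auditable envelope.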