{"id":13583,"date":"2023-11-16T08:09:33","date_gmt":"2023-11-16T11:09:33","guid":{"rendered":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=13583"},"modified":"2023-11-16T08:09:33","modified_gmt":"2023-11-16T11:09:33","slug":"eua-y-china-inician-conversaciones-sobre-riesgos-y-seguridad-en-el-ambito-de-inteligencia-artificial","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=13583","title":{"rendered":"EUA y China inician conversaciones sobre riesgos y seguridad en el \u00e1mbito de inteligencia artificial"},"content":{"rendered":"<p>Los presidentes de EUA (Joe Biden) y de China (Xi Jimping) se reunieron el 15nov23 +para establecer acuerdos bilaterales sobre varios aspectos de inter\u00e9s de ambas naciones, siendo el tema + m\u00e1s relevante y promocionado de ellos: \u201cRiesgos y Seguridad en el \u00e1mbito de la IA\u201d. La futura agenda acordada incluye la reuni\u00f3n de expertos para discutir los riesgos y medidas de seguridad asociadas al desarrollo y empleo de IA en todas las \u00e1reas. El objetivo es \u201c<em>dar pasos en la direcci\u00f3n correcta para determinar los usos aceptables y beneficiosos, as\u00ed como aquellos que son de alto riesgo para la humanidad<\/em>\u201d. Tal vez uno de los objetivos m\u00e1s ambiciosos es que ambos pa\u00edses logren llegar a compromisos de renunciar al empleo de IA en los Sistemas de Comando y Control de Armamento Nuclear\u201d.<\/p>\n<hr \/>\n<p>WASHINGTON \u2014\u00a0Following a highly-publicized meeting between US President Joe Biden and Chinese President Xi Jjinping, Biden announced three key points of agreement. 
While much of the focus is going to be on the two sides restarting mil-to-mil communications and counternarcotics cooperation (specifically against fentanyl), the third initiative stood out as a new item on the US-China agenda: artificial intelligence.<\/p>\n<p>\u201cThirdly,\u201d the president said at a\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=l6fk_Su_Myk\" target=\"_blank\" rel=\"noopener\">press conference<\/a>\u00a0after the summit,\u00a0\u201cwe\u2019re gonna get our experts together to discuss risk and safety issues associated with artificial intelligence. As many of you [know] who travel with me around the world almost everywhere I go, every major leader wants to talk about the impact of artificial intelligence. These are tangible steps in the right direction to determine what\u2019s useful and what\u2019s not useful, but dangerous and what\u2019s acceptable.\u201d<\/p>\n<p>With the press conference focused on fentanyl, Taiwan, and Gaza, Biden didn\u2019t elaborate on the AI plan. But the administration has not only made AI a signature issue with\u00a0<a href=\"https:\/\/breakingdefense.com\/2023\/10\/white-house-ai-exec-order-raises-questions-on-future-of-dod-innovation\/\" target=\"_blank\" rel=\"noopener\">a sweeping executive order<\/a>\u00a0on the subject, it\u2019s also been pushing hard for global norms on military use of AI in particular.<\/p>\n<p>What\u2019s more,\u00a0the Chinese have been showing signs they are receptive, particularly when it comes to renouncing AI command-and-control systems for nuclear weapons. 
While the tie between AI and nuclear weapons was not expressly outlined by either Biden\u2019s comments or a readout from the White House, experts told Breaking Defense ahead of the summit that it could prove to be a key point of agreement between Washington and the country that the Pentagon has\u00a0termed America\u2019s \u201cpacing\u201d challenge.<\/p>\n<p>Given ongoing tensions, \u201cany agreement I think that we\u2019re going to reach with the Chinese really is gonna be low-hanging fruit,\u201d said\u00a0<a href=\"https:\/\/www.scsp.ai\/about\/who-we-are\/\" target=\"_blank\" rel=\"noopener\">Joe Wang<\/a>, a former State and NSC staffer now with the Special Competitive Studies Project. What experts call \u201cnuclear C2\u201d is an ideal candidate, he and other experts told Breaking Defense in interviews ahead of Biden\u2019s announcement. \u201cNobody wants to see AI controlled nuclear weapons, right? Like, even the craziest dictator can probably agree.\u201d<\/p>\n<p>\u201cChina has signaled interest in joining discussions on setting rules and norms for AI, and we should welcome that,\u201d said\u00a0<a href=\"https:\/\/www.gmfus.org\/find-experts\/bonnie-s-glaser\" target=\"_blank\" rel=\"noopener\">Bonnie Glaser<\/a>, head of the Indo-Pacific program at the German Marshall Fund. 
And that interest is reciprocated, she told Breaking Defense before the summit: \u201cThe White House is interested in engaging China on limiting the role of AI in command and control of nuclear weapons.\u201d<\/p>\n<p><strong>Bans Or Norms?<\/strong><\/p>\n<p>Hopes for some kind of joint statement had been high since Saturday, when the South China Morning Post, citing unidentified sources,\u00a0<a href=\"https:\/\/www.scmp.com\/news\/china\/military\/article\/3241177\/biden-xi-set-pledge-ban-ai-autonomous-weapons-drones-nuclear-warhead-control-sources\" target=\"_blank\" rel=\"noopener\">declared<\/a>\u00a0that \u201cPresidents Joe Biden and Xi Jinping are poised to pledge a ban on the use of artificial intelligence in autonomous weaponry, such as drones, and in the control and deployment of nuclear warheads.\u201d (While the SCMP is a venerable independent paper in Hong Kong, it\u2019s become increasingly supportive of the Beijing regime.)<\/p>\n<p>The word \u201cban\u201d\u00a0<a href=\"https:\/\/twitter.com\/ArmsControlWonk\/status\/1723765021799592116\" target=\"_blank\" rel=\"noopener\">raised experts\u2019 eyebrows<\/a>, because there\u2019s no sign that either China or the US would accept a binding restriction on their freedom of action in AI. 
(Indeed, US law arguably prevents the President from making such a commitment without Congress).\u00a0<a href=\"https:\/\/www.ft.com\/content\/908acb66-7e74-4f22-bfae-c842a44b668a\" target=\"_blank\" rel=\"noopener\">Other reports<\/a>\u00a0suggested simply that \u201cChina is seeking an expanded dialogue on artificial intelligence,\u201d while the US Ambassador to APEC, Matt Murray, said in a pre-summit press briefing he didn\u2019t expect an \u201cagreement\u201d on AI.<\/p>\n<p>\u201cThe two sides may not be there yet in terms of reaching a formal agreement on AI,\u201d said\u00a0<a href=\"https:\/\/carnegieendowment.org\/experts\/989\" target=\"_blank\" rel=\"noopener\">Tong Zhao<\/a>, a scholar at the Carnegie Endowment, in an email to Breaking Defense ahead of Biden\u2019s press conference.<\/p>\n<p>This isn\u2019t just a US-China question. Over the past nine months, Washington has been building momentum towards voluntary international norms on the military use of AI \u2014 not just autonomous weapons like drones, but also applications ranging from intelligence analysis algorithms to logistics software. This approach tries to fend off calls by many\u00a0<a href=\"https:\/\/www.stopkillerrobots.org\/\" target=\"_blank\" rel=\"noopener\">peace activists<\/a>\u00a0and\u00a0<a href=\"https:\/\/conferenciaawscostarica2023.com\/communique\/?lang=en\" target=\"_blank\" rel=\"noopener\">nonaligned nations<\/a>\u00a0for a binding ban on \u201ckiller robots,\u201d instead leaving room for the US and its allies to explore \u201cresponsible\u201d use of a widely applicable and rapidly evolving technology.<\/p>\n<p>The American one-two punch came in February. Early that month, the Pentagon rolled out an\u00a0<a href=\"https:\/\/breakingdefense.com\/2023\/02\/dods-clarified-ai-policy-flashes-green-light-for-robotic-weapons-experts\/\" target=\"_blank\" rel=\"noopener\">extensive overhaul<\/a>\u00a0of its policy on military AI and autonomous systems. 
The next week in the Hague, the State Department\u2019s ambassador-at-large for arms control, Bonnie Jenkins,\u00a0<a href=\"https:\/\/www.state.gov\/keynote-remarks-by-u-s-jenkins-t-to-the-summit-on-responsible-artificial-intelligence-in-the-military-domain-reaim-ministerial-segment\/\" target=\"_blank\" rel=\"noopener\">unveiled<\/a>\u00a0a \u201cPolitical Declaration on Responsible Military Use of Artificial Intelligence and Autonomy\u201d [<a href=\"https:\/\/www.state.gov\/wp-content\/uploads\/2023\/10\/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf\" target=\"_blank\" rel=\"noopener\">PDF<\/a>] that generalized that US approach for international adoption. Since February,\u00a0<a href=\"https:\/\/www.state.gov\/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy\/\" target=\"_blank\" rel=\"noopener\">45 other countries<\/a>\u00a0have joined the US in endorsing the Political Declaration, from core allies like Australia, Britain, France, Germany, and South Korea to geopolitical problem children like Hungary, Libya, and Turkey.<\/p>\n<p>China, unsurprisingly, has not signed on to the US-led approach. \u201cIts diplomatic strategy is still focused on rivaling and counterbalancing US efforts to set future AI governance standards, especially in the military sphere,\u201d said Zhao. \u201cIn managing new military technologies, China frequently resists endorsing \u2018responsible\u2019 practices, contending that \u2018responsibility\u2019 is a politically charged concept lacking objective clarity.\u201d<\/p>\n<p>In diplomacy, however, a degree of ambiguity can be a feature, not a bug. 
\u201cThe Political Declaration \u2026 it\u2019s not binding and so I think it gives us some wiggle room,\u201d said Wang.<\/p>\n<p>However, that kind of \u201cwiggle room\u201d \u2014 which\u00a0<a href=\"https:\/\/breakingdefense.com\/2023\/02\/dods-clarified-ai-policy-flashes-green-light-for-robotic-weapons-experts\/\" target=\"_blank\" rel=\"noopener\">allows for a wide range of automated weapons<\/a>\u00a0\u2014 is exactly what activists are seeking a binding ban to prevent.<\/p>\n<p>\u201cWe obviously would like to see the US now moving towards clear and strong support for legal instruments\u201d restricting \u201clethal autonomous weapons systems,\u201d\u00a0said\u00a0<a href=\"https:\/\/www.stopkillerrobots.org\/our-team\/\" target=\"_blank\" rel=\"noopener\">Catherine Connolly<\/a>, a lead researcher at the international activist group Stop Killer Robots, in an interview with Breaking Defense. \u201cWe don\u2019t think that guidelines and political declarations are enough, and the majority of states don\u2019t think they\u2019re enough either.\u201d<\/p>\n<p><strong>Fear of \u2018Killer Robots\u2019<\/strong><\/p>\n<p>Efforts towards a new international law have been stymied by a decade of deadlock in Geneva, where a Group of Government Experts convened by the UN consistently has fallen short of consensus \u2014 which the Geneva process requires, giving every state a veto.<\/p>\n<p>So the anti-AI-arms movement went over Geneva\u2019s head to New York,\u00a0<a href=\"https:\/\/www.stopkillerrobots.org\/news\/unga-resolution-on-autonomous-weapons-systems-gives-states-historic-opportunity-to-voteagainstthemachine\/\" target=\"_blank\" rel=\"noopener\">proposing<\/a>\u00a0a draft resolution to the UN General Assembly. 
Instead of calling for an immediate ban \u2014 which would have been certain to fail \u2014 the resolution [<a href=\"https:\/\/documents-dds-ny.un.org\/doc\/UNDOC\/LTD\/N23\/302\/66\/PDF\/N2330266.pdf\" target=\"_blank\" rel=\"noopener\">PDF<\/a>], introduced by Austria, simply \u201crequests the Secretary-General to seek the views of Member States,\u201d industry, academia, and non-government organizations, submit a report, and officially put the issue on the UN agenda.<\/p>\n<p>The resolution passed by a vote of\u00a0<a href=\"https:\/\/press.un.org\/en\/2023\/gadis3731.doc.htm\" target=\"_blank\" rel=\"noopener\">164 to five<\/a>, with the US in favor and Russia opposing. Among the eight abstentions: China.<\/p>\n<p>\u201cIt\u2019s great that the US has joined the large majority of states in voting yes,\u201d Connolly said. \u201cIt\u2019s a little disappointing that China abstained, given that they have previously noted their support for a legal instrument, [but] there were some parts of the resolution that they didn\u2019t agree with in terms of characteristics and definitions.\u201d<\/p>\n<p>Beijing, in fact, tends to use a\u00a0<a href=\"https:\/\/crsreports.congress.gov\/product\/details?prodcode=IF11294\" target=\"_blank\" rel=\"noopener\">uniquely narrow definition<\/a>\u00a0of \u201cautonomous weapon,\u201d one that only counts systems that, once unleashed, \u201cdo not have any human oversight and cannot be terminated\u201d (i.e. shut off). That\u2019s allowed China to claim it backs a ban while actually excluding the vast majority of autonomous\u00a0systems any military might seek to build.<\/p>\n<p>\u201cChina seems hesitant to enhance the United Nations General Assembly\u2019s role in regulating military AI,\u201d Zhao told Breaking Defense. 
It prefers to work through the Group of Government Experts in Geneva, where the requirement for consensus gives Beijing \u2014 and Moscow, and Washington, and every other state \u2014 a de facto veto.<\/p>\n<p>The US itself has historically preferred the Geneva process and, even as it endorsed the UN resolution, warned that it \u201cshould not undercut the GGE,\u201d noted CSIS scholar\u00a0<a href=\"https:\/\/www.csis.org\/people\/james-andrew-lewis\" target=\"_blank\" rel=\"noopener\">James Lewis<\/a>. That \u201cis pretty much what the Russians said\u201d when they voted no, he noted in an email to Breaking Defense. \u201cIt\u2019s just we played it better, for once, by going with the crowd.\u201d<\/p>\n<p>\u201cBinding law is not in the cards,\u201d Lewis added, \u201cbut if the US can pull in others like the UK, France, and maybe the EU into a comprehensive effort, there can be progress on norms.\u201d<\/p>\n<p>So far, international discussion of the non-binding Political Declaration has actually led Washington to water it down \u2014 most notably, by removing a passage renouncing AI control of nuclear weapons.<\/p>\n<p>\u201cThe February version included a statement based on a commitment the United States made together with France and the United Kingdom to \u2018maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment,\u2019\u201d\u00a0<a href=\"https:\/\/www.state.gov\/under-secretary-jenkins-remarks-at-the-launch-event-for-the-political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy\/\" target=\"_blank\" rel=\"noopener\">Jenkins said on Monday<\/a>. 
\u201cIn our consultations with others, it became clear that, while many welcomed this statement of assurance, the nexus between AI and nuclear weapons is an area that requires much more discussion.\u00a0We removed this statement so that it did not become a stumbling block for States to endorse.\u201d<\/p>\n<p>That such a provision proved too much for some countries, considering\u00a0the seminal sci-fi nightmare scenarios made infamous in the 1980s by Terminator and WarGames, shows how hard it is to get any consensus on the issue. An agreement to retain human control over nuclear weapons seems like \u201cthe lowest-hanging fruit,\u201d said Wang.<\/p>\n<p>\u201cThere can be that kind of alignment of interests\u2026when it comes to nuclear weapons,\u201d Wang said. \u201cWhen it comes to more conventional weapons, I think that\u2019s where it gets a little murkier.\u201d<\/p>\n<p><strong>Source:<\/strong> <a href=\"https:\/\/breakingdefense.com\/2023\/11\/biden-launches-ai-risk-and-safety-talks-with-china-is-nuclear-c2-a-likely-focus\/\" target=\"_blank\" rel=\"noopener\"><em>https:\/\/breakingdefense.com<\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The presidents of the US (Joe Biden) and China (Xi Jinping) met on 15 November 2023 to establish bilateral agreements on several issues of interest to&hellip; 
<\/p>\n","protected":false},"author":1,"featured_media":13584,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[18,2,23],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/13583"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=13583"}],"version-history":[{"count":1,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/13583\/revisions"}],"predecessor-version":[{"id":13585,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/13583\/revisions\/13585"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/media\/13584"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=13583"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=13583"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=13583"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}