{"id":3869,"date":"2019-04-30T12:37:29","date_gmt":"2019-04-30T15:37:29","guid":{"rendered":"https:\/\/www.nachodelatorre.com.ar\/mosconi\/?p=3869"},"modified":"2019-04-30T12:37:29","modified_gmt":"2019-04-30T15:37:29","slug":"armas-autonomas-con-capacidad-letal","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=3869","title":{"rendered":"Autonomous weapons with lethal capability"},"content":{"rendered":"<div>The leaders of the world powers are convinced that whoever leads the development of autonomous weapon systems, operating in teams with human combatants and employing artificial intelligence (AI), will wield global power. This article examines whether an adequate code of ethics exists for the employment of lethal autonomous weapons.<\/div>\n<p><!--more--><\/p>\n<p><img loading=\"lazy\" class=\" alignright\" src=\"https:\/\/thehill.com\/sites\/default\/files\/styles\/thumb_small_article\/public\/soldier_facepaint_04212019getty.jpg?itok=mFwQVMUB\" alt=\"Ethics for the AI-enabled warfighter \u2014 the human 'Warrior-in-the-Design'\" width=\"400\" height=\"225\" \/>Can a victor truly be crowned in the\u00a0<a href=\"https:\/\/www.amazon.com\/AI-Superpowers-China-Silicon-Valley\/dp\/132854639X\" target=\"_blank\" rel=\"noopener noreferrer\">great power competition<\/a>\u00a0for artificial intelligence?<\/p>\n<p>According to Russian President\u00a0<a href=\"https:\/\/www.theverge.com\/2017\/9\/4\/16251226\/russia-ai-putin-rule-the-world\" target=\"_blank\" rel=\"noopener noreferrer\">Vladimir Putin<\/a>, \u201cwhoever becomes the leader in this sphere will become the ruler of the world.\u201d<\/p>\n<p>But the life of a state, much like that of a human being, is always subject to shifts of fortune.<\/p>\n<p>To illustrate, let\u2019s consider this\u00a0<a href=\"http:\/\/classics.mit.edu\/Plutarch\/solon.html\" target=\"_blank\" 
rel=\"noopener noreferrer\">ancient tale<\/a>. At a lavish banquet King Croesus asked\u00a0<a href=\"https:\/\/www.britannica.com\/biography\/Solon\" target=\"_blank\" rel=\"noopener noreferrer\">Solon of Athens<\/a>\u00a0if he knew anyone more fortunate than Croesus; to which Solon wisely answered: \u201cThe future bears down upon each one of us with all the hazards of the unknown, and we can only count a man happy when the gods have granted him good fortune to the end.\u201d<\/p>\n<p>Thus, to better prepare the U.S. for sustainable leadership in AI innovation and military ethics, I recommend a set of principles to guide human warfighters in employing lethal autonomous weapon systems \u2014 armed robots.<\/p>\n<p><strong><em>Sustainable leadership<\/em><\/strong><\/p>\n<p><a href=\"https:\/\/www.hsdl.org\/?abstract&amp;did=819279\" target=\"_blank\" rel=\"noopener noreferrer\">By 2035<\/a>\u00a0the U.S. expects to have ground forces teaming up with robots. The discussion on how autonomous weapon systems should responsibly be integrated with human military elements, however, is only slowly unfolding. As Congress\u00a0<a href=\"https:\/\/www.hsdl.org\/?abstract&amp;did=819279\" target=\"_blank\" rel=\"noopener noreferrer\">begins evaluating<\/a>\u00a0what the Defense Department should do, it must also consider preparing tomorrow\u2019s warfighters for how armed robots will test military ethics.<\/p>\n<p>As a starting point of reference,\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/Three_Laws_of_Robotics\" target=\"_blank\" rel=\"noopener noreferrer\">Isaac Asimov\u2019s Three Laws of Robotics<\/a> require: (1) a robot must not harm humans; (2) a robot must follow all instructions by humans, unless following those instructions would violate the first law; and (3) a robot must protect itself, so long as its actions do not violate the first or second laws. Unfortunately, these laws are silent on how human ethics apply here. 
Thus, my research into autonomous weapon systems and ethical theories re-imagines Asimov\u2019s Laws and offers a new code of conduct for servicemembers.<\/p>\n<p><strong><em>What is a code of conduct?<\/em><\/strong><\/p>\n<p>Fundamentally, it is a set of beliefs on how to behave. Each service branch teaches members to follow a code of conduct like the\u00a0<a href=\"https:\/\/www.army.mil\/values\/soldiers.html\" target=\"_blank\" rel=\"noopener noreferrer\">Soldier\u2019s Creed and Warrior Ethos<\/a>, the\u00a0<a href=\"https:\/\/www.airforce.com\/mission\/vision\" target=\"_blank\" rel=\"noopener noreferrer\">Airman\u2019s Creed<\/a>, and the\u00a0<a href=\"https:\/\/www.navy.mil\/navydata\/nav_legacy.asp?id=257\" target=\"_blank\" rel=\"noopener noreferrer\">Sailor\u2019s Creed<\/a>. Reflected across these distinct codes, however, is a shared commitment to values such as duty, honor, and integrity.<\/p>\n<p>Drawing inspiration from these concepts and several robotics strategy assessments by the\u00a0<a href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/R\/R45392\" target=\"_blank\" rel=\"noopener noreferrer\">Marine Corps<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.tradoc.army.mil\/Portals\/14\/Documents\/RAS_Strategy.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Army<\/a>, I offer a guiding vision \u2014 a human Warrior-in-the-Design Code of Conduct.<\/p>\n<p>The\u00a0<a href=\"https:\/\/www.steptoecyberblog.com\/2019\/02\/19\/episode-251-executive-orders-and-alien-abductions\/\" target=\"_blank\" rel=\"noopener noreferrer\">Warrior-in-the-Design<\/a>\u00a0concept embodies both the\u00a0<a href=\"https:\/\/www.esd.whs.mil\/Portals\/54\/Documents\/DD\/issuances\/dodd\/300009p.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Defense Directive<\/a> that autonomous systems be designed to support the human judgment of commanders and operators in employing lethal force, and Human Rights Watch&#8217;s definition of\u00a0<a 
href=\"https:\/\/www.hrw.org\/news\/2016\/04\/11\/killer-robots-and-concept-meaningful-human-control\" target=\"_blank\" rel=\"noopener noreferrer\">human-out-of-the-loop weapons<\/a>, <em>i.e.<\/em>, robots that can select targets and apply force without human input or interaction.<\/p>\n<p><strong><em>The Warrior-in-the-Design Code of Conduct for Servicemembers:<\/em><\/strong><\/p>\n<ul>\n<li>\u201cI am the Warrior-in-the-Design;<\/li>\n<li>Every decision to employ force begins with human judgment;<\/li>\n<li>I verify the autonomous weapon system\u2019s target selection before authorizing engagement, escalating to fully autonomous capabilities only as a last resort;<\/li>\n<li>I will never forget my duty to responsibly operate these systems for the safety of my comrades and to uphold the law of war;<\/li>\n<li>For I am the Warrior-in-the-Design.\u201d<\/li>\n<\/ul>\n<p>These principles encourage integrating AI and armed robots in ways that enhance \u2014 rather than supplant \u2014 human capability and the\u00a0<a href=\"https:\/\/www.amazon.com\/Killing-Psychological-Cost-Learning-Society\/dp\/0316040932\" target=\"_blank\" rel=\"noopener noreferrer\">warrior psyche in combat<\/a>. Further, they reinforce that humans are the central figures in overseeing, managing, and employing autonomous weapons.<\/p>\n<p><strong><em>International developments<\/em><\/strong><\/p>\n<p>Granted, each country\u2019s approach to developing autonomous weapons will vary. 
For instance,\u00a0<a href=\"https:\/\/fas.org\/sgp\/crs\/weapons\/R45392.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Russia\u2019s military<\/a>\u00a0expects \u201clarge unmanned ground vehicles [to do] the actual fighting &#8230; alongside or ahead of the human fighting force.\u201d Under\u00a0<a href=\"https:\/\/foreignpolicy.com\/2019\/03\/05\/whoever-predicts-the-future-correctly-will-win-the-ai-arms-race-russia-china-united-states-artificial-intelligence-defense\/\" target=\"_blank\" rel=\"noopener noreferrer\">China&#8217;s<\/a>\u00a0New Generation Plan, the country aspires to\u00a0<a href=\"https:\/\/www.newamerica.org\/cybersecurity-initiative\/digichina\/blog\/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017\" target=\"_blank\" rel=\"noopener noreferrer\">lead the world in AI development by 2030<\/a>\u00a0\u2014 including enhanced man-machine coordination and unmanned systems like service robots.<\/p>\n<p>So far, the U.S. has focused on\u00a0<a href=\"https:\/\/fas.org\/sgp\/crs\/weapons\/R45392.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">unmanned ground systems<\/a>\u00a0to support intelligence, surveillance, and reconnaissance operations. The Pentagon\u2019s Joint Artificial Intelligence Center is currently testing how AI can support the military in tasks such as\u00a0<a href=\"https:\/\/foreignpolicy.com\/2019\/02\/13\/no-the-pentagon-is-not-working-on-killer-robots-yet\/amp\/\" target=\"_blank\" rel=\"noopener noreferrer\">firefighting and predictive maintenance<\/a>. 
Additionally, President Trump\u2019s\u00a0<a href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/executive-order-maintaining-american-leadership-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener noreferrer\">Executive Order on Artificial Intelligence<\/a> encourages government agencies to prioritize AI research and development. Adopting the Warrior-in-the-Design Code of Conduct is a helpful first step toward supporting this initiative.<\/p>\n<p><strong><em>How?<\/em><\/strong><\/p>\n<p>It would signal to\u00a0<a href=\"https:\/\/www.washingtonpost.com\/news\/the-switch\/wp\/2018\/06\/01\/google-to-drop-pentagon-ai-contract-after-employees-called-it-the-business-of-war\/?utm_term=.c2cc49c01a8e\" target=\"_blank\" rel=\"noopener noreferrer\">private industry<\/a>\u00a0and international peers that the U.S. is committed to the responsible development of these technologies and to upholding\u00a0<a href=\"https:\/\/theconversation.com\/ban-killer-robots-to-protect-fundamental-moral-and-legal-principles-101427\" target=\"_blank\" rel=\"noopener noreferrer\">international law<\/a>. Some critics object to the idea of \u2018<a href=\"https:\/\/www.hrw.org\/report\/2012\/11\/19\/losing-humanity\/case-against-killer-robots\" target=\"_blank\" rel=\"noopener noreferrer\">killer robots<\/a>\u2019 because they would\u00a0<a href=\"https:\/\/theconversation.com\/losing-control-the-dangers-of-killer-robots-58262\" target=\"_blank\" rel=\"noopener noreferrer\">lack human ethical decision-making capabilities<\/a>\u00a0and may\u00a0<a href=\"https:\/\/www.hrw.org\/sites\/default\/files\/report_pdf\/arms0818_web.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">violate moral and legal principles<\/a>. 
The Defense Department\u2019s response is twofold: First, the technology is\u00a0<a href=\"https:\/\/foreignpolicy.com\/2019\/02\/13\/no-the-pentagon-is-not-working-on-killer-robots-yet\/amp\/\" target=\"_blank\" rel=\"noopener noreferrer\">nowhere near the advancement needed<\/a>\u00a0to operate fully autonomous weapons, the ones that could \u2014 hypothetically, at least \u2014 examine potential targets, evaluate how threatening they are, and fire accordingly. Second, such technological capabilities could help\u00a0<a href=\"https:\/\/dod.defense.gov\/News\/News-Releases\/News-Release-View\/Article\/1755388\/new-strategy-outlines-path-forward-for-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener noreferrer\">save the lives of military personnel and civilians<\/a> by automating tasks that are \u201c<a href=\"https:\/\/wilsonquarterly.com\/quarterly\/winter-2009-robots-at-war\/robots-at-war-the-new-battlefield\/\" target=\"_blank\" rel=\"noopener noreferrer\">dull, dirty or dangerous<\/a>\u201d for humans. 
Perhaps this creed concept could help bridge the communication divide between\u00a0<a href=\"https:\/\/www.stopkillerrobots.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">groups<\/a>\u00a0that worry such weapons violate human dignity and servicemembers who critically need automated assistance on the battlefield.<\/p>\n<p>The future of AI bears down upon each of us \u2014 let reason and ethics guide us there.<\/p>\n<p><strong>Source:\u00a0<\/strong><em><a href=\"https:\/\/thehill.com\/opinion\/cybersecurity\/439898-ethics-for-the-ai-enabled-warfighter-the-human-warrior-in-the-design\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/thehill.com<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The leaders of the world powers are convinced that whoever leads the development of autonomous weapon systems, operating in teams with human combatants&hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[18,2,29],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/3869"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3869"}],"version-history":[{"count":0,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/3869\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3869"}],"wp:term":[{"taxonomy":"category","embedda
ble":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3869"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3869"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}