{"id":2849,"date":"2018-04-07T11:43:11","date_gmt":"2018-04-07T14:43:11","guid":{"rendered":"https:\/\/www.nachodelatorre.com.ar\/mosconi\/?p=2849"},"modified":"2018-04-07T11:43:11","modified_gmt":"2018-04-07T14:43:11","slug":"universidades-e-ingenieros-de-software-en-contra-de-los-war-robots","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=2849","title":{"rendered":"Universidades e ingenieros de software en contra de los &#8220;War Robots&#8221;"},"content":{"rendered":"<p style=\"font-weight: 400;\"><u><\/u>Mientras pa\u00edses como Rusia o China invierten grandes recursos\u00a0\u00a0y sin restricciones, en el desarrollo de Inteligencia Artificial (IA) aplicada a Sistemas de Armas letales, en\u00a0\u00a0EUA y algunos de sus aliados militares como Corea del Sur,\u00a0\u00a0los ingenieros de empresas y universitarios que trabajan asociados a proyectos de Defensa,\u00a0\u00a0han presentado una f\u00e9rrea oposici\u00f3n a la aplicaci\u00f3n de estos avances tecnol\u00f3gicos en sistemas letales.\u00a0\u00a0Esto genera debates y\u00a0\u00a0controversias\u00a0\u00a0acerca de la posibilidad\u00a0\u00a0que los sistemas aut\u00f3nomos,\u00a0\u00a0tengan absoluta potestad de decidir sobre la vida de los humanos.<!--more--><\/p>\n<p>WASHINGTON: The debate over the use of\u00a0artificial intelligence in warfare\u00a0is heating up, with\u00a0Google\u00a0employees protesting their company\u2019s Pentagon contracts, South Koreans protesting university cooperation with their military, and international experts gathering next week to debate whether to pursue a treaty limiting\u00a0military AI. 
While countries like\u00a0Russia\u00a0and\u00a0China\u00a0are investing heavily in\u00a0artificial intelligence without restraints, the US and allied militaries like South Korea face a rising tide of opposition.<\/p>\n<p><strong>Rule of Law<\/strong><\/p>\n<p>The international conclave has the kind of name you only encounter when dealing with the United Nations and related organizations: the\u00a0Convention on Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems (CCWGGELAWS?). Those experts meet next week and in August. Note they have a new acronym for armed AI systems: LAWS.<\/p>\n<p>How is all this arcana relevant to the US military? Treaties are the bedrock of international relations, specific agreements that help define the relations between states.\u00a0Idealists \u2014 and those who want to bind their enemy\u2019s conduct \u2014 often believe treaties are the best mechanism for governing what is allowed in warfare.<\/p>\n<p><img loading=\"lazy\" class=\"size-medium wp-image-43811 alignright\" src=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2018\/04\/oconnell-mary-ellen-notre-dame-240x300.jpg\" sizes=\"(max-width: 240px) 100vw, 240px\" srcset=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2018\/04\/oconnell-mary-ellen-notre-dame-240x300.jpg 240w, https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2018\/04\/oconnell-mary-ellen-notre-dame.jpg 320w\" alt=\"Notre Dame photo\" width=\"240\" height=\"300\" \/><\/p>\n<p>Mary Ellen O\u2019Connell, a law professor at Notre Dame, argued with quiet passion for restraints on AI, comparing it to nuclear weapons and other weapons of mass destruction. What happens, she asked at a\u00a0Brookings Institution forum\u00a0today, when AI is mated with nanotechnology or other advanced technologies? How do humans ensure they are the final decision makers? 
Given all that, she predicts \u201cwe are going to see some kind of limitation on AI\u201d when the governments that belong to the Convention on Conventional Weapons meet in November to consider what the experts have come up with.<\/p>\n<p>To get an idea where many of those experts are coming from,\u00a0take a look at this\u00a02016 report by the International Committee of the Red Cross:<\/p>\n<p><em>\u201cThe development of autonomous weapon systems \u2014 that is, weapons that are capable of independently selecting and attacking targets without human intervention \u2014 raises the prospect of the loss of human control over weapons and the use of force.\u201d<\/em><\/p>\n<p>O\u2019Connell raised this issue, implying that the lack of personal accountability might make AI impermissible under international law.<\/p>\n<p>Former Defense Secretary Ash Carter pledged\u00a0several times that the United States would always keep a human in or\u00a0on the loop\u00a0of any system designed to kill other humans. As far as we know, that is still US policy.<\/p>\n<p><img loading=\"lazy\" class=\"size-full wp-image-43812 alignright\" src=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2018\/04\/dunlap2-MG-ret-charles.jpg\" alt=\"Duke University photo\" width=\"200\" height=\"145\" \/><\/p>\n<p>A very different perspective on the issue was offered by retired Air Force Maj. Gen.\u00a0Charlie Dunlap, \u00a0executive director of Duke Law School\u2019s Center on Law, Ethics and National Security and former Deputy Judge Advocate General. He cautioned against trying to ban specific technologies, noting that there\u2019s an international ban on the use of lasers to\u00a0<em>blind<\/em>\u00a0people in combat \u2014 but there is no ban against using a laser to\u00a0<em>incinerate<\/em>\u00a0someone. 
The better approach is to \u201cstrictly comply with the laws of war, rather than try to ban certain types of technology,\u201d he argued.<\/p>\n<p>As a public service, let\u2019s remind our readers of one of the first efforts to deal with this issue, Isaac Asimov\u2019s \u201cThree Laws of Robotics.\u201d<\/p>\n<ol>\n<li>A robot may not injure a human being or, through inaction, allow a human being to come to harm.<\/li>\n<li>A robot must obey orders given it by human beings except where such orders would conflict with the First Law.<\/li>\n<li>A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.<\/li>\n<\/ol>\n<p>Of course, Asimov later added this, known as the Zeroth Law:\u00a0\u201cA robot may not harm humanity, or, by inaction, allow humanity to come to harm.\u201d If an AI, in the service of a government, is killing enemy humans, it would appear to violate Asimov\u2019s first law. But the actual laws of war, the Geneva Conventions, are clearly defined and do not ban intelligent systems from commanding and using weapons. 
If the AI is obeying all rules of war and can be destroyed or curtailed should it begin violating those rules, one can argue that an AI is less likely than a human to break down under the stress of combat and violate the rules of war.<\/p>\n<p><img loading=\"lazy\" class=\"size-large wp-image-26324\" src=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2016\/02\/Google-Car-painted-side-flagship-1-1024x683.jpg\" sizes=\"(max-width: 640px) 100vw, 640px\" srcset=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2016\/02\/Google-Car-painted-side-flagship-1-1024x683.jpg 1024w, https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2016\/02\/Google-Car-painted-side-flagship-1-300x200.jpg 300w, https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2016\/02\/Google-Car-painted-side-flagship-1-768x512.jpg 768w\" alt=\"Google photo\" width=\"640\" height=\"427\" \/><\/p>\n<p><strong>Google Revolt<\/strong><\/p>\n<p>Meanwhile, thousands of engineers, researchers, and scientists from Seoul to Silicon Valley are in open revolt against the marriage of artificial intelligence technologies and the military, and have targeted Google and a top South Korean research university for projects they have kicked off with their respective militaries.<\/p>\n<p>The issue of the militarization of AI has been simmering for years, but recent, well-publicized advances by the Chinese and Russians have pushed Western military leaders to scramble to keep pace by pumping tens of millions of dollars into collaborations with civilian and academic institutions. 
The projects, and the headlines they\u2019re generating, have dragged into the open difficult issues that had been simmering for some time over robotics research and the exploding arms race in AI and autonomous technologies.<\/p>\n<p>A group of about 3,100 Google engineers\u00a0signed a petition\u00a0protesting the company\u2019s involvement with Project Maven, the offshoot of the Pentagon\u2019s\u00a0Algorithmic Warfare task force, which uses AI to collect and analyze drone footage much more quickly and thoroughly than a human can, to help military commanders.<\/p>\n<p>\u201cWe believe that Google should not be in the business of war,\u201d\u00a0said the letter, addressed to Sundar Pichai, the company\u2019s chief executive, which was first reported by the\u00a0<em>New York Times<\/em>. The letter also demands that the project be cancelled and the company \u201cdraft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.\u201d<\/p>\n<p><img loading=\"lazy\" class=\"size-full wp-image-43819\" src=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2018\/04\/Hanwa-armored-vehicles-ROK-Korea-160613_575x387.png\" sizes=\"(max-width: 575px) 100vw, 575px\" srcset=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2018\/04\/Hanwa-armored-vehicles-ROK-Korea-160613_575x387.png 575w, https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2018\/04\/Hanwa-armored-vehicles-ROK-Korea-160613_575x387-300x202.png 300w\" alt=\"Hanwa Group photo\" width=\"575\" height=\"387\" \/><\/p>\n<p>A second letter emerged Wednesday. This one was aimed at a top South Korean university which had kicked off a research project with the top Korean defense company. 
The missive,\u00a0signed\u00a0by more than 50 AI researchers and scientists from 30 different countries, lambasted South Korea\u2019s\u00a0KAIST\u00a0university for opening a lab in conjunction with\u00a0Hanwha Systems, South Korea\u2019s leading arms manufacturer.<\/p>\n<p>The lab, dubbed the \u201cResearch Center for the Convergence of National Defense and Artificial Intelligence,\u201d is planned as a forum for academia to partner with the South Korean military to explore how AI can bolster national security. The university\u2019s website said that it\u2019s\u00a0looking to develop\u00a0\u201cAI-based command and decision systems, composite navigation algorithms for mega-scale unmanned undersea vehicles, AI-based smart aircraft training systems, and AI-based smart object tracking and recognition technology.\u201d<\/p>\n<p><img loading=\"lazy\" class=\"size-medium wp-image-23819 alignright\" src=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2015\/10\/Scharre-CNAS-Paul-ScharreP_WEB_HIGH-200x300.jpg\" sizes=\"(max-width: 200px) 100vw, 200px\" srcset=\"https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2015\/10\/Scharre-CNAS-Paul-ScharreP_WEB_HIGH-200x300.jpg 200w, https:\/\/breakingdefense.com\/wp-content\/uploads\/sites\/3\/2015\/10\/Scharre-CNAS-Paul-ScharreP_WEB_HIGH-682x1024.jpg 682w\" alt=\"\" width=\"200\" height=\"300\" \/><\/p>\n<p>The university\u2019s leaders have said they have no intention of developing autonomous weapons that lack human control, but the protesters said they will not visit or work with the world-renowned institution until it pledges not to build autonomous weapons.<\/p>\n<p>As for the US effort, a Pentagon spokesperson told\u00a0<em>Breaking Defense<\/em>\u00a0that Maven \u201cis fully governed by, and complies with\u201d U.S. 
law and the laws of armed conflict and is \u201cdesigned to ensure human involvement to the maximum extent possible in the employment of weapon systems.\u201d<\/p>\n<p>\u201cI think it\u2019s good that we\u2019re having a conversation about this,\u201d said\u00a0Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security. As far as Maven goes, he said, \u201cI think this application is benign,\u201d since it mostly uses open-source technologies, but he understands that engineers are concerned about the \u201cslippery slope\u201d of the greater military use of AI.<\/p>\n<p>\u201cResearchers have for decades been able to do their AI work and its applications have been very theoretical,\u201d Scharre said, \u201cbut some of the advances we\u2019ve seen in machine learning have been making this stuff very real, including for military applications.\u201d No wonder, then, that legal scholars and software programmers alike have started wrestling in earnest with the implications of armed\u00a0AI.<\/p>\n<p style=\"font-weight: 400;\"><strong>Source:<\/strong>\u00a0<em><a href=\"https:\/\/breakingdefense.com\/2018\/04\/a-treaty-to-ban-autonomous-intelligence-weapons\/?utm_source=hs_email&amp;utm_medium=email&amp;utm_content=61904032&amp;_hsenc=p2ANqtz-8SegIOVoTUPEfhemB9pBUo9ppO1Oe1S7o5AoX4wGFzZ3VrqZbNmz8YOwysNlP1KKgBaU4mtLzjpQyDk7-kEIvIEe5F0g&amp;_hsmi=61904032\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/breakingdefense.com<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>While countries such as Russia and China pour vast, unrestricted resources into developing Artificial Intelligence (AI) for lethal weapon systems, in the US&hellip; 
<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[18,2,23,29],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/2849"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2849"}],"version-history":[{"count":0,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/2849\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2849"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2849"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2849"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}