{"id":4010,"date":"2019-06-05T12:10:34","date_gmt":"2019-06-05T15:10:34","guid":{"rendered":"https:\/\/www.nachodelatorre.com.ar\/mosconi\/?p=4010"},"modified":"2019-06-05T12:10:34","modified_gmt":"2019-06-05T15:10:34","slug":"inteligencia-artificial-y-los-principios-de-la-guerra","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=4010","title":{"rendered":"Inteligencia artificial y los principios de la guerra"},"content":{"rendered":"<p style=\"text-align: left;\" align=\"center\"><span lang=\"ES\">El punto central de la IA es pensar en cosas que los humanos no pueden .\u00a0\u00a0Qu\u00e9 sucede cuando\u00a0la Inteligencia Artificial produce una soluci\u00f3n demasiado compleja par ser entendida por los humanos?; Es posible confiar en la computadora para guiar la estrategia?; Se modificar\u00e1\u00a0\u00a0la naturaleza de las decisiones de Comando?\u00a0\u00a0. Sera necesario rever los principios de la conducci\u00f3n.<\/span><!--more--><\/p>\n<p>ARMY WAR COLLEGE: What happens when\u00a0Artificial Intelligence\u00a0produces a war strategy too complex for human brains to understand? Do you\u00a0trust the computer\u00a0to guide your moves, like a traveler blindly following GPS? Or do you reject the plan and, with it, the potential for a strategy so smart it\u2019s literally superhuman?<\/p>\n<p>At this 117-year-old institution dedicated to educating future generals, officers and civilian wrestled\u00a0this week\u00a0with how AI could change the nature of command. 
(The Army invited me and paid for my travel.)<\/p>\n<p><img loading=\"lazy\" class=\"size-medium wp-image-2271\" src=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2012\/02\/aegis-300x168.jpeg\" sizes=\"(max-width: 300px) 100vw, 300px\" srcset=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2012\/02\/aegis-300x168.jpeg 300w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2012\/02\/aegis-230x130.jpeg 230w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2012\/02\/aegis.jpeg 640w\" alt=\"\" width=\"300\" height=\"168\" \/><\/p>\n<p>\u201cI\u2019m not talking about\u00a0killer robots,\u201d said Prof. Andrew Hill, the War College\u2019s first-ever\u00a0chair of strategic leadership\u00a0and one of the conference\u2019s lead organizers, at the opening session. The Pentagon wants\u00a0AI to assist human combatants, not replace them. The issue is what happens once humans start taking military advice \u2014 or even orders \u2014 from machines.<\/p>\n<p>The reality is that this happens already, to some extent. Every time someone looks at a radar or sonar display, for example, they\u2019re counting on complicated software to correctly interpret a host of signals no human can see. The\u00a0Aegis\u00a0air and missile defense system on dozens of Navy warships recommends which targets to shoot down with which weapons, and if the human operators are overwhelmed, they can put Aegis on automatic and let it fire the interceptors itself. This mode is meant to stop massive salvos of incoming missiles, but it could also shoot down manned aircraft.<\/p>\n<p>Now, Aegis isn\u2019t artificial intelligence. It rigidly executes pre-written algorithms, without machine learning\u2019s ability to improve itself. 
But it is a long-standing example of the kind of complex automation that is going to become more common as technology improves.<\/p>\n<p><img loading=\"lazy\" class=\"size-medium wp-image-49416\" src=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2018\/10\/ENVG-Rapid-Target-Acquisition-300x169.jpg\" sizes=\"(max-width: 300px) 100vw, 300px\" srcset=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2018\/10\/ENVG-Rapid-Target-Acquisition-300x169.jpg 300w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2018\/10\/ENVG-Rapid-Target-Acquisition-768x432.jpg 768w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2018\/10\/ENVG-Rapid-Target-Acquisition-1024x576.jpg 1024w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2018\/10\/ENVG-Rapid-Target-Acquisition-420x238.jpg 420w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2018\/10\/ENVG-Rapid-Target-Acquisition-230x130.jpg 230w\" alt=\"Army photo\" width=\"300\" height=\"169\" \/><\/p>\n<p>While\u00a0the US military won\u2019t let a computer pull the trigger, it is developing target-recognition AI to go on everything from recon drones to\u00a0tank gunsights\u00a0to\u00a0infantry goggles. The armed services are exploring\u00a0predictive maintenance algorithms that warn mechanics to fix failing components before mere human senses can detect that something\u2019s wrong, cognitive electronic warfare systems that figure out the best way to jam enemy radar, and airspace management systems that converge strike fighters, helicopters, and artillery shells on the same target without fratricidal collisions. Future \u201cdecision aids\u201d might automate staff work, turning a commander\u2019s general plan of attack into detailed timetables of which combat units and\u00a0supply convoys\u00a0have to move where, when. 
And since these systems, unlike Aegis,\u00a0<em>do<\/em>\u00a0use machine learning, they can learn from experience \u2014 which means they continually rewrite their own programming in ways no human mind can follow.<\/p>\n<p>Sure, a well-programmed AI can print a mathematical proof that shows, with impeccable logic, how its proposed solution is the best,\u00a0assuming the information you gave it is correct, one expert told the War College conference. But no human being, not even the AI\u2019s own programmers, possesses the math skills, mental focus, or sheer stamina to double-check hundreds of pages of complex equations. \u201cThe proof that there\u2019s nothing better is a huge search tree that\u2019s so big that no human can look through it,\u201d the expert said.<\/p>\n<p>Developing\u00a0explainable AI\u00a0\u2014 artificial intelligence that lays out its reasoning in terms human users can understand \u2014 is a\u00a0high-priority DARPA project. The Intelligence Community has already had some success in developing analytical software that human analysts can comprehend. But that approach does rule out a lot of cutting-edge machine learning techniques.<\/p>\n<p><strong>Weirder Than Squid<\/strong><\/p>\n<p>Here\u2019s the rub: The whole\u00a0<em>point<\/em>\u00a0of AI is to think of things we humans can\u2019t. Asking AI to restrict its reasoning to what we can understand is a bit like asking Einstein to prove the theory of relativity using only addition, subtraction, and a box of crayons. Even if the AI isn\u2019t necessarily\u00a0<em>smarter<\/em>\u00a0than us \u2014 by whatever measurement of \u201csmart\u201d we use \u2014 it\u2019s definitely\u00a0<em>different<\/em>\u00a0from us, whether it thinks with magnetic charges on silicon chips or some quantum effect, while we think with neurochemical flows between nerve cells. 
The brains of (for example) humans, squid, and spiders are all\u00a0more similar to each other\u00a0than any of them is to an AI.<\/p>\n<p><img loading=\"lazy\" class=\"alignleft size-medium wp-image-36836\" src=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2017\/06\/Screen-Shot-2017-06-02-at-12.22.09-PM-300x162.png\" sizes=\"(max-width: 300px) 100vw, 300px\" srcset=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2017\/06\/Screen-Shot-2017-06-02-at-12.22.09-PM-300x162.png 300w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2017\/06\/Screen-Shot-2017-06-02-at-12.22.09-PM-768x414.png 768w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2017\/06\/Screen-Shot-2017-06-02-at-12.22.09-PM-1024x552.png 1024w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2017\/06\/Screen-Shot-2017-06-02-at-12.22.09-PM.png 1425w\" alt=\"Google image, screencapped from Youtube\" width=\"300\" height=\"162\" \/>Alien minds produce alien solutions. Amazon, for example, organizes its warehouses according to the principle of \u201crandom stow.\u201d While humans would put paper towels on one aisle, ketchup on another, and laptop computers on a third, Amazon\u2019s algorithms instruct the human workers to put incoming deliveries on whatever empty shelf space is nearby: here, towels next to ketchup next to laptops; there, more ketchup, two copies of\u00a0<em>50 Shades of Grey<\/em>, and children\u2019s toys. As each customer\u2019s order comes in, the computer calculates the most efficient route through the warehouse to pick up that specific combination of items. No human mind could keep track of the different items scattered randomly about the shelves, but the computer can, and it tells the humans where to go. 
Counterintuitive as it is, random stow actually saves Amazon time and money compared to a warehousing scheme a human could understand.<\/p>\n<p><img loading=\"lazy\" class=\"size-medium wp-image-57515\" src=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/CMU-Libratus-BrainsVSAIHalfwayPoint_853x480-min-300x169.jpg\" sizes=\"(max-width: 300px) 100vw, 300px\" srcset=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/CMU-Libratus-BrainsVSAIHalfwayPoint_853x480-min-300x169.jpg 300w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/CMU-Libratus-BrainsVSAIHalfwayPoint_853x480-min-768x432.jpg 768w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/CMU-Libratus-BrainsVSAIHalfwayPoint_853x480-min-420x238.jpg 420w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/CMU-Libratus-BrainsVSAIHalfwayPoint_853x480-min-230x130.jpg 230w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/CMU-Libratus-BrainsVSAIHalfwayPoint_853x480-min.jpg 853w\" alt=\"CMU photo\" width=\"300\" height=\"169\" \/><\/p>\n<p>In fact, AI frequently comes up with effective strategies that no human would conceive of and, in many cases, that no human could execute.\u00a0Deep Blue beat Garry Kasparov\u00a0at chess with moves so unexpected he initially accused it of cheating by getting advice from another grandmaster. 
(No cheating \u2014 it was all the algorithm).\u00a0AlphaGo beat Lee Sedol\u00a0with a move that surprised not only him but every Go master watching.\u00a0Libratus beat poker champions\u00a0not only by out-bluffing them, but by using strategies long decried by poker pros \u2014 such as betting wildly varying amounts from game to game or \u201climping\u201d along with bare-minimum bets \u2014 that humans later tried to imitate but often couldn\u2019t pull off.<\/p>\n<p>If you reject an AI\u2019s plans because you can\u2019t understand them, you\u2019re ruling out a host of potential strategies that, while deeply weird, might work. That means you\u2019re likely to be outmaneuvered by an opponent who\u00a0<em>does<\/em>\u00a0trust his AI and its \u201ccrazy enough to work\u201d ideas.<\/p>\n<p>As one participant put it: At what point do you give up on trying to understand the alien mind of the AI and just \u201chit the I-believe button\u201d?<\/p>\n<p><img loading=\"lazy\" class=\"size-full wp-image-57473\" src=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/Screen-Shot-2019-04-25-at-4.09.29-PM.png\" sizes=\"(max-width: 797px) 100vw, 797px\" srcset=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/Screen-Shot-2019-04-25-at-4.09.29-PM.png 797w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/Screen-Shot-2019-04-25-at-4.09.29-PM-300x219.png 300w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2019\/04\/Screen-Shot-2019-04-25-at-4.09.29-PM-768x560.png 768w\" alt=\"Department of Defense graphic\" width=\"797\" height=\"581\" \/><\/p>\n<p><strong>The New Principles of War<\/strong><\/p>\n<p>If you\u00a0<em>do<\/em>\u00a0let the AI take the lead, several conference participants argued, you need to redefine or even abandon some of the traditional \u201cprinciples of war\u201d taught in military academies. 
Now, those principles are really rules of thumb, not a strict checklist for military planners or mathematically provable truths, and different countries use\u00a0different lists. But they do boil down centuries of experience: mass your forces at the decisive point, surprise the enemy when possible, aim for a single and clearly defined objective, keep plans simple to survive miscommunication and the chaos of battle, have a single commander for all forces in the operation, and so on.<\/p>\n<p><img loading=\"lazy\" class=\"size-medium wp-image-30776\" src=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2016\/08\/2753476-300x200.jpg\" sizes=\"(max-width: 300px) 100vw, 300px\" srcset=\"https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2016\/08\/2753476-300x200.jpg 300w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2016\/08\/2753476-768x511.jpg 768w, https:\/\/sites.breakingmedia.com\/uploads\/sites\/3\/2016\/08\/2753476-1024x682.jpg 1024w\" alt=\"Army photo\" width=\"300\" height=\"200\" \/><\/p>\n<p>To start with, the principle of simplicity begins to fade if you\u2019re letting your AI make plans too complex for you to comprehend. As long as there are human soldiers on the battlefield, the\u00a0<em>specific orders<\/em>\u00a0the AI gives them have to be simple enough to understand \u2014 go here, dig in, shoot that \u2014 even if the\u00a0<em>overall plan<\/em>\u00a0is not. But robotic soldiers, including\u00a0aerial drones\u00a0and\u00a0unmanned warships, can remember and execute complex orders without error, so the more machines that fight, the more simplicity becomes obsolete.<\/p>\n<p>The principle of the objective mutates too, for much the same reason. Getting a group of humans to work together requires a single, clear vision of victory they all can understand. Algorithms, however, optimize complex utility functions. 
For example, how many enemies can we kill while minimizing friendly casualties\u00a0<em>and<\/em>\u00a0civilian casualties\u00a0<em>and<\/em>\u00a0collateral damage to infrastructure? If you trust the AI enough, then the human role becomes to input the criteria \u2014 how many American soldiers\u2019 deaths,\u00a0<em>exactly<\/em>, would you accept to save 100 civilian lives? \u2014 and then follow the computer\u2019s plan to get the optimal outcome.<\/p>\n<p>Finally, and perhaps most painfully for military professionals, what becomes of the hallowed principle of unity of command? Even if a single human being has the final authority to approve or disapprove the plans the AI proposes, is that officer really in command if he isn\u2019t capable of understanding those plans? Is the AI in charge? Or the people who set the variables in its utility function? Or the people who programmed it in the first place?<\/p>\n<p>The conference here didn\u2019t come up with anything like a\u00a0final answer. But we\u2019d better at least start asking the right questions before we turn the\u00a0AIs\u00a0on.<\/p>\n<p style=\"text-align: left;\" align=\"center\"><strong>Source:<\/strong>\u00a0<span lang=\"ES\"><a href=\"https:\/\/breakingdefense.com\/2019\/04\/how-ai-could-change-the-art-of-war\/\" target=\"_blank\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/breakingdefense.com\/2019\/04\/how-ai-could-change-the-art-of-war\/&amp;source=gmail&amp;ust=1559832091226000&amp;usg=AFQjCNHwxk7tt6D56ZwlilcyVdOG2ysRzA\" rel=\"noopener noreferrer\">https:\/\/breakingdefense.com<\/a><\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The central point of AI is to think of things that humans cannot. What happens when Artificial Intelligence produces a solution too complex&hellip; 
<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[23,29],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/4010"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4010"}],"version-history":[{"count":0,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/4010\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4010"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4010"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4010"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}