{"id":1129,"date":"2016-06-21T13:21:37","date_gmt":"2016-06-21T16:21:37","guid":{"rendered":"https:\/\/www.nachodelatorre.com.ar\/mosconi\/?p=1129"},"modified":"2016-06-21T13:21:37","modified_gmt":"2016-06-21T16:21:37","slug":"sistemas-de-armas-robots-militares-son-peligrosos","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=1129","title":{"rendered":"Sistemas de armas, robots militares, \u00bfson peligrosos?"},"content":{"rendered":"<div>El debate por el uso de inteligencia artificial en las armas , es mas complejo de lo que parece &#8230;<\/div>\n<div>Recientemente se firm\u00f3 una carta abierta pidiendo la prohibici\u00f3n de las armas letales controladas por las m\u00e1quinas de inteligencia artificial, \u00a0reflejan la preocupaci\u00f3n de cient\u00edficos y \u00a0tecn\u00f3logos, \u00a0un r\u00e1pido progreso en la inteligencia artificial podr\u00eda ser aprovechada para hacer m\u00e1quinas letales m\u00e1s eficientes y menos responsables, tanto en el campo de combate como y fuera de el .<\/div>\n<p><!--more--><\/p>\n<p>An open letter calling for a ban on lethal weapons controlled by artificially intelligent machines was signed last week by thousands of scientists and technologists, reflecting growing concern that swift progress in artificial intelligence could be harnessed to make killing machines more efficient, and less accountable, both on the battlefield and off. But experts are more divided on the issue of robot killing machines than you might expect.<\/p>\n<p>The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by many leading AI researchers as well as prominent scientists and entrepreneurs including Elon Musk, Stephen Hawking, and Steve Wozniak. 
The letter states:<\/p>\n<p>\u201cArtificial Intelligence (AI) technology has reached a point where the deployment of such systems is\u2014practically if not legally\u2014feasible within years not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.\u201d<\/p>\n<p>Rapid advances have indeed been made in artificial intelligence in recent years, especially within the field of machine learning, which involves teaching computers to recognize often complex or subtle patterns in large quantities of data. And this is leading to ethical questions about real-world applications of the technology (see \u201cHow to Make Self-Driving Cars Make Ethical Decisions\u201d).<\/p>\n<p>Meanwhile, military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated.<\/p>\n<p>The issue of automating lethal weapons has been a concern for scientists as well as military and policy experts for some time. In 2012, the U.S. Department of Defense issued a directive banning the development and use of \u201cautonomous and semi-autonomous\u201d weapons for 10 years. Earlier this year the United Nations held a meeting to discuss the issue of lethal automated weapons, and the possibility of such a ban.<\/p>\n<p>But while military drones or robots could well become more automated, some say the idea of fully independent machines capable of carrying out lethal missions without human assistance is more fanciful. With many fundamental challenges still remaining in the field of artificial intelligence, it\u2019s far from clear when the technology needed for fully autonomous weapons might actually arrive.<\/p>\n<p>\u201cWe\u2019re pushing new frontiers in artificial intelligence,\u201d says Patrick Lin, a professor of philosophy at California Polytechnic State University. 
\u201cAnd a lot of people are rightly skeptical that it would ever advance to the point where it has anything called full autonomy. No one is really an expert on predicting the future.\u201d<\/p>\n<p>Lin, who gave evidence at the recent U.N. meeting, adds that the letter does not touch on the complex ethical debate behind the use of automation in weapons systems. \u201cThe letter is useful in raising awareness,\u201d he says, \u201cbut it isn\u2019t so much calling for debate; it\u2019s trying to end the debate, saying \u2018We\u2019ve figured it out and you all need to go along.\u2019\u201d<\/p>\n<p>Stuart Russell, a leading AI researcher and a professor at the University of California, Berkeley, dismisses this idea. \u201cIt\u2019s simply not true that there has been no debate,\u201d he says. \u201cBut it is true that the AI and robotics communities have been mostly blissfully ignorant of this issue, maybe because their professional societies have ignored it.\u201d<\/p>\n<p>One issue of debate, which the letter does acknowledge, is that automated weapons could conceivably help reduce unwanted casualties in some situations, since they would be less prone to error, fatigue, or emotion than human combatants.<\/p>\n<p>Those behind the letter have little time for this argument, however.<\/p>\n<p>Max Tegmark, an MIT physicist and founder member of the Future of Life Institute, which co\u00f6rdinated the letter signing, says the idea of ethical automated weapons is a red herring. \u201cI think it\u2019s rather irrelevant, frankly,\u201d he says. \u201cIt\u2019s missing the big point about what is this going to lead to if one starts this AI arms race. If you make the assumption that only the U.S. 
is going to build these weapons, and the number of conflicts will stay exactly the same, then it would be relevant.\u201d<\/p>\n<p>The Future of Life Institute has also issued a more general warning about the long-term dangers posed by unfettered AI.<\/p>\n<p>\u201cThis is quite a different issue,\u201d Russell says. \u201cAlthough there is a connection, in that if one is worried about losing control over AI systems as they become smarter, maybe it\u2019s not a good idea to turn over our defense systems to them.\u201d<\/p>\n<p>While many AI experts seem to share this broad concern, some see it as a little misplaced. For example, Gary Marcus, a cognitive scientist and artificial intelligence researcher at New York University, has argued that computers do not need to become artificially intelligent in order to pose many other serious risks, to financial markets or air-traffic systems, for example.<\/p>\n<p>Lin says that while the concept of unchecked killer robots is obviously worrying, the issue of automated weapons deserves a more nuanced discussion. \u201cEmotionally, it\u2019s a pretty straightforward case,\u201d says Lin. 
\u201cIntellectually I think they need to do more work.\u201d<\/p>\n<p><strong>Source:<\/strong> <em><a href=\"https:\/\/www.technologyreview.com\/s\/539876\/military-robots-armed-but-how-dangerous\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.technologyreview.com<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The debate over the use of artificial intelligence in weapons is more complex than it seems &#8230; An open letter was recently signed&hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[18,29],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/1129"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1129"}],"version-history":[{"count":0,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/1129\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1129"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1129"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1129"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}