# Scientists help artificial intelligence outsmart hackers

*2019-05-31*

Engineers could change the way they train AI. Current methods of securing an algorithm against attacks are slow and difficult.

![](https://www.sciencemag.org/sites/default/files/styles/inline__450w__no_aspect/public/sn-robustAI.jpg?itok=Xim9x4z2)

A hacked message in a streamed song makes Alexa send money to a foreign entity. A self-driving car crashes after a prankster strategically places stickers on a stop sign so the car misinterprets it as a speed limit sign. Fortunately, these haven't happened yet, but hacks like this, sometimes called [adversarial attacks](https://www.sciencemag.org/news/2018/07/turtle-or-rifle-hackers-easily-fool-ais-seeing-wrong-thing), could become commonplace unless artificial intelligence (AI) finds a way to outsmart them. Now, researchers have found a new way to give AI a defensive edge, they reported last week at the International Conference on Learning Representations.

The work could not only protect the public. It also helps reveal why AI, notoriously difficult to understand, falls victim to such attacks in the first place, says Zico Kolter, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, who was not involved in the research.
Because some AIs are too smart for their own good, spotting patterns in images that humans can't, they are vulnerable to those patterns and need to be trained with that in mind, the research suggests.

To identify this vulnerability, researchers created a special set of training data: images that look to us like one thing, but look to AI like another; a picture of a dog, for example, that, on close examination by a computer, has catlike fur. Then the team mislabeled the pictures, calling the dog picture an image of a cat, for example, and trained an algorithm to learn the labels. Once the AI had learned to see dogs with subtle cat features as cats, the team tested it by asking it to recognize fresh, unmodified images. Even though the AI had been trained in this odd way, it could correctly identify actual dogs, cats, and so on nearly half the time. In essence, it had learned to match the subtle features with labels, whatever the obvious features.

The training experiment suggests AIs use two types of features: obvious, macro ones like ears and tails that people recognize, and micro ones that we can only guess at. It further suggests adversarial attacks aren't just confusing an AI with meaningless tweaks to an image. In those tweaks, the AI is smartly seeing traces of something else. An AI might see a stop sign as a speed limit sign, for example, because something about the stickers actually makes it subtly resemble a speed limit sign in a way that humans are too oblivious to comprehend.

Some in the AI field suspected this was the case, but it's good to have a research paper showing it, Kolter says.
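The relabeling setup described above can be sketched with a toy linear model. This is a hypothetical illustration, not the paper's actual models or data: a targeted, FGSM-style perturbation plants subtle "cat" features in an input the model currently reads as "dog", producing exactly the kind of image that would then be relabeled as "cat".

```python
import numpy as np

# Toy two-class linear "classifier" (dog = 0, cat = 1); each row of W
# is a class template. All names and numbers are illustrative stand-ins.
W = np.array([
    np.ones(16),                                 # "dog" template
    np.array([(-1.0) ** i for i in range(16)]),  # "cat" template
])

def predict(x):
    return int(np.argmax(W @ x))

x = W[0] / 4.0                          # an input the model calls "dog"
margin = (W[0] - W[1]) @ x              # how strongly "dog" beats "cat"

# Targeted FGSM-style step: move x along the sign of the gradient of
# (cat score - dog score), just far enough to flip the decision. For a
# linear model the effect of the step is exact.
grad = W[1] - W[0]
eps = 1.1 * margin / np.abs(grad).sum()
x_adv = x + eps * np.sign(grad)

# Per coordinate the change is small (at most eps), yet the model now
# reads the planted "cat" features and flips its prediction.
print(predict(x), predict(x_adv))       # 0 1
```

To a human, `x_adv` differs from `x` by a small, uniform-looking nudge; to the model, it carries decisive "cat" evidence, which is the sense in which the mislabeled training images were not mislabeled at all from the network's point of view.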
Bo Li, a computer scientist at the University of Illinois in Champaign who was not involved in the work, says distinguishing apparent from hidden features is a "useful and good research direction," but that "there is still a long way" to doing so efficiently.

So now that researchers have a better idea of why AI makes such mistakes, can that knowledge be used to help them outsmart adversarial attacks? Andrew Ilyas, a computer scientist at the Massachusetts Institute of Technology (MIT) in Cambridge, and one of the paper's authors, says engineers could change the way they train AI. Current methods of securing an algorithm against attacks are slow and difficult. But if you modify the training data to have only human-obvious features, any algorithm trained on it won't recognize, and can't be fooled by, additional, perhaps subtler, features.

And, indeed, when the team trained an algorithm on images without the subtle features, [their image recognition software was fooled by adversarial attacks only 50% of the time](https://arxiv.org/abs/1905.02175), the researchers reported at the conference and in a preprint paper posted online last week. That compares with a 95% rate of vulnerability when the AI was trained on images with both obvious and subtle patterns.

Overall, the findings suggest an AI's vulnerabilities lie in its training data, not its programming, says Dimitris Tsipras of MIT, a co-author.
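A minimal sketch of that training-data idea, under toy assumptions (a linear model and hand-built "obvious" versus "subtle" feature columns; it does not reproduce the paper's 95%-to-50% figures): deleting the subtle, highly predictive coordinates from the training data forces the classifier onto the obvious cue, and the same worst-case perturbation then flips far fewer predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: coordinate 0 is an "obvious" cue (large but
# noisy); coordinates 1..28 are "subtle" cues (tiny but perfectly
# predictive) -- the kind a small-perturbation attacker can exploit.
n, d = 200, 29
y = rng.integers(0, 2, size=n) * 2.0 - 1.0   # labels in {-1, +1}
X = np.zeros((n, d))
X[:, 0] = y + rng.normal(size=n)             # obvious, noisy feature
X[:, 1:] = 0.05 * y[:, None]                 # subtle, clean features

def train(X, y):
    # minimum-norm least-squares linear classifier
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def attack_success(w, X, y, eps=0.06):
    # worst-case attack in the eps-infinity ball for a linear model:
    # push every coordinate of every image against its label
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    return float(np.mean(np.sign(X_adv @ w) != y))

w_std = train(X, y)          # standard training: leans on subtle cues
X_rob = X.copy()
X_rob[:, 1:] = 0.0           # "robustified" data: subtle cues removed
w_rob = train(X_rob, y)      # forced to rely on the obvious cue

print(attack_success(w_std, X, y), attack_success(w_rob, X, y))
```

On this toy data the standard model is fooled on essentially every image, while the model trained on the stripped data is fooled far less often, mirroring the direction (though not the magnitudes) of the reported result.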
According to Kolter, "One of the things this paper does really nicely is it drives that point home with very clear examples" (like the demonstration that apparently mislabeled training data can still make for successful training) "that make this connection very visceral."

**Source:** [https://www.sciencemag.org](https://www.sciencemag.org/news/2019/05/scientists-help-artificial-intelligence-outsmart-hackers)