{"id":2416,"date":"2017-10-18T15:52:10","date_gmt":"2017-10-18T18:52:10","guid":{"rendered":"https:\/\/www.nachodelatorre.com.ar\/mosconi\/?p=2416"},"modified":"2017-10-18T15:52:10","modified_gmt":"2017-10-18T18:52:10","slug":"nuevas-grietas-de-la-teoria-abren-la-caja-negra-de-las-redes-neuronales-profundas","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=2416","title":{"rendered":"Nuevas grietas de la Teor\u00eda abren la caja negra de las Redes Neuronales Profundas"},"content":{"rendered":"<p>Aunque m\u00e1quinas conocidas como &#8220;redes neuronales profundas&#8221; han aprendido a conversar, conducir autos, ganar videojuegos y campeonatos Go, so\u00f1ar, pintar cuadros y ayudar a hacer descubrimientos cient\u00edficos, tambi\u00e9n han confundido a sus creadores humanos, que nunca esperaron que los algoritmos de &#8220;aprendizaje profundo&#8221; trabajaran tan bien.<!--more--><\/p>\n<p><span class=\"lede\" data-reactid=\"248\">EVEN AS MACHINES\u00a0<\/span>known as \u201cdeep neural networks\u201d have learned to converse, drive cars,\u00a0beat video games\u00a0and\u00a0Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called \u201cdeep-learning\u201d algorithms to work so well. No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either).<\/p>\n<p>Like a brain, a deep neural network has layers of neurons\u2014artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data\u2014the pixels of a photo of a dog, for instance\u2014up through the layers to neurons associated with the right high-level concepts, such as \u201cdog.\u201d After a deep neural network has \u201clearned\u201d from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. 
<p>The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed \u201cintelligence.\u201d Experts wonder what it is about deep learning that enables generalization\u2014and to what extent brains apprehend reality in the same way.<\/p>\n<figure class=\"image-embed-component\">\n<img src=\"https:\/\/media.wired.com\/photos\/59dbddde0cd98134b30dc460\/master\/w_1000,c_limit\/graph-IL.jpg\" alt=\"\" \/>\n<figcaption class=\"caption-component\"><cite class=\"caption-component__credit\">LUCY READING-IKKANDA\/QUANTA MAGAZINE<\/cite><\/figcaption>\n<\/figure>\n<p>Last month, a <a href=\"https:\/\/www.youtube.com\/watch?v=bLqJHjXihK8&amp;feature=youtu.be\" target=\"_blank\" rel=\"noopener noreferrer\">YouTube video<\/a> of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, <a href=\"http:\/\/www.cs.huji.ac.il\/~tishby\/\" target=\"_blank\" rel=\"noopener noreferrer\">Naftali Tishby<\/a>, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the \u201cinformation bottleneck,\u201d which he and two collaborators <a href=\"https:\/\/arxiv.org\/pdf\/physics\/0004057.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">first described in purely theoretical terms in 1999<\/a>. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts.
Striking new <a href=\"https:\/\/arxiv.org\/abs\/1703.00810\" target=\"_blank\" rel=\"noopener noreferrer\">computer experiments<\/a> by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.<\/p>\n<p>Tishby\u2019s findings have the AI community buzzing. \u201cI believe that the information bottleneck idea could be very important in future deep neural network research,\u201d said <a href=\"https:\/\/research.google.com\/pubs\/104980.html\" target=\"_blank\" rel=\"noopener noreferrer\">Alex Alemi<\/a> of Google Research, who has already <a href=\"https:\/\/arxiv.org\/pdf\/1612.00410.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">developed new approximation methods<\/a> for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve \u201cnot only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,\u201d Alemi said.<\/p>\n<p>Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but <a href=\"https:\/\/as.nyu.edu\/content\/nyu-as\/as\/faculty\/kyle-s-cranmer.html\" target=\"_blank\" rel=\"noopener noreferrer\">Kyle Cranmer<\/a>, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it \u201csomehow smells right.\u201d<\/p>\n<p><a href=\"http:\/\/www.cs.toronto.edu\/~hinton\/\" target=\"_blank\" rel=\"noopener noreferrer\">Geoffrey Hinton<\/a>, a pioneer of deep learning who works at Google and the University of Toronto, emailed Tishby after watching his Berlin talk. \u201cIt\u2019s extremely interesting,\u201d Hinton wrote. \u201cI have to listen to it another 10,000 times to really understand it, but it\u2019s very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle.\u201d<\/p>\n<p>According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you\u2019re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer \u201cis that the most important part of learning is actually forgetting.\u201d<\/p>\n<h3>The Bottleneck<\/h3>\n<p>Tishby began contemplating the information bottleneck around the time that other researchers were first mulling over deep neural networks, though neither concept had been named yet. It was the 1980s, and Tishby was thinking about how good humans are at speech recognition\u2014a major challenge for AI at the time. Tishby realized that the crux of the issue was the question of relevance: What are the most relevant features of a spoken word, and how do we tease these out from the variables that accompany them, such as accents, mumbling and intonation?
In general, when we face the sea of data that is reality, which signals do we keep?<\/p>\n<p>\u201cThis notion of relevant information was mentioned many times in history but never formulated correctly,\u201d Tishby said in an interview last month. \u201cFor many years people thought information theory wasn\u2019t the right way to think about relevance, starting with misconceptions that go all the way to Shannon himself.\u201d<\/p>\n<p>Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract\u2014as 1s and 0s with purely mathematical meaning. Shannon took the view that, as Tishby put it, \u201cinformation is not about semantics.\u201d But, Tishby argued, this isn\u2019t true. Using information theory, he realized, \u201cyou can define \u2018relevant\u2019 in a precise sense.\u201d<\/p>\n<p>Imagine X is a complex data set, like the pixels of a dog photo, and Y is a simpler variable represented by those data, like the word \u201cdog.\u201d You can capture all the \u201crelevant\u201d information in X about Y by compressing X as much as you can without losing the ability to predict Y. In their 1999 paper, Tishby and co-authors <a href=\"https:\/\/research.google.com\/pubs\/author1092.html\" target=\"_blank\" rel=\"noopener noreferrer\">Fernando Pereira<\/a>, now at Google, and <a href=\"https:\/\/www.princeton.edu\/~wbialek\/wbialek.html\" target=\"_blank\" rel=\"noopener noreferrer\">William Bialek<\/a>, now at Princeton University, formulated this as a mathematical optimization problem. It was a fundamental idea with no killer application.<\/p>\n<p>\u201cI\u2019ve been thinking along these lines in various contexts for 30 years,\u201d Tishby said. \u201cMy only luck was that deep neural networks became so important.\u201d<\/p>\n<h3>Eyeballs on Faces on People on Scenes<\/h3>\n<p>Though the concept behind deep neural networks had been kicked around for decades, their performance in tasks like speech and image recognition only took off in the early 2010s, due to improved training regimens and more powerful computer processors. Tishby recognized their potential connection to the information bottleneck principle in 2014 after reading a <a href=\"https:\/\/arxiv.org\/abs\/1410.3831\" target=\"_blank\" rel=\"noopener noreferrer\">surprising paper<\/a> by the physicists <a href=\"https:\/\/biophysics.princeton.edu\/people\/david-j-schwab\" target=\"_blank\" rel=\"noopener noreferrer\">David Schwab<\/a> and <a href=\"http:\/\/physics.bu.edu\/~pankajm\/\" target=\"_blank\" rel=\"noopener noreferrer\">Pankaj Mehta<\/a>.<\/p>\n<p>The <a href=\"https:\/\/www.quantamagazine.org\/deep-learning-relies-on-renormalization-physicists-find-20141204\/\" target=\"_blank\" rel=\"noopener noreferrer\">duo discovered<\/a> that a deep-learning algorithm invented by Hinton called the \u201cdeep belief net\u201d works, in a particular case, exactly like renormalization, a technique used in physics to zoom out on a physical system by coarse-graining over its details and calculating its overall state.<\/p>
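<p>As a concrete picture of what coarse-graining means, here is a minimal Python sketch of a block-spin step on a grid of magnetic spins: each small block of spins is replaced by a single spin carrying only the block\u2019s overall orientation. The grid size and the majority rule are illustrative assumptions; this is not the deep belief net calculation Schwab and Mehta performed.<\/p>\n<pre><code>import numpy as np

rng = np.random.default_rng(3)

# A made-up grid of magnetic spins, each either -1 (down) or +1 (up).
spins = rng.choice([-1, 1], size=(8, 8))

def coarse_grain(grid):
    # One renormalization-style step: replace each 2x2 block of spins by the
    # sign of its sum (a majority vote), halving the resolution of the system.
    n = grid.shape[0] // 2
    block_sums = grid.reshape(n, 2, n, 2).sum(axis=(1, 3))
    return np.where(block_sums >= 0, 1, -1)

zoomed_out = coarse_grain(coarse_grain(spins))   # 8x8 -> 4x4 -> 2x2
print(zoomed_out)
<\/code><\/pre>\n<p>Schwab and Mehta\u2019s result was that a deep belief net ends up doing something analogous to this zooming-out on its own; the snippet above is only meant to show what the coarse-graining operation itself looks like.<\/p>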
<p>When Schwab and Mehta applied the deep belief net to a model of a magnet at its \u201ccritical point,\u201d where the system is fractal, or self-similar at every scale, they found that the network automatically used the renormalization-like procedure to discover the model\u2019s state. It was a stunning indication that, as the biophysicist <a href=\"http:\/\/www.physics.emory.edu\/home\/people\/faculty\/nemenman-ilya.html\" target=\"_blank\" rel=\"noopener noreferrer\">Ilya Nemenman<\/a> said at the time, \u201cextracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.\u201d<\/p>\n<p>The only problem is that, in general, the real world isn\u2019t fractal. \u201cThe natural world is not ears on ears on ears on ears; it\u2019s eyeballs on faces on people on scenes,\u201d Cranmer said. \u201cSo I wouldn\u2019t say [the renormalization procedure] is why deep learning on natural images is working so well.\u201d But Tishby, who at the time was undergoing chemotherapy for pancreatic cancer, realized that both deep learning and the coarse-graining procedure could be encompassed by a broader idea. \u201cThinking about science and about the role of my old ideas was an important part of my healing and recovery,\u201d he said.<\/p>\n<p>In 2015, he and his student Noga Zaslavsky <a href=\"https:\/\/arxiv.org\/abs\/1503.02406\" target=\"_blank\" rel=\"noopener noreferrer\">hypothesized<\/a> that deep learning is an information bottleneck procedure that compresses noisy data as much as possible while preserving information about what the data represent. Tishby and Shwartz-Ziv\u2019s new experiments with deep neural networks reveal how the bottleneck procedure actually plays out. In one case, the researchers used small networks that could be trained to label input data with a 1 or 0 (think \u201cdog\u201d or \u201cno dog\u201d) and gave their 282 neural connections random initial strengths. They then tracked what happened as the networks engaged in deep learning with 3,000 sample input data sets.<\/p>\n<p>The basic algorithm used in the majority of deep-learning procedures to tweak neural connections in response to data is called \u201cstochastic gradient descent\u201d: Each time the training data are fed into the network, a cascade of firing activity sweeps upward through the layers of artificial neurons. When the signal reaches the top layer, the final firing pattern can be compared to the correct label for the image\u20141 or 0, \u201cdog\u201d or \u201cno dog.\u201d Any differences between this firing pattern and the correct pattern are \u201cback-propagated\u201d down the layers, meaning that, like a teacher correcting an exam, the algorithm strengthens or weakens each connection to make the network better at producing the correct output signal.<\/p>
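<p>The sketch below extends the earlier forward-pass snippet with this update step: stochastic gradient descent with back-propagation on a toy two-layer network in Python. The made-up data, the labelling rule, the learning rate and the layer sizes are illustrative assumptions, not the 282-connection networks used in the experiments.<\/p>\n<pre><code>import numpy as np

rng = np.random.default_rng(1)

# Made-up training set: 8 inputs of 4 features each, labelled 1 ('dog') or 0 ('no dog').
X = rng.random((8, 4))
y = (X[:, 0] > 0.5).astype(float).reshape(-1, 1)   # arbitrary stand-in rule for the true label

W1 = rng.normal(size=(4, 6)) * 0.5   # input-to-hidden connection strengths
W2 = rng.normal(size=(6, 1)) * 0.5   # hidden-to-output connection strengths

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    i = rng.integers(len(X))          # 'stochastic': one randomly chosen example per step
    x, target = X[i:i + 1], y[i:i + 1]

    h = sigmoid(x @ W1)               # firing activity sweeps upward through the layers
    out = sigmoid(h @ W2)             # final firing pattern, compared with the correct label

    # The difference is 'back-propagated' down the layers: each connection is
    # strengthened or weakened a little to reduce the error (gradient descent).
    d_out = (out - target) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * x.T @ d_h

predictions = sigmoid(sigmoid(X @ W1) @ W2) > 0.5
print('training accuracy:', np.mean(predictions == y))   # usually 1.0 on this tiny set
<\/code><\/pre>\n<p>In practice, deep-learning frameworks compute these gradients automatically; the hand-written update above is only meant to expose the mechanism described in the paragraph before it.<\/p>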
<p>Over the course of training, common patterns in the training data become reflected in the strengths of the connections, and the network becomes expert at correctly labeling the data, such as by recognizing a dog, a word, or a 1.<\/p>\n<p>In their experiments, Tishby and Shwartz-Ziv tracked how much information each layer of a deep neural network retained about the input data and how much information each one retained about the output label. The scientists found that, layer by layer, the networks converged to the information bottleneck theoretical bound: a theoretical limit derived in Tishby, Pereira and Bialek\u2019s original paper that represents the absolute best the system can do at extracting relevant information. At the bound, the network has compressed the input as much as possible without sacrificing the ability to accurately predict its label.<\/p>\n<p>Tishby and Shwartz-Ziv also made the intriguing discovery that deep learning proceeds in two phases: a short \u201cfitting\u201d phase, during which the network learns to label its training data, and a much longer \u201ccompression\u201d phase, during which it becomes good at generalization, as measured by its performance at labeling new test data.<\/p>\n<p>As a deep neural network tweaks its connections by stochastic gradient descent, at first the number of bits it stores about the input data stays roughly constant or increases slightly, as connections adjust to encode patterns in the input and the network gets good at fitting labels to it. Some experts have compared this phase to memorization.<\/p>\n<p>Then learning switches to the compression phase. The network starts to shed information about the input data, keeping track of only the strongest features\u2014those correlations that are most relevant to the output label. This happens because, in each iteration of stochastic gradient descent, more or less accidental correlations in the training data tell the network to do different things, dialing the strengths of its neural connections up and down in a <a href=\"https:\/\/www.quantamagazine.org\/a-unified-theory-of-randomness-20160802\/\" target=\"_blank\" rel=\"noopener noreferrer\">random walk<\/a>. This randomization is effectively the same as compressing the system\u2019s representation of the input data. As an example, some photos of dogs might have houses in the background, while others don\u2019t. As a network cycles through these training photos, it might \u201cforget\u201d the correlation between houses and dogs in some photos as other photos counteract it. It\u2019s this forgetting of specifics, Tishby and Shwartz-Ziv argue, that enables the system to form general concepts. Indeed, their experiments revealed that deep neural networks ramp up their generalization performance during the compression phase, becoming better at labeling test data. (A deep neural network trained to recognize dogs in photos might be tested on new photos that may or may not include dogs, for instance.)<\/p>\n<p>It remains to be seen whether the information bottleneck governs all deep-learning regimes, or whether there are other routes to generalization besides compression.<\/p>
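<p>The quantities tracked in the experiments described above are the mutual information I(X;T) between the input X and a layer\u2019s activity T, and I(T;Y) between that activity and the label Y; in the 1999 formulation, the best representation is the one that makes I(X;T) as small as possible while keeping I(T;Y) as large as possible. The Python sketch below shows one way such quantities can be estimated from samples, by discretizing a layer\u2019s activations into bins and counting, which is broadly the approach taken in the paper. The fake activations, the bin grid and the helper function are illustrative assumptions.<\/p>\n<pre><code>import numpy as np
from collections import Counter

def mutual_information(a, b):
    # Plug-in estimate of I(A;B) in bits from paired samples of two discrete variables.
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * np.log2((c / n) / ((pa[u] / n) * (pb[v] / n)))
               for (u, v), c in pab.items())

rng = np.random.default_rng(2)

# Illustrative stand-ins: labels for 1,000 inputs and one hidden layer's activity,
# discretized into bins so that counting becomes possible.
labels = rng.integers(0, 2, size=1000)                         # Y: 1 'dog', 0 'no dog'
activity = labels[:, None] + 0.5 * rng.normal(size=(1000, 3))  # fake layer activity, correlated with Y
binned = [tuple(row) for row in np.digitize(activity, bins=np.linspace(-2.0, 3.0, 30))]

print('I(T;Y) in bits:', mutual_information(binned, list(labels)))   # at most H(Y) = 1 bit here
<\/code><\/pre>\n<p>In the actual experiments, an estimate of this kind is computed for every layer at many points during training, which is what produces the layer-by-layer trajectories toward the bound described above.<\/p>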
<p>Some AI experts see Tishby\u2019s idea as one of many important theoretical insights about deep learning to have emerged recently. <a href=\"http:\/\/www.people.fas.harvard.edu\/~asaxe\/\" target=\"_blank\" rel=\"noopener noreferrer\">Andrew Saxe<\/a>, an AI researcher and theoretical neuroscientist at Harvard University, noted that certain very large deep neural networks don\u2019t seem to need a drawn-out compression phase in order to generalize well. Instead, researchers program in something called early stopping, which cuts training short to prevent the network from encoding too many correlations in the first place.<\/p>\n<p>Tishby argues that the network models analyzed by Saxe and his colleagues differ from standard deep neural network architectures, but that nonetheless, the information bottleneck theoretical bound defines these networks\u2019 generalization performance better than other methods. Questions about whether the bottleneck holds up for larger neural networks are partly addressed by Tishby and Shwartz-Ziv\u2019s most recent experiments, not included in their preliminary paper, in which they train much larger, 330,000-connection deep neural networks to recognize handwritten digits in the 60,000-image <a href=\"http:\/\/yann.lecun.com\/exdb\/mnist\/\" target=\"_blank\" rel=\"noopener noreferrer\">Modified National Institute of Standards and Technology database<\/a>, a well-known benchmark for gauging the performance of deep-learning algorithms. The scientists saw the same convergence of the networks to the information bottleneck theoretical bound; they also observed the two distinct phases of deep learning, separated by an even sharper transition than in the smaller networks. \u201cI\u2019m completely convinced now that this is a general phenomenon,\u201d Tishby said.<\/p>\n<h3>Humans and Machines<\/h3>\n<p>The mystery of how brains sift signals from our senses and elevate them to the level of our conscious awareness drove much of the early interest in deep neural networks among AI pioneers, who hoped to reverse-engineer the brain\u2019s learning rules. AI practitioners have since largely abandoned that path in the mad dash for technological progress, instead slapping on bells and whistles that boost performance with little regard for biological plausibility. Still, as their thinking machines achieve ever greater feats\u2014even stoking fears that <a href=\"https:\/\/www.quantamagazine.org\/artificial-intelligence-aligned-with-human-values-qa-with-stuart-russell-20150421\/\" target=\"_blank\" rel=\"noopener noreferrer\">AI could someday pose an existential threat<\/a>\u2014many researchers hope these explorations will uncover general insights about learning and intelligence.<\/p>\n<p><a href=\"http:\/\/cims.nyu.edu\/~brenden\/\" target=\"_blank\" rel=\"noopener noreferrer\">Brenden Lake<\/a>, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby\u2019s findings represent \u201can important step towards opening the black box of neural networks,\u201d but he stressed that the brain represents a much bigger, blacker black box.
Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.<\/p>\n<p>For instance, Lake said the fitting and compression phases that Tishby identified don\u2019t seem to have analogues in the way children learn handwritten characters, which he studies. Children don\u2019t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they\u2019re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example. <a href=\"http:\/\/cims.nyu.edu\/~brenden\/LakeEtAl2015Science.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Lake and his colleagues\u2019 models<\/a> suggest the brain may deconstruct the new letter into a series of strokes\u2014previously existing mental constructs\u2014allowing the conception of the letter to be tacked onto an edifice of prior knowledge. \u201cRather than thinking of an image of a letter as a pattern of pixels and learning the concept as mapping those features\u201d as in standard machine-learning algorithms, Lake explained, \u201cinstead I aim to build a simple causal model of the letter,\u201d a shorter path to generalization.<\/p>\n<p>Meanwhile, both real and artificial neural networks stumble on problems in which every detail matters and minute differences can throw off the whole result. Most people can\u2019t quickly multiply two large numbers in their heads, for instance. \u201cWe have a long class of problems like this, logical problems that are very sensitive to changes in one variable,\u201d Tishby said. \u201cClassifiability, discrete problems, cryptographic problems. I don\u2019t think deep learning will ever help me break cryptographic codes.\u201d<\/p>\n<p>Generalizing\u2014traversing the information bottleneck, perhaps\u2014means leaving some details behind. This isn\u2019t so good for doing algebra on the fly, but that\u2019s not a brain\u2019s main business.
We\u2019re looking for familiar faces in the crowd, order in chaos, salient signals in a noisy world.<\/p>\n<p><strong>Source: <\/strong><em><a href=\"https:\/\/www.wired.com\/story\/new-theory-deep-learning\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/www.wired.com<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Even as machines known as &#8220;deep neural networks&#8221; have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make&hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[23,29],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/2416"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2416"}],"version-history":[{"count":0,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/2416\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2416"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2416"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2416"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}