{"id":667,"date":"2015-11-10T13:37:50","date_gmt":"2015-11-10T16:37:50","guid":{"rendered":"https:\/\/www.nachodelatorre.com.ar\/mosconi\/?p=667"},"modified":"2015-11-10T13:37:50","modified_gmt":"2015-11-10T16:37:50","slug":"turning-image-into-sound-may-let-blind-vets-see","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=667","title":{"rendered":"Turning Image into Sound May Let Blind Vets \u2018See\u2019"},"content":{"rendered":"<p><img loading=\"lazy\" class=\" alignright\" src=\"http:\/\/defensetech.org\/wp-content\/uploads\/2015\/11\/glasses-kit-600x400-490x327.jpeg\" alt=\"Researchers at the California Institute of Technology are using commercial technology to decode images into sound -- work that may someday help blinded veterans. (Photo courtesy CalTech)\" width=\"184\" height=\"123\" \/>Researchers at the California Institute of Technology are using commercial technology to decode images into sound \u2014 work that may someday help blinded veterans. (Photo courtesy CalTech)<\/p>\n<p>Research at the California Institute of Technology into the brain\u2019s ability to intuitively interpret sound as form could prove beneficial to blinded veterans, enabling them to \u201csee\u201d basic shapes using commercially available technology.<!--more--><\/p>\n<p>Though the concept, and even the technology, enabling the blind to decode sound as images has been around for two decades, it has long been a given that it takes a great deal of training for someone to learn how to recognize what the sounds are showing them, according to researcher Noelle Stiles, who co-authored a paper on the development with principal investigator Professor Shinsuke Shimojo.<\/p>\n<p>\u201cIt is difficult to interpret sound off of a sensory substitution device. 
It takes effort. But we found that in some cases it is intuitive in a surprising way,\u201d Stiles told DefenseTech. \u201cIn everyone, there are connections across the senses. If we tap into that cross modality, it makes it easier to interpret some stimuli with this device.\u201d<\/p>\n<p>Stiles and Shimojo\u2019s research is not funded by the VA, which didn\u2019t respond to a request for comment. Tom Zampieri of the Blinded Veterans Association said he is unaware of the department putting any funds into the work.<br \/>\n<a href=\"http:\/\/defensetech.org\/wp-content\/uploads\/2015\/11\/shimojo_glasses-600x400.jpeg\" target=\"_blank\" rel=\"noopener noreferrer\"><img loading=\"lazy\" class=\"alignright\" src=\"http:\/\/defensetech.org\/wp-content\/uploads\/2015\/11\/shimojo_glasses-600x400-490x327.jpeg\" alt=\"Professor Shinsuke Shimojo of the California Institute of Technology shows some of the products his team is experimenting with. (Photo courtesy CalTech)\" width=\"270\" height=\"180\" border=\"0\" \/><\/a>Professor Shinsuke Shimojo of the California Institute of Technology shows some of the products his team is experimenting with. (Photo courtesy CalTech)<\/p>\n<p>\u201cI also sit on the [Defense Department\u2019s] Telemedicine &amp; Technology Research Center peer-reviewed DoD Vision Trauma Research Programs funding and know that of the 32 grants [we reviewed] last May, none included this kind of research,\u201d Zampieri said. 
\u201cSo while I would hope both DoD and VA research would be investing in neurosensory vision research in technology advances, it seems to be mostly done outside of the system.\u201d<\/p>\n<p>The equipment used by Stiles is all off-the-shelf: a portable computer running Windows software, commercially available \u201cspyglasses\u201d with a miniature camera that streams a live feed to the computer, and headphones.<\/p>\n<p>The live feed allows her to follow the process on screen as the software converts pixels into sound, with brightness and location converted to pitch and volume.<\/p>\n<p>The computer software \u2014 available for free download \u2014 translates images to sound. The sounds are sent back to the user through the headphones, she said.<a href=\"http:\/\/defensetech.org\/wp-content\/uploads\/2015\/11\/translation-figure-2.jpeg\" target=\"_blank\" rel=\"noopener noreferrer\"><img loading=\"lazy\" class=\"alignright\" src=\"http:\/\/defensetech.org\/wp-content\/uploads\/2015\/11\/translation-figure-2.jpeg\" alt=\"(Image courtesy CalTech)\" width=\"399\" height=\"421\" border=\"0\" \/><\/a><\/p>\n<p>The idea and even the technology to enable some sightless people to \u201csee\u201d using sound have been around for years, but an apparent breakthrough at CalTech indicates that, to some extent, the brain\u2019s ability to interpret sound as form may be intuitive.<\/p>\n<p>Peter Meijer developed the technology, called The vOICe \u2014 with OIC meaning \u201cOh, I see\u201d \u2014 about 20 years ago as a way to preserve pictorial information that the blind could mentally decode through sound.<\/p>\n<p>\u201cBut this design goal does not necessarily yield intuitive results,\u201d he said. 
\u201cThe main thing emerging from the recent work of Noelle Stiles and Shinsuke Shimojo is that, at least for relatively simple visual patterns, the mapping is intuitive, matching natural expectations of the brain that exist even before people start training with The vOICe.\u201d<\/p>\n<p>Stiles said much research and work remain before the technology matures and the brain\u2019s mapping capabilities are understood well enough for a blind person to throw on a pair of glasses and headphones and see complicated images.<\/p>\n<p>But in limited, controlled settings a person could \u2014 without a great deal of training \u2014 learn to recognize items such as a key or a door.<\/p>\n<p><strong>Source:<\/strong>\u00a0<em><a href=\"http:\/\/defensetech.org\/2015\/11\/05\/turning-image-into-sound-may-let-blind-vets-see\/\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/defensetech.org<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers at the California Institute of Technology are using commercial technology to decode images into sound \u2014 work that may someday help blinded veterans. 
(Photo&hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[2,29],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/667"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=667"}],"version-history":[{"count":0,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/667\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=667"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=667"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=667"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}