{"id":15219,"date":"2024-08-02T11:22:38","date_gmt":"2024-08-02T14:22:38","guid":{"rendered":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=15219"},"modified":"2024-08-02T11:22:38","modified_gmt":"2024-08-02T14:22:38","slug":"inteligencia-artificial-y-seguridad","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=15219","title":{"rendered":"Artificial intelligence and safety"},"content":{"rendered":"<p>Leaders at artificial intelligence companies claim they are focused on ensuring the safety of their products. For example, they have commissioned research, created \u201ctrust and safety\u201d teams, and even started new companies to help achieve these goals. But these claims are undermined when experts paint a familiar picture of a culture of negligence and secrecy that hides evidence of unsafe practices.<\/p>\n<hr \/>\n<p>It could save us or it could kill us.<\/p>\n<p>That&#8217;s what many of the top technologists in the world believe about the future of artificial intelligence. This is why companies like\u00a0<a href=\"https:\/\/openai.com\/careers\/\" target=\"_blank\" rel=\"noopener\">OpenAI emphasize<\/a>\u00a0their dedication to seemingly conflicting goals: accelerating technological progress as rapidly\u2014but also as safely\u2014as possible.<\/p>\n<p>It&#8217;s a laudable intention, but not one that many of these companies seem to be achieving.<\/p>\n<p>Take OpenAI, for example. The leading AI company in the world believes the best approach to building beneficial technology is to ensure that its employees are\u00a0<a href=\"https:\/\/openai.com\/careers\/\" target=\"_blank\" rel=\"noopener\">\u201cperfectly aligned\u201d<\/a>\u00a0with the organization&#8217;s mission. 
That sounds reasonable, but what does it mean in practice?<\/p>\n<p>A lot of groupthink\u2014and that is dangerous.<\/p>\n<p>As social animals, we naturally form groups or tribes to pursue shared goals. But these groups can grow insular and secretive, distrustful of outsiders and their ideas. Decades of\u00a0<a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/bdm.3960030402\" target=\"_blank\" rel=\"noopener\">psychological research<\/a>\u00a0have shown how groups can stifle dissent by punishing or even casting out dissenters. Before the 1986\u00a0<a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1002\/bdm.3960020304\" target=\"_blank\" rel=\"noopener\">Challenger space shuttle explosion<\/a>, engineers had expressed safety concerns about the rocket boosters in freezing weather. Yet they were overruled by their leadership, who may have felt pressure to avoid delaying the launch.<\/p>\n<blockquote><p>It could save us or it could kill us. That&#8217;s what many of the top technologists in the world believe about the future of artificial intelligence.<\/p><\/blockquote>\n<p>According to a group of AI insiders, something similar is taking place at OpenAI. An\u00a0<a href=\"https:\/\/righttowarn.ai\/\" target=\"_blank\" rel=\"noopener\">open letter<\/a>\u00a0signed by nine current and former employees says the company uses hardball tactics to stifle dissent from workers about its technology. One of the researchers who signed the letter described the company as\u00a0<a href=\"https:\/\/www.nytimes.com\/2024\/06\/04\/technology\/openai-culture-whistleblowers.html\" target=\"_blank\" rel=\"noopener\">\u201crecklessly racing\u201d<\/a>\u00a0for dominance in the field.<\/p>\n<p>It&#8217;s not just happening at OpenAI. 
Earlier this year, an engineer at Microsoft\u00a0<a href=\"https:\/\/apnews.com\/article\/microsoft-whistleblower-copilot-designer-dalle-image-generator-b494180daaeb60fecfcfaead6cb00e13\" target=\"_blank\" rel=\"noopener\">grew concerned<\/a>\u00a0that the company&#8217;s AI tools were generating violent and sexual imagery. He first tried to get the company to pull them off the market, but when that didn&#8217;t work, he went public in a LinkedIn post. Then, he said, Microsoft&#8217;s legal team demanded he delete it. In 2021, former Facebook product manager Frances Haugen\u00a0<a href=\"https:\/\/www.nytimes.com\/2021\/10\/03\/technology\/whistle-blower-facebook-frances-haugen.html\" target=\"_blank\" rel=\"noopener\">revealed internal research<\/a>\u00a0that showed the company knew the algorithms\u2014often referred to as the building blocks of AI\u2014that Instagram used to surface content for young users were exposing teen girls to images that were harmful to their mental health. When asked in\u00a0<a href=\"https:\/\/www.cbsnews.com\/news\/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03\/\" target=\"_blank\" rel=\"noopener\">an interview with \u201c60 Minutes\u201d<\/a>\u00a0why she spoke out, Haugen responded, \u201cPerson after person after person has tackled this inside of Facebook and ground themselves to the ground.\u201d<\/p>\n<p>Leaders at AI companies claim they have a laser focus on ensuring that their products are safe. They have, for example, commissioned research, set up \u201ctrust and safety\u201d teams, and even\u00a0<a href=\"https:\/\/www.cnn.com\/2024\/06\/20\/tech\/openai-ilya-sutskever-safe-super-intelligence-new-company\/index.html\" target=\"_blank\" rel=\"noopener\">started new companies<\/a>\u00a0to help achieve these aims. 
But these claims are undercut when insiders paint a familiar picture of a\u00a0<a href=\"https:\/\/www.nytimes.com\/2024\/06\/04\/technology\/openai-culture-whistleblowers.html\" target=\"_blank\" rel=\"noopener\">culture of negligence and secrecy<\/a>\u00a0that\u2014far from prioritizing safety\u2014instead dismisses warnings and hides evidence about unsafe practices, whether to preserve profits, avoid slowing progress, or simply to spare the feelings of leaders.<\/p>\n<p>So what can these companies do differently?<\/p>\n<p>As a first step, AI companies could ban nondisparagement or confidentiality clauses. The OpenAI whistleblowers asked for that in their open letter and the company says it has already\u00a0<a href=\"https:\/\/x.com\/sama\/status\/1791936857594581428\" target=\"_blank\" rel=\"noopener\">taken such steps<\/a>. But removing explicit threats of punishment isn&#8217;t enough if an insular workplace culture continues to implicitly discourage concerns that might slow progress.<\/p>\n<p>Rather than simply allowing dissent, tech companies could encourage it, putting more options on the table. This could involve, say, beefing up the \u201cbug bounty\u201d programs that tech companies already use to reward employees and customers who identify flaws in their software. Companies could embed a\u00a0<a href=\"https:\/\/link.springer.com\/article\/10.1007\/s42001-020-00083-8\" target=\"_blank\" rel=\"noopener\">\u201cdevil&#8217;s advocate\u201d role<\/a>\u00a0inside software or policy teams that would be charged with opposing consensus positions.<\/p>\n<p>AI companies might also learn from how other highly skilled, mission-focused teams avoid groupthink. 
Military special operations forces\u00a0<a href=\"https:\/\/www.rand.org\/pubs\/research_reports\/RR1058.html\" target=\"_blank\" rel=\"noopener\">prize group cohesion<\/a>\u00a0but recognize that cultivating dissent\u2014from anyone, regardless of rank or role\u2014might prove the difference between life and death. For example, Army doctrine\u2014fundamental principles of military organizations\u2014<a href=\"https:\/\/irp.fas.org\/doddir\/army\/adp3_05.pdf\" target=\"_blank\" rel=\"noopener\" download=\"\">emphasizes (<abbr title=\"Portable Document Format\">PDF<\/abbr>)\u00a0<\/a>that special operations forces must know how to employ small teams and individuals as autonomous actors.<\/p>\n<p>Finally, organizations already working to make AI models more transparent could shed light on their inner workings. Secrecy has been\u00a0<a href=\"https:\/\/sfstandard.com\/2024\/06\/08\/openai-security-guards-secretive-mission-office\/\" target=\"_blank\" rel=\"noopener\">ingrained<\/a>\u00a0in how many AI companies operate;\u00a0<a href=\"https:\/\/www.bloomberg.com\/opinion\/articles\/2024-03-12\/openai-and-sam-altman-can-rebuild-trust-by-being-less-secretive\" target=\"_blank\" rel=\"noopener\">rebuilding public trust<\/a>\u00a0could require pulling back that curtain by, for example, more clearly explaining safety processes or publicly responding to criticism.<\/p>\n<blockquote><p>With AI, the stakes of silencing those who don&#8217;t toe the company line, instead of viewing them as vital sources of mission-critical information, are too high to ignore.<\/p><\/blockquote>\n<p>To be sure, group decisionmaking\u00a0<a href=\"https:\/\/royalsocietypublishing.org\/doi\/pdf\/10.1098\/rsos.170193\" target=\"_blank\" rel=\"noopener\">can benefit (<abbr title=\"Portable Document Format\">PDF<\/abbr>)\u00a0<\/a>from pooling information or overcoming individual biases, but too often it results in overconfidence or conforming to group norms. 
With AI, the stakes of silencing those who don&#8217;t toe the company line, instead of viewing them as vital sources of mission-critical information, are too high to ignore.<\/p>\n<p>It&#8217;s human nature to form tribes\u2014to want to work with and seek support from a tight group of like-minded people. It&#8217;s also admirable, if grandiose, to adopt as one&#8217;s mission nothing less than building tools to tackle humanity&#8217;s greatest challenges. But AI technologies will likely fall short of that lofty goal\u2014rapid yet responsible technological advancement\u2014if their developers fall prey to a fundamental human flaw: refusing to heed hard truths from those who would know.<\/p>\n<p><strong>Source:<\/strong> <a href=\"https:\/\/www.rand.org\/pubs\/commentary\/2024\/07\/ai-companies-say-safety-is-a-priority-its-not.html\" target=\"_blank\" rel=\"noopener\"><em>https:\/\/www.rand.org<\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Leaders at artificial intelligence companies claim they are focused on ensuring the safety of their products. 
For example, they have commissioned research, created&hellip; <\/p>\n","protected":false},"author":1,"featured_media":15220,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[2,23],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/15219"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=15219"}],"version-history":[{"count":1,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/15219\/revisions"}],"predecessor-version":[{"id":15221,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/15219\/revisions\/15221"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/media\/15220"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=15219"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=15219"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=15219"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}