{"id":12273,"date":"2023-05-12T09:03:29","date_gmt":"2023-05-12T12:03:29","guid":{"rendered":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=12273"},"modified":"2023-05-12T09:03:29","modified_gmt":"2023-05-12T12:03:29","slug":"nvidia-ayuda-a-las-empresas-a-guiar-y-controlar-las-respuestas-de-ia","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=12273","title":{"rendered":"Nvidia helps enterprises guide and control AI responses"},"content":{"rendered":"<p>A primary challenge for AI and large language models (LLMs) overall is the risk that a user gets an inappropriate or inaccurate response. Nvidia understands that need well and has released the new open-source NeMo Guardrails framework to help address it.<\/p>\n<hr \/>\n<p>A primary challenge for generative AI and\u00a0<a href=\"https:\/\/venturebeat.com\/ai\/whats-next-in-large-language-model-llm-research-heres-whats-coming-down-the-ml-pike\/\" target=\"_blank\" rel=\"noopener\">large language models (LLMs)<\/a>\u00a0overall is the risk that a user can get an inappropriate or inaccurate response.<\/p>\n<p>The need to safeguard organizations and their users is understood well by\u00a0<a href=\"https:\/\/venturebeat.com\/ai\/speech-ai-supercomputing-cloud-gpus-llms-generative-ai-nvidia-next-big-moves\/\" target=\"_blank\" rel=\"noopener\">Nvidia<\/a>, which today released the new\u00a0<a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/04\/25\/ai-chatbot-guardrails-nemo\/?preview_id=63585\" target=\"_blank\" rel=\"noreferrer noopener\">NeMo Guardrails<\/a>\u00a0open-source framework to help solve the challenge. The NeMo Guardrails project provides a way for organizations building and deploying LLMs for different use cases, including chatbots, to make sure responses stay on track. 
The guardrails provide a set of controls, defined with a new policy language, that enforce limits to ensure AI responses stay topical and safe and do not introduce security risks.<\/p>\n<blockquote><p>\u201cWe think that every enterprise will be able to take advantage of generative AI to support their businesses,\u201d Jonathan Cohen, vice president of applied research at Nvidia, said during a press and analyst briefing. \u201cBut in order to use these models in production, it\u2019s important that they\u2019re deployed in a way that is safe and secure.\u201d<\/p><\/blockquote>\n<p id=\"h-why-guardrails-matter-for-llms\" class=\"wp-block-heading\"><strong>Why guardrails matter for LLMs<\/strong><\/p>\n<p>Cohen explained that a guardrail is a guide that helps keep the conversation between a human and an AI on track.<\/p>\n<p>As Nvidia thinks about AI guardrails, there are three primary categories of specific need. The first category is topical guardrails, which are all about making sure that an AI response stays on topic. Topical guardrails are also about making sure that the response remains in the correct tone.<\/p>\n<p>Safety guardrails are the second primary category and are designed to make sure that responses are accurate and fact-checked. Responses also need to be checked to ensure they are ethical and don\u2019t include any sort of toxic content or misinformation. Cohen pointed to the general concept of AI \u201challucinations\u201d as a key reason safety guardrails are needed. With an AI hallucination, an LLM generates an incorrect response when it doesn\u2019t have the correct information in its knowledge base.<\/p>\n<p>The third category of guardrails where Nvidia sees a need is security. 
Cohen commented that as LLMs are allowed to connect to third-party APIs and applications, they can become an attractive\u00a0<a href=\"https:\/\/venturebeat.com\/security\/how-prompt-injection-can-hijack-autonomous-ai-agents-like-auto-gpt\/\" target=\"_blank\" rel=\"noopener\">attack surface<\/a>\u00a0for cybersecurity threats.<\/p>\n<p>\u201cWhenever you allow a language model to actually execute some action in the world, you want to monitor what requests are being sent to that language model,\u201d Cohen said.<\/p>\n<p id=\"h-how-nemo-guardrails-works\" class=\"wp-block-heading\"><strong>How NeMo Guardrails works<\/strong><\/p>\n<p>With NeMo Guardrails, Nvidia is adding another layer to the stack of tools and models for organizations to consider when deploying AI-powered applications.<\/p>\n<p>The Guardrails framework is code deployed between the user and an LLM-enabled application. NeMo Guardrails can work directly with an LLM or with LangChain. Cohen noted that many modern AI applications use the open-source\u00a0<a href=\"https:\/\/github.com\/hwchase17\/langchain\" target=\"_blank\" rel=\"noreferrer noopener\">LangChain<\/a>\u00a0framework to help build applications that chain together different components from LLMs.<\/p>\n<p>Cohen explained that NeMo Guardrails monitors conversations both to and from the LLM-powered application with a sophisticated contextual dialogue engine. The engine tracks the state of the conversation and provides a programmable way for developers to implement guardrails.<\/p>\n<p>The programmable nature of NeMo Guardrails is enabled by Colang, the new policy language that Nvidia has also created. Cohen said that Colang is a domain-specific language for describing conversational flows.<\/p>\n<p>\u201cColang source code reads very much like natural language,\u201d Cohen said. 
\u201cIt\u2019s a very easy to use tool, it\u2019s very powerful and it lets you essentially script the language model in something that looks almost like English.\u201d<\/p>\n<p>At launch, Nvidia is providing a set of pre-built templates for common policies to implement topical, safety and security guardrails. The technology is freely available as open source, and Nvidia will also provide commercial support for enterprises as part of the<a href=\"https:\/\/venturebeat.com\/ai\/nvidia-ai-enterprise-3-0-released-today-adds-new-application-workflows-and-partnership-with-deutsche-bank\/\" target=\"_blank\" rel=\"noopener\">\u00a0Nvidia AI Enterprise<\/a>\u00a0suite of software tools.<\/p>\n<p>\u201cOur goal really is to enable the ecosystem of large language models to evolve in a safe, effective and useful manner,\u201d Cohen said. \u201cIt\u2019s difficult to use language models if you\u2019re afraid of what they might say, and so I think guardrails solve an important problem.\u201d<\/p>\n<p><strong>Source:<\/strong> <a href=\"https:\/\/venturebeat.com\/ai\/nvidia-helps-enterprises-guide-and-control-ai-responses-with-nemo-guardrails\/amp\/\" target=\"_blank\" rel=\"noopener\"><em>https:\/\/venturebeat.com<\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Un desaf\u00edo principal para la IA y los modelos de lenguaje extenso (LLM) en general, es el riesgo de que un usuario obtenga una 
respuesta&hellip; <\/p>\n","protected":false},"author":1,"featured_media":12274,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[37,23],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/12273"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=12273"}],"version-history":[{"count":1,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/12273\/revisions"}],"predecessor-version":[{"id":12275,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/12273\/revisions\/12275"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/media\/12274"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=12273"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=12273"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=12273"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}