Nvidia is the tech giant behind the GPUs that power our games, run our creative suites, and - as of late - play a crucial role in training the generative AI models behind chatbots like ChatGPT. The company has dived deeper into the world of AI with the announcement of new software that could solve a big problem chatbots have - going off the rails and being a little… strange.
The newly announced "NeMo Guardrails" is a piece of software designed to ensure that smart applications powered by large language models (LLMs), like AI chatbots, are "accurate, appropriate, on topic and secure". Essentially, the guardrails are there to weed out inappropriate or inaccurate information generated by the chatbot, stop it from getting to the user, and inform the bot that the specific output was bad. It'll be like an extra layer of accuracy and security - now without the need for user correction.
The open-source software can be used by AI developers to set up three types of boundaries for their AI models: topical, safety, and security guardrails. We'll break down the details of each - and why this sort of software is both a necessity and a liability.
[HEADING=1]What are the guardrails?[/HEADING]
Topical guardrails will prevent the AI bot from straying into topics that aren't related or necessary to the task at hand. In its statement, Nvidia gives the example of a customer service bot declining to answer questions about the weather. If you're asking about the history of energy drinks, you wouldn't want ChatGPT to start talking to you about the stock market. Basically, it keeps everything on topic.
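To give a rough idea of what that looks like in practice, here's a minimal sketch based on the open-source project's public quickstart examples: developers describe the topics the bot should refuse in a simple config, then wrap their LLM with the guardrails runtime. The model choice, the config helper used, and the exact rule syntax are assumptions that may vary between versions, and it assumes an OpenAI API key is already set up.
[CODE=python]
# Minimal sketch of a topical guardrail with NeMo Guardrails (illustrative;
# exact config syntax and helper names may differ between versions).
from nemoguardrails import LLMRails, RailsConfig

# Colang rules: teach the bot to recognise off-topic questions (e.g. the
# weather) and reply with a polite refusal instead of answering them.
colang_content = """
define user ask about weather
  "What's the weather like today?"
  "Will it rain tomorrow?"

define bot refuse off topic
  "Sorry, I can only help with questions about your order."

define flow weather is off topic
  user ask about weather
  bot refuse off topic
"""

# Which underlying LLM to wrap (assumed here to be an OpenAI model).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# The guardrails sit between the user and the LLM: an off-topic request is
# intercepted and answered with the refusal defined above.
response = rails.generate(messages=[{"role": "user", "content": "Will it rain tomorrow?"}])
print(response["content"])
[/CODE]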
This would be useful for big AI chatbots like Microsoft's Bing Chat, which has been known to go off-track at times, and could definitely help us avoid more tantrums and inaccuracies.
The safety guardrail will tackle misinformation and "hallucinations" - yes, hallucinations - and will ensure the AI responds with accurate and appropriate information. This means it'll filter out inappropriate language, reinforce citations of credible sources, and prevent the use of fictitious or illegitimate ones. This is especially useful for ChatGPT, as we've seen many examples across the internet of the bot making up citations when asked.
As for the security guardrails, these will stop the bot from reaching external applications "deemed unsafe" - in other words, any apps or software it hasn't been given explicit permission and purpose to interact with, like a banking app or your personal files. This means you'll get streamlined, accurate, and safe information each time you use the bot.
[HEADING=1]Morality Police[/HEADING]
Nvidia says that virtually any software developer can use NeMo Guardrails, since it's simple to use and works with a broad range of LLM-enabled applications, so we should hopefully start seeing it make its way into more chatbots in the near future.
Not only is this an integral "update" on the AI front, it's also incredibly impressive. Software dedicated to monitoring and correcting models like ChatGPT, governed by strict guidelines from developers, is the best way to keep things in check without having to worry about doing it yourself.
That being said, as there are no firm governing guidelines, we are beholden to the morality and priorities of the developers rather than to any actual standard of user wellbeing. Nvidia, as it stands, seems to have users' safety and protection at the heart of the software, but there is no guarantee those priorities won't change, or that developers using the software won't have different moral guidelines or concerns.