Insight from the intersection of AI and business
Product updates, research, customer stories, and perspectives from the team building AI at S&P Global.
An introduction to mitigating toxicity in LLMs - Pt. 2
In our last post, we introduced toxicity as a challenge when building LLMs, along with several methodologies for mitigating it. In this post, we focus on one of those methodologies, auxiliary tools, and assess some that are publicly available.
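The auxiliary-tool pattern the post describes can be sketched as a separate classifier that screens model output before it reaches the user. The keyword scorer below is a hypothetical stand-in for a real toxicity classifier (such as Detoxify or the Perspective API); the term list and threshold are purely illustrative.

```python
# Sketch of an auxiliary moderation tool: a scorer separate from the LLM
# decides whether a generated response is safe to show. The scorer here is
# a toy keyword heuristic standing in for a trained classifier.

TOXIC_TERMS = {"idiot", "hate", "stupid"}  # illustrative only

def toxicity_score(text: str) -> float:
    """Hypothetical scorer: fraction of words flagged as toxic."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in TOXIC_TERMS for w in words) / len(words)

def moderate(response: str, threshold: float = 0.1) -> str:
    """Return the LLM response, or a safe fallback if it scores too high."""
    if toxicity_score(response) > threshold:
        return "I can't help with that."
    return response
```

In production, `toxicity_score` would call a real classifier, and the threshold would be tuned against labeled data rather than hard-coded.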
An introduction to mitigating toxicity in LLMs
A vital aspect of “productionizing” Large Language Models (LLMs) is ensuring a safe user experience and promoting alignment with societal standards. LLMs excel at generating language and answering user questions, even some of the hardest ones. However, when given a malicious prompt, an LLM may answer with an unsafe response.