Deploying Trustworthy LLMs

Krishnaram Kenthapadi

Chief AI Officer
Fiddler AI

Generative AI models and applications are being rapidly deployed across many industries, but their use raises serious ethical and social concerns, including lack of interpretability, bias and discrimination, privacy risks, lack of model robustness, fake and misleading content, copyright implications, plagiarism, and the environmental impact of training and inference. In this talk, Dr. Kenthapadi and his team first motivate the need to adopt responsible AI principles when developing and deploying large language models (LLMs) and other generative AI models, and provide a roadmap for thinking about responsible AI for generative AI in practice. Focusing on a real-world LLM use case, namely Fiddler Auditor (https://github.com/fiddler-labs/fiddler-auditor), an open-source toolkit for evaluating the robustness of LLMs, they present practical guidelines for applying responsible AI techniques effectively and discuss lessons learned from deploying responsible AI approaches for generative AI applications in practice. Through real-world generative AI use cases, lessons learned, and best practices, this talk will enable ML practitioners to build more reliable and trustworthy generative AI applications.


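The robustness evaluation described above, of the kind Fiddler Auditor performs, can be illustrated with a minimal sketch: perturb a prompt, query the model with each variant, and check whether the outputs stay close to the baseline response. This is not Fiddler Auditor's actual API; the synonym table, `toy_model`, and similarity threshold below are all illustrative assumptions.

```python
import difflib

def perturb(prompt: str) -> list[str]:
    """Generate simple word-level perturbations of a prompt.

    Stand-in for the paraphrase-style perturbations a robustness
    toolkit would apply; this synonym table is a hypothetical example.
    """
    swaps = {"summarize": "condense", "briefly": "concisely", "explain": "describe"}
    return [prompt.replace(old, new) for old, new in swaps.items() if old in prompt]

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1] between two responses."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def robustness_report(model, prompt: str, threshold: float = 0.8):
    """Compare the model's response to each perturbed prompt against the
    baseline response; flag variants whose output drifts below threshold."""
    baseline = model(prompt)
    return [
        (variant, score, score >= threshold)
        for variant in perturb(prompt)
        for score in (similarity(baseline, model(variant)),)
    ]

def toy_model(prompt: str) -> str:
    """Deterministic stand-in for an LLM call, for demonstration only."""
    if "explain" in prompt or "describe" in prompt:
        return "Machine learning fits models to data."
    return "N/A"

if __name__ == "__main__":
    for variant, score, passed in robustness_report(
        toy_model, "Please explain machine learning briefly."
    ):
        print(f"pass={passed} score={score:.2f} prompt={variant!r}")
```

In a real audit the `model` callable would wrap an LLM API and the similarity measure would typically be an embedding-based semantic distance rather than string matching, but the pass/fail structure of the report is the same.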