Responsible AI in the Era of Generative AI

About

Large Language Models (LLMs) have shown remarkable proficiency at Natural Language Processing (NLP) tasks, significantly reducing time-to-market compared to traditional NLP pipelines. Once deployed, however, LLM applications face challenges around hallucinations, safety, security, and interpretability.

With many countries recently introducing guidelines on the responsible use of AI applications, it is imperative to understand how to build and deploy LLM applications responsibly. This hands-on session delves into these critical concepts, offering insights into developing and deploying LLMs alongside implementing the guardrails essential for their responsible use.

Key Takeaways:

  • Understand different guardrails required to deploy LLM apps into production
  • Understand how to detect and mitigate risks using guardrails
  • Implement guardrails using open-source libraries and frameworks
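
For illustration, a guardrail of this kind can be sketched as a plain-Python input/output filter. This is a minimal sketch under stated assumptions: the patterns, function names, and policy below are illustrative inventions, not the API of any particular guardrails framework.

```python
import re
from dataclasses import dataclass

# Illustrative detection rules (assumptions, not exhaustive):
# a real deployment would use far more robust PII detectors.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
BLOCKED_TOPICS = {"credit card dump", "build a weapon"}  # hypothetical policy

@dataclass
class GuardrailResult:
    allowed: bool        # whether the prompt may proceed to the LLM
    reasons: list        # machine-readable reasons for a block

def check_input(prompt: str) -> GuardrailResult:
    """Screen a user prompt before it reaches the LLM."""
    reasons = []
    if EMAIL_RE.search(prompt) or PHONE_RE.search(prompt):
        reasons.append("pii_detected")
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        reasons.append("blocked_topic")
    return GuardrailResult(allowed=not reasons, reasons=reasons)

def redact_output(text: str) -> str:
    """Mask any email addresses an LLM response might leak."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)
```

Open-source frameworks wrap the same idea, input checks before the model call and output checks after it, in configurable, composable validators rather than hand-written regexes.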

