Building GenAI Applications using RAG

10 AUGUST 2024 | 09:30AM - 05:30PM

About the workshop

Overview 

In today's rapidly evolving digital landscape, Retrieval-Augmented Generation (RAG) has emerged as a transformative technology, permeating various industries and shaping the future of artificial intelligence. RAG's ability to seamlessly integrate retrieval and generation capabilities has unlocked unprecedented possibilities, revolutionizing how we approach problem-solving, decision-making, and knowledge creation.

RAG is an AI framework that enhances the accuracy and reliability of large language models (LLMs) by allowing them to retrieve relevant information from external knowledge sources before generating a response. This helps ground the LLM's outputs in factual data, reducing the risk of hallucinating incorrect or misleading information.
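The core loop is straightforward: retrieve the passages most relevant to a question, then hand them to the LLM as grounding context. A minimal sketch of that flow in plain Python is shown below; the toy keyword retriever and the hypothetical generate() helper stand in for the embedding model, vector database, and LLM a real system would use.

    # Toy RAG loop showing the control flow only. Retrieval here is naive keyword
    # overlap over an in-memory list; generate() is a hypothetical stand-in for a
    # real LLM call. A production system would use embeddings, a vector database
    # and an LLM API instead.

    DOCS = [
        "RAG retrieves relevant passages before the LLM generates an answer.",
        "Vector databases store embeddings for fast similarity search.",
        "Tokenization splits text into the units a language model actually sees.",
    ]

    def retrieve(question: str, k: int = 2) -> list[str]:
        query_words = set(question.lower().split())
        scored = [(len(query_words & set(doc.lower().split())), doc) for doc in DOCS]
        return [doc for score, doc in sorted(scored, reverse=True)[:k] if score > 0]

    def answer_with_rag(question: str) -> str:
        context = "\n".join(retrieve(question))                 # 1. retrieve grounding facts
        prompt = (
            f"Answer using only this context:\n{context}\n\n"   # 2. assemble the prompt
            f"Question: {question}"
        )
        return generate(prompt)                                 # 3. hypothetical LLM call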

Big Business Benefits of RAG

  • Access to Reliable Information: RAG ensures LLMs access current, credible facts, allowing users to verify sources.
  • Dynamic Data Integration: RAG integrates new data seamlessly, reducing the need for frequent retraining.
  • Cost Efficiency: RAG lowers computational and financial burdens compared to constant retraining.
  • Improved Response Accuracy: By grounding answers in retrieved evidence, RAG helps LLMs recognize when they need more detail, leading to more accurate responses.

Welcome to the "Building GenAI Applications Using RAG" workshop, a comprehensive journey from absolute beginner to advanced RAG application developer. Throughout this immersive experience, participants will progress from foundational concepts to building sophisticated RAG systems, all driven by hands-on learning. 

The workshop begins by demystifying the evolution of Generative AI and clarifying key terms, guiding even novices through the complex landscape. Participants will explore various GenAI approaches, equipping them with the knowledge to make informed decisions for their specific business applications. From there, attendees dive into the heart of RAG, starting with tokenization and advancing to building practical RAG applications using Langchain. Each step is hands-on, ensuring tangible outcomes and solid takeaways. As the day unfolds, participants master advanced retrieval strategies, query expansion, and evaluation techniques, honing their skills for real-world application. Bonus topics on super-advanced concepts await, time permitting. 

Through this workshop, participants emerge with the confidence and capability to navigate the complexities of Generative AI, from theory to application, and get started on a transformative journey toward becoming confident RAG developers.

Prerequisites: Basic knowledge of Python; fundamentals of the Transformer architecture (nice to have)

Instructor

Arun Prakash Asokan

Associate Director, Data Science

Modules

Getting Started with Generative AI

  • The Evolution of Gen AI. Let's get the terms right!
  • Gen AI Recap and Big Moments
  • Traditional Style vs. Gen AI Style
  • Secret Sauce behind the Magic of Gen AI
  • Transformers architecture: an intuitive perspective & its biggest innovations

Multiple GenAI Approaches for Real-World Problems

  • Build LLM vs. Full Fine-Tuning vs. Partial Fine-Tuning vs. Prompt Engineering
  • Sneak peek into each approach
  • Guidelines to choose the right Gen AI approach for your business application
  • Comparative study and summary
  • Practical challenges in using Gen AI at enterprises

Tokenization

  • Practical understanding of tokens
  • Impact of tokens and why they matter
  • Let's understand the world of Tokenization (Hands-on driven learning)
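As a taste of the hands-on portion, a few lines with a BPE tokenizer make tokens concrete. The tiktoken library below is only an illustrative choice; the workshop does not prescribe a specific tokenizer.

    # Illustrative only: the workshop does not prescribe a tokenizer; tiktoken
    # (a BPE tokenizer library) is used here as one common example.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    text = "Retrieval-Augmented Generation grounds LLM answers in your own data."
    token_ids = enc.encode(text)

    print(len(token_ids), "tokens")                  # token count drives context limits and cost
    print(token_ids[:8])                             # integer token IDs
    print([enc.decode([t]) for t in token_ids[:8]])  # the text pieces behind those IDs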

Embeddings & Vector DB

  • World of Embeddings
  • Let's play with Embeddings (Hands-on driven learning; see the sketch after this list)
  • Introduction to vector databases and critical concepts
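For a flavour of what playing with embeddings looks like, the sketch below embeds a few sentences with sentence-transformers and compares them via cosine similarity. The library and model name are illustrative assumptions, not workshop requirements.

    # Illustrative sketch using sentence-transformers; the library and model name
    # are assumptions, not workshop requirements.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    sentences = [
        "How do I reset my password?",
        "Steps to recover account access",
        "Best pizza places in Chennai",
    ]
    embeddings = model.encode(sentences, convert_to_tensor=True)

    # Semantically close sentences get a higher cosine similarity than unrelated ones.
    print(util.cos_sim(embeddings[0], embeddings[1]))  # related pair: high score
    print(util.cos_sim(embeddings[0], embeddings[2]))  # unrelated pair: low score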

Building Practical LLM Applications: RAG using Langchain

  • Key steps in building a RAG application
  • Introduction to Langchain
  • Loading a Variety of Documents
  • Strategies for Data Chunking
  • Building Vector Stores
  • Retrieval Techniques and their Importance
  • Magic of chains
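Put together, those steps look roughly like the sketch below. Import paths change between LangChain versions, and the file name, chunk sizes, and OpenAI defaults are illustrative assumptions, so treat it as an outline of the pipeline rather than copy-paste code.

    # Minimal LangChain RAG sketch. Import paths shift between LangChain versions,
    # and the file name, chunk sizes and OpenAI defaults are illustrative
    # assumptions; this is an outline of the steps, not copy-paste code.
    from langchain_community.document_loaders import TextLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_community.vectorstores import FAISS
    from langchain_openai import OpenAIEmbeddings, ChatOpenAI
    from langchain.chains import RetrievalQA

    docs = TextLoader("company_handbook.txt").load()                # 1. load documents
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=500, chunk_overlap=50).split_documents(docs)     # 2. chunk the data
    vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())  # 3. build the vector store
    retriever = vectorstore.as_retriever(search_kwargs={"k": 4})    # 4. retrieval
    qa_chain = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=retriever)  # 5. chain
    print(qa_chain.invoke({"query": "What does the handbook say about leave?"}))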

RAG using Open Source LLMs

  • Open-Source LLMs vs. Closed-Source LLMs
  • Popular open-source LLMs
  • Hands-on RAG using Open Source LLMs
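The generation step of such a pipeline might look like the sketch below, which uses the Hugging Face transformers pipeline with an open-source model; the model name and the hard-coded context are illustrative assumptions.

    # Generation step with an open-source model via the Hugging Face transformers
    # pipeline; the model name and the hard-coded context are illustrative assumptions.
    from transformers import pipeline

    generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

    retrieved_context = "Employees accrue 18 days of paid leave per year."  # would come from your retriever
    prompt = (
        f"Context:\n{retrieved_context}\n\n"
        "Question: How many leave days do employees get?\nAnswer:"
    )
    print(generator(prompt, max_new_tokens=64)[0]["generated_text"])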

Building Advanced RAG Systems

  • Advanced Retrieval Strategies
  • Practical tips and tricks to build RAG systems
  • Query Expansion
  • Cross Encoder based Reranking
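To make reranking concrete, the sketch below scores retrieved candidates against the query with a cross-encoder from sentence-transformers; the public reranker model named here is only an example.

    # Cross-encoder reranking sketch using sentence-transformers; the public
    # reranker model named here is only an example.
    from sentence_transformers import CrossEncoder

    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    query = "How do I reset my password?"
    candidates = [
        "To reset your password, open Settings and choose 'Reset password'.",
        "Our cafeteria menu changes every Monday.",
        "Password recovery links expire after 24 hours.",
    ]
    scores = reranker.predict([(query, doc) for doc in candidates])
    reranked = [doc for _, doc in sorted(zip(scores, candidates), reverse=True)]
    print(reranked[0])  # the candidate the cross-encoder judges most relevant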

Evaluation of RAG Systems

  • What does it mean to evaluate LLM responses?
  • Various RAG Evaluation Metrics and Their Importance
  • Evaluation of the RAG system (Hands-on)
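As a simple illustration of what evaluation means in practice, the sketch below computes a retrieval hit rate in plain Python. The eval_set, the retrieve() callable, and the doc.id attribute are hypothetical, and real evaluation suites cover far more than this single metric.

    # A deliberately simple RAG evaluation metric (retrieval hit rate) in plain
    # Python. eval_set, the retrieve() callable and the doc.id attribute are all
    # hypothetical; real suites add answer faithfulness, relevance and more.
    eval_set = [
        {"question": "What is our leave policy?", "relevant_doc_id": "hr-leave"},
        {"question": "How do I reset my password?", "relevant_doc_id": "it-password"},
    ]

    def hit_rate(retrieve, k: int = 4) -> float:
        hits = 0
        for example in eval_set:
            retrieved_ids = [doc.id for doc in retrieve(example["question"], k)]
            hits += example["relevant_doc_id"] in retrieved_ids
        return hits / len(eval_set)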

Agentic AI: RAG using Autonomous AI Agents (Bonus Session, if time permits)

  • What is an Agent?
  • How is it different from simple RAG?
  • Components in Agents
  • Agent Thinking Frameworks (CoT, ReAct, etc.; see the sketch after this list)
  • Implementation of Agentic AI using LangGraph (Hands-On)
  • LangGraph Architectures (Supervisor, Self-Reflection, Human Reflection, etc.)
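A framework-agnostic ReAct-style loop shows what separates an agent from plain RAG: the model decides which tool to call next and observes the result before answering. The call_llm() helper, its returned dictionary format, and the toy tools below are hypothetical stand-ins, not LangGraph code.

    # Framework-agnostic ReAct-style loop (not LangGraph code). call_llm() is a
    # hypothetical LLM client assumed to return a dict like
    # {"action": ..., "input": ..., "answer": ...}; the tools are toy examples.
    TOOLS = {
        "search_docs": lambda query: "Leave policy: 18 paid days per year.",
        "calculator": lambda expr: str(eval(expr)),  # toy only; never eval untrusted input
    }

    def run_agent(question: str, max_steps: int = 5) -> str:
        scratchpad = f"Question: {question}\n"
        for _ in range(max_steps):
            decision = call_llm(scratchpad)            # hypothetical: model picks the next action
            if decision["action"] == "finish":
                return decision["answer"]
            observation = TOOLS[decision["action"]](decision["input"])
            scratchpad += f"Action: {decision['action']}\nObservation: {observation}\n"
        return "Stopped after reaching max_steps."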

*Note: These are tentative details and are subject to change.