Generative image recognition leverages generative AI to analyze images, automatically creating descriptive labels and extracting comprehensive information. This technology can interpret and understand visual content, generating insights and answering queries about the image. By combining advanced machine learning techniques with image analysis, it enhances the accuracy and depth of image recognition, making it possible to identify objects, elucidate details about them, assign categories, and even infer context within the visual data. This deeper understanding allows for more nuanced insights and more accurate answers to questions about the image.
Autonomous AI agents are reshaping the AI landscape by revolutionizing how we interact with technology and the capabilities of LLM-powered AI systems. These agents, which can independently perceive their environment, make decisions, and take actions without human intervention, are becoming increasingly prevalent across various industries and applications.
In this talk, "Agentic AI: The Rise of Autonomous AI Agents and LangGraph," we will delve into the emerging theme of agentic AI and explore the progressive levels of maturity in constructing generative AI applications using large language models (LLMs).
Attendees will be introduced to the world of autonomous AI agents, understanding what an agent is and how it differs from other maturity levels of building a Gen AI application using LLMs, such as Retrieval-Augmented Generation (RAG) systems. The session will cover the key components and operational principles of agentic AI, highlighting popular frameworks like Chain of Thought (CoT) and ReAct that guide the cognitive processes of autonomous agents. Additionally, we will examine how to enhance RAG applications with agentic RAG, pushing the boundaries of what these systems can achieve.
LangGraph is closely related to agentic AI, providing a framework for defining agentic AI workflows as graphs. Through demonstrations of LangGraph, we will witness the powerful capabilities of these autonomous agents. We will explore LangGraph's functionalities and various architectures that exemplify agentic AI systems, such as Supervisor, Self Reflection, and Human Reflection.
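The graph-based workflow idea can be illustrated with a minimal sketch in plain Python. The node names, routing logic, and state shape below are hypothetical simplifications of the Supervisor pattern, not the actual LangGraph API:

```python
# Minimal sketch of an agentic workflow expressed as a graph of nodes.
# Node and state names are illustrative; real LangGraph builds this with
# StateGraph, edges, and conditional routing.

def supervisor(state):
    # Route to a worker until research notes exist, then hand off to the writer.
    state["next"] = "researcher" if not state.get("notes") else "writer"
    return state

def researcher(state):
    state["notes"] = "collected facts"          # stand-in for an LLM/tool call
    state["next"] = "supervisor"
    return state

def writer(state):
    state["answer"] = f"Report based on: {state['notes']}"
    state["next"] = "END"
    return state

NODES = {"supervisor": supervisor, "researcher": researcher, "writer": writer}

def run(state, entry="supervisor"):
    # Walk the graph until a node routes to the terminal END marker.
    node = entry
    while node != "END":
        state = NODES[node](state)
        node = state["next"]
    return state

result = run({"task": "summarize market news"})
print(result["answer"])
```

The supervisor node repeatedly decides which worker acts next, which is the essence of the Supervisor architecture mentioned above.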
Everyone knows how to build RAG systems, but how do you improve them? Retrieval Augmented Generation (RAG) systems have quickly become among the industry's biggest successes for driving Generative AI use cases on custom enterprise data. However, with their success comes a whole list of pain points that can lead to failure or sub-optimal performance in RAG systems.
This session is inspired by the famous paper “Seven Failure Points When Engineering a Retrieval Augmented Generation System” by Barnett et al., which discusses some of the major challenges and points of failure in RAG Systems. However, clear solutions to these challenges are not mentioned in detail.
This session aims to bridge this gap where we will cover the major challenges and pain points when building real-world RAG systems, which include:
Besides the challenges, we will also cover practical solutions for addressing them using the latest and best techniques, including:
The talk will walk through each challenge, discuss potential solutions, and showcase some of them with hands-on code leveraging popular frameworks like LangChain and LlamaIndex.
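Two of the most common fixes for weak retrieval can be sketched in a few lines of plain Python. The corpus, synonym table, and overlap scoring below are toy stand-ins for the embedding models and rerankers that frameworks like LangChain and LlamaIndex provide:

```python
# Sketch of two common RAG improvements: query expansion and reranking.
# Real systems use embedding similarity and learned rerankers; keyword
# overlap and a hand-written synonym table stand in here.

CORPUS = [
    "Quarterly revenue grew 12 percent year over year.",
    "The cafeteria menu changes every Monday.",
    "Operating income and revenue figures appear in the annual report.",
]

SYNONYMS = {"revenue": ["income", "sales"]}

def expand(query):
    # Query expansion: add synonyms so retrieval misses fewer documents.
    terms = query.lower().split()
    for t in list(terms):
        terms += SYNONYMS.get(t, [])
    return terms

def retrieve(query, k=2):
    terms = expand(query)
    # Score each document by overlapping terms, then rerank by score.
    scored = [(sum(t in doc.lower() for t in terms), doc) for doc in CORPUS]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

print(retrieve("revenue report"))
```

Expanding "revenue" to also match "income" lets the annual-report document outrank a superficially similar one, which is the kind of failure-point fix the session explores in depth.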
Have you ever been confused between Matt Damon and Mark Wahlberg, Daniel Radcliffe and Elijah Wood, or Margot Robbie and Jaime Pressly? This talk delves into the fascinating world of finding actor look-alikes using multi-modal large language models (LLMs).
By leveraging textual and visual data, these advanced models create detailed embeddings of actors, allowing for the identification of similar-looking pairs. We will explore how these embeddings can uncover clusters of look-alikes within the vast universe of actors and determine the degree of resemblance between different actors, providing insights into the intriguing overlaps in facial features across Hollywood.
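The core of the look-alike search is cosine similarity between embedding vectors. A minimal sketch, assuming made-up 4-dimensional embeddings (real multi-modal models produce vectors with hundreds of dimensions, and the actor names here are placeholders):

```python
import math

# Toy sketch: finding the most similar pair of actors from embedding
# vectors. The vectors are fabricated for illustration only.

EMBEDDINGS = {
    "Actor A": [0.9, 0.1, 0.3, 0.5],
    "Actor B": [0.88, 0.12, 0.29, 0.52],   # deliberately close to Actor A
    "Actor C": [0.1, 0.9, 0.7, 0.2],
}

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def closest_pair(embeddings):
    # Compare every pair and keep the one with the highest similarity.
    names = list(embeddings)
    return max(
        ((a, b) for i, a in enumerate(names) for b in names[i + 1:]),
        key=lambda p: cosine(embeddings[p[0]], embeddings[p[1]]),
    )

print(closest_pair(EMBEDDINGS))  # the look-alike pair
```

The same pairwise-similarity idea scales up to clustering look-alikes across the whole embedding space.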
This session focuses on developing multilingual Generative AI inspired by the Aya project. You will see a demonstration of how to fine-tune an open-weights aya-23-8B model on a new language using QLoRA. The session will emphasize the importance of multilingual capabilities in generative AI, their challenges and opportunities, and practical steps for evaluation and safety considerations.
In this interactive, hands-on session, we will explore the fascinating world of Generative AI, starting with its key concepts, applications, and real-world impact. We'll introduce the Intel® Tiber™ Developer Cloud, highlighting its features and benefits, including building and maintaining AI projects easily on the cloud without worrying about maintaining your own infrastructure.
The session will feature live demonstrations of Stable Diffusion, showcasing text-to-image and image-to-image generation. It will also cover a detailed demo and explanation of building a Retrieval-Augmented Generation (RAG) system on the cloud. Making AI models more efficient is of paramount importance today. We will highlight the synergy between HuggingFace and Intel®, demonstrating how popular open AI models can run more efficiently on Intel® hardware. The talk will conclude with an interactive Q&A session.
Over the past two years, Generative AI has been thrown at every problem, from its traditional forte of text and image generation to planning problems and time series forecasting. Keeping in mind that transformer and diffusion-based models are both probabilistic inference engines, the question in every practitioner's mind is: How can I leverage GenAI without leaving myself at risk if (when?) things go wrong? This talk will focus on the responsible use of GenAI techniques in decision-making systems. We will cover the strengths and weaknesses of generative approaches in quantitative domains and discuss ways their vast pre-training can be leveraged while retaining the robustness of automated systems. To ground the discussion, we will examine how (and, more importantly, whether) we can use LLMs in time series forecasting and logistics planning, and where diffusion models are a better bet than transformer-based architectures.
This session offers an in-depth exploration of leveraging CUDA to optimize NVIDIA hardware, essential for the explosive growth of Generative AI applications. Generative AI models, which include image and text generation and code completion, rely heavily on accelerated hardware for efficient training and inference. While high-level frameworks like PyTorch and TensorFlow simplify the process, true optimization and control are unlocked through CUDA, NVIDIA's low-level programming model that interfaces directly with GPUs.
The session begins with a thorough review of C programming fundamentals, ensuring a solid base. It then demystifies core CUDA concepts, including threads, blocks, grids, and memory hierarchies, teaching participants to think in parallel for efficient GPU utilization. The course is designed to be highly interactive, with practical sessions guiding learners through writing their own kernels, the essential workhorses of CUDA programs. This hands-on approach provides participants with real-world experience in parallel programming, ensuring they understand and effectively utilize the power of parallel processing in Generative AI.
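The thread/block/grid indexing at the heart of CUDA can be illustrated without a GPU. This plain-Python sketch simulates how each thread derives a unique global index and processes one element of a vector addition; in a real CUDA C kernel the key line is `i = blockIdx.x * blockDim.x + threadIdx.x`:

```python
# Plain-Python illustration of CUDA's thread indexing. On a GPU, the two
# loops below do not exist: every (block, thread) pair runs concurrently,
# and each thread computes only its own element.

def vector_add(a, b, block_dim, grid_dim):
    out = [0.0] * len(a)
    for block_idx in range(grid_dim):            # blocks of the grid
        for thread_idx in range(block_dim):      # threads within a block
            i = block_idx * block_dim + thread_idx   # global thread index
            if i < len(a):                       # bounds guard: the grid may overshoot
                out[i] = a[i] + b[i]
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [10.0, 20.0, 30.0, 40.0, 50.0]
# 3 blocks of 2 threads cover 5 elements; the 6th thread is masked off by the guard.
print(vector_add(a, b, block_dim=2, grid_dim=3))
```

The bounds guard is the same pattern real kernels use when the data size is not a multiple of the block size.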
In this engaging session, we will explore the world of large language model (LLM) agents, sophisticated AI systems that extend the capabilities of standard language models to execute specific, complex tasks. We will demystify the core components that define LLM agents: the Agent Core, Memory Systems, Tools, and Planning Module, detailing how these elements integrate to enhance decision-making and task execution.
We will explore a range of practical applications of AI agents in finance, demonstrating how LLM agents can find and analyze financial data for you. For instance, if you want to see today's market trends and the news affecting the stock market, you can simply run your agent and get the information immediately.
The session's highlight will be a live demonstration, during which attendees will see an LLM agent in action. This demonstration will focus on a simulated scenario where agents operate on financial data.
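The core-plus-tools architecture can be sketched in a few lines. Everything below is hypothetical: the market prices are made up, and `fake_llm_plan` is a hardcoded stand-in for the Agent Core that would normally ask an LLM which tool to call:

```python
# Sketch of an LLM-agent tool loop on simulated financial data.

MARKET = {"AAPL": 227.5, "MSFT": 415.2}       # fabricated prices

def get_price(ticker):
    # A registered tool; real agents would call a market-data API here.
    return MARKET[ticker]

def fake_llm_plan(question):
    # Stand-in for the planning module: a real agent asks an LLM to pick
    # a tool and arguments; here we extract the ticker naively.
    ticker = question.split()[-1].strip("?")
    return {"tool": "get_price", "args": {"ticker": ticker}}

TOOLS = {"get_price": get_price}

def run_agent(question):
    step = fake_llm_plan(question)                  # plan
    result = TOOLS[step["tool"]](**step["args"])    # act (tool execution)
    return f"The latest price of {step['args']['ticker']} is {result}"

print(run_agent("What is the price of AAPL?"))
```

Swapping the stub planner for a real LLM call, and the dictionary for live data feeds, turns this skeleton into the kind of financial agent the demonstration covers.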
Explore the common pitfalls of GenAI implementation and learn how to avoid them. This session delves into why many GenAI projects fail and offers practical strategies for success. Gain insights into best practices for deploying effective, innovative GenAI systems that enhance business outcomes.
Explaining the results of an AI model is essential in many cases to make it acceptable to business users and regulators. The complexity of explaining a model has increased exponentially with the advent of massive deep neural models with billions of connections in the new age of Gen AI models. This session presents an efficient approach to building explainability into AI models. Attendees will be introduced to the state of the art in explainability with live examples.
Large Language Models (LLMs) and in-context learning have introduced a new paradigm for developing natural language understanding systems: prompts are all you need! Prototyping has never been easier, but not all prototypes give a smooth path to production. In this talk, titled "Reality is Not an End-to-End Prediction Problem: Applied NLP in the Age of Generative AI," I'll share the most important lessons we've learned from solving real-world information extraction problems in industry and show you a new approach and mindset for designing robust and modular NLP pipelines in the age of Generative AI.
Join us for an immersive workshop designed to unveil the essence of machine learning and enhance your technical prowess. This workshop covers everything from foundational concepts like regression and classification to advanced topics such as model evaluation and tree-based models. Dive into prompt engineering, explore linear and tree-based models, and master advanced ML workflows to elevate your data science skills to new heights.
Overview
In today's rapidly evolving digital landscape, Retrieval-Augmented Generation (RAG) has emerged as a transformative technology, permeating various industries and shaping the future of artificial intelligence. RAG's ability to seamlessly integrate retrieval and generation capabilities has unlocked unprecedented possibilities, revolutionizing how we approach problem-solving, decision-making, and knowledge creation.
RAG is an AI framework that enhances the accuracy and reliability of large language models (LLMs) by allowing them to retrieve relevant information from external knowledge sources before generating a response. This helps ground the LLM's outputs in factual data, reducing the risk of hallucinating incorrect or misleading information.
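The retrieve-then-generate loop described above can be sketched in plain Python. The knowledge base, the keyword-overlap retriever, and the template "generator" are toy stand-ins for the vector store and LLM that a real RAG system uses:

```python
# Minimal sketch of the RAG loop: retrieve relevant context from a
# knowledge source, then ground the generated answer in it.

KNOWLEDGE = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Orders ship within 2 business days.",
}

def retrieve(question):
    # Real systems use embedding similarity; keyword overlap stands in here.
    for topic, fact in KNOWLEDGE.items():
        if any(word in question.lower() for word in topic.split()):
            return fact
    return None

def generate(question, context):
    # Grounding: answer only from retrieved facts, refuse otherwise
    # instead of hallucinating.
    if context is None:
        return "I don't know."
    return f"According to our records: {context}"

question = "What is your refund policy?"
print(generate(question, retrieve(question)))
```

The explicit "I don't know" branch is the simplest form of the hallucination reduction RAG provides: when retrieval finds nothing, the generator has nothing to invent from.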
Big Business Benefits of RAG
Welcome to the "Building GenAI Applications Using RAG" workshop, a comprehensive journey from absolute beginner to advanced RAG application developer. Throughout this immersive experience, participants will progress from foundational concepts to building sophisticated RAG systems, all driven by hands-on learning.
The workshop begins by demystifying the evolution of Generative AI and clarifying key terms, guiding even novices through the complex landscape. Participants will explore various GenAI approaches, equipping themselves with the knowledge to make informed decisions for their specific business applications. From there, attendees dive into the heart of RAG, starting with tokenization and advancing to building practical RAG applications using LangChain. Each step is hands-on, ensuring tangible outcomes and solid takeaways. As the day unfolds, participants master advanced retrieval strategies, query expansion, and evaluation techniques, honing their skills for real-world application. Bonus topics on super-advanced concepts await, time permitting.
Through this workshop, participants emerge with the confidence and capability to navigate the complexities of Generative AI, from theory to application, and get started on a transformative journey toward becoming confident RAG developers.
Prerequisite: Basic knowledge of Python, Fundamentals of Transformers Architecture (nice to have)
This workshop is designed to provide a comprehensive overview of LLMs, from foundational concepts to advanced applications. Whether you're a beginner or have intermediate experience, you will gain valuable insights and hands-on experience with some of the most cutting-edge technologies in the field.
LLMs have taken the world by storm since their inception, and the past year has marked a significant shift in the AI industry and its impact on our day-to-day lives.
As an engineer working on LLMs, tackling the challenges of collaborating, training, scaling, and monitoring such massive models has become increasingly complex. LLMOps encompasses the practices, techniques, and tools necessary for the operational management of large language models in production. It's the infrastructure created by LLMOps that drives efficiency, agility, security, and scalability for both its engineers and end-users.
Join us in this immersive LLMOps workshop, where we'll embark on a day-long journey, delving into various modules crafted to equip you with actionable insights and hands-on skills to harness the full potential of LLMs.
Prerequisite: AWS account with SageMaker, EKS, and Bedrock full access
Welcome to the cutting edge of financial innovation, where the intersection of artificial intelligence and finance is transforming the landscape of opportunity. In this workshop, "GenAI for Finance: Applications & Responsible Use," we invite you to explore the fusion of technological artistry with financial pragmatism. In this workshop, we will build GenAI-based applications to extract insights from financial reports—such as earnings and annual reports—enabling you to make more informed, data-driven investment decisions confidently. We will also build GenAI-based agents to connect to open-source APIs to extract financial data and create trading signals.
Through our immersive modules, you'll delve into the intricacies of Retrieval-Augmented Generation (RAGs) and AI Agents, learning to harness these tools skillfully to generate insights from financial documents. This journey, however, isn't solely about mastering technical skills; it's about striking a balance between innovation and responsibility. As we navigate the high-stakes realm of algorithm-driven decisions, it's crucial to approach these choices with care and ethical consideration. We'll also explore how to use language models responsibly while generating insights, ensuring that our advancements in AI are both effective and conscientious.
Have you ever used groundbreaking technologies such as Midjourney and Stable Diffusion for your professional and personal work? I am sure you have. These technologies have taken the world by storm and have become part of our lives. Wondering how these technologies work and what makes them so incredibly effective? The answer lies in the power of Diffusion Models.
Diffusion models have become the backbone of modern computer vision! From DALL-E 2 to Midjourney, these powerful models have revolutionized the way machines understand and process information. But what exactly are Diffusion Models, and how do they work? Don't worry! You are in the right place. Welcome to the workshop on Diffusion Models!
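The forward (noising) process at the heart of diffusion models can be sketched with scalars standing in for images. This is a simplified illustration of one noising step; the schedule value `beta=0.02` and the 1000-step horizon are illustrative choices, not from any particular model:

```python
import math
import random

# Toy sketch of the forward diffusion process: data is gradually corrupted
# with Gaussian noise over many steps, and a model is trained to reverse
# this, step by step, to generate new samples from pure noise.

def forward_step(x, beta):
    # One noising step: x_t = sqrt(1 - beta) * x_{t-1} + sqrt(beta) * noise
    return math.sqrt(1 - beta) * x + math.sqrt(beta) * random.gauss(0, 1)

random.seed(0)
x = 1.0                        # a "clean" data point (pixel stand-in)
for t in range(1000):          # after many steps, x is essentially pure noise
    x = forward_step(x, beta=0.02)
print(round(x, 3))
```

After 1000 steps the surviving fraction of the original signal is `sqrt(0.98)**1000`, which is far below 1e-4; reversing this corruption is exactly what the trained denoising network learns to do.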
By the end of the workshop, you will be able to:
In this workshop, you will get a comprehensive introduction to the world of training and fine-tuning Large Language Models (LLMs) and Generative AI. Over the course of five modules, you will explore the essentials of LLMs, deep-dive into training and fine-tuning methodologies, and learn about parameter-efficient techniques like Low-Rank Adaptation (LoRA) and Quantized Low-Rank Adaptation (QLoRA), as well as instruction-based fine-tuning using techniques like Supervised Fine-Tuning and Reinforcement Learning from Human Feedback (RLHF). You will also learn about the advantages, disadvantages, and best practices for these techniques. This is not a university lecture. We will follow a hybrid approach where you will learn the concepts behind these techniques and also spend a lot of time in live hands-on sessions, actually training and fine-tuning LLMs using tools like the HuggingFace ecosystem and Unsloth AI.
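The arithmetic behind LoRA fits in a few lines. A minimal sketch in plain Python, assuming tiny 4x4 matrices chosen purely for illustration: instead of updating a large frozen weight matrix W, LoRA trains two small factors A and B and applies `W' = W + (alpha / r) * B @ A`:

```python
# Sketch of the Low-Rank Adaptation (LoRA) update. The frozen weight W
# is the identity here; B and A are the only trainable parameters.

def matmul(X, Y):
    # Plain-Python matrix multiply (rows of X against columns of Y).
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, r, alpha = 4, 1, 2            # hidden size, LoRA rank, scaling factor
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen

# Trainable low-rank factors: B is d x r, A is r x d, with r << d,
# so only 2 * d * r parameters are trained instead of d * d.
B = [[0.5], [0.0], [0.0], [0.0]]
A = [[0.0, 1.0, 0.0, 0.0]]

delta = matmul(B, A)             # full d x d update from the small factors
scale = alpha / r
W_adapted = [[W[i][j] + scale * delta[i][j] for j in range(d)]
             for i in range(d)]

print(W_adapted[0])              # the first row now carries the adaptation
```

QLoRA applies the same update on top of a quantized frozen W, which is why both techniques share this core equation.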
In this workshop, you'll journey through Reinforcement Learning (RL), starting with fundamental concepts and advancing to complex techniques, focusing on real-world applications. Time permitting, you'll also explore how Large Language Models (LLMs) can optimize RL reward functions in a human-centric manner. Whether you're a seasoned AI professional or just beginning, this workshop equips you with the skills and knowledge to tackle real-world challenges using RL.
Discover how cutting-edge technologies leverage Reinforcement Learning (RL) to achieve groundbreaking results! For instance, the revolutionary generative model ChatGPT utilizes RL techniques behind the scenes. The core principle driving ChatGPT is Reinforcement Learning from Human Feedback (RLHF), which aligns Large Language Models (LLMs) with human preferences. This demonstrates the immense potential of RL to solve real-world problems and transform industries. Join our workshop to harness the power of RL and become a part of the AI revolution!
Explore the evolution of Generative AI, enterprise applications, and the hardware essentials for large language models. Delve into Intel's AI-enhancing technologies, from Xeon® and Gaudi® AI Accelerators to the Core Ultra series. Engage in hands-on sessions optimizing inference and deploying AI models on Intel® Developer Cloud.
Right now, people all over the world are going bonkers over something called ChatGPT. In this workshop, we’ll learn the basic concepts behind how ChatGPT works and then learn how to code, train, and use our own version of it from scratch using PyTorch. From this coding experience, we’ll learn about the strengths and weaknesses of models like ChatGPT, as well as discuss alternative design strategies. Then we’ll learn how to fine-tune a production language model on a custom dataset. Fine-tuning on a custom dataset gives us more control over how the model behaves and can make it more reliable.
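A flavor of the central mechanism inside ChatGPT-style models can be given in plain Python. This is a stripped-down sketch of causal self-attention, assuming tiny 2-dimensional vectors as token embeddings; real models use learned query/key/value projections and many attention heads:

```python
import math

# Toy sketch of causal self-attention: each token mixes information from
# itself and earlier tokens, weighted by softmax-normalized similarity.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def causal_self_attention(tokens):
    out = []
    for i, query in enumerate(tokens):
        context = tokens[: i + 1]            # causal mask: no peeking ahead
        # Similarity of this token to each visible token (dot products).
        scores = [sum(q * k for q, k in zip(query, key)) for key in context]
        weights = softmax(scores)
        # Weighted mixture of the visible token vectors.
        mixed = [sum(w * v[d] for w, v in zip(weights, context))
                 for d in range(len(query))]
        out.append(mixed)
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = causal_self_attention(tokens)
print([[round(x, 2) for x in row] for row in result])
```

The first output equals the first input exactly (a token attending only to itself), which makes the causal masking easy to verify by eye; stacking this operation with learned projections is the step the workshop builds up to in PyTorch.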
NOTE: This workshop will be done in “StatQuest Style” meaning every little detail will be clearly explained. We’ll also start each module with a silly song.
Join leading data scientists and GenAI researchers to solve and build practical GenAI solutions that translate to immediate impact in your job role.
Experience a vibrant congregation of the world's brightest GenAI minds and explore groundbreaking innovations shaping the future of GenAI.
Connect with the top AI thought leaders to gain exclusive insights, build valuable relationships, and discover actionable strategies to implement in your own work.
Celebrate the groundbreaking achievements and inspiring journeys of the visionaries leading the charge in AI advancements.
Engage in a thrilling challenge where 4 participants compete to generate the closest matching image and solve coding problems. Winners receive cash rewards.
Create the best pick-up lines against an AI Avatar. Audience votes determine the winner in real-time.
Scan your palm for future predictions in various life aspects. Play Compatibility Test with friends.
Capture and creatively alter photos with a friend. Get instant prints and share on social media.
Perfect your favorite celebrity's walk with MediaPipe analysis. Get a breakdown and challenge friends.
Race against an opponent in a virtual car race powered solely by your mind!
Conference only
7-9 Aug
Conference + Workshop
7-10 Aug
A prestigious recognition of outstanding individuals at the forefront of AI innovation.
Get in touch with us for sponsorship and event details
Shveta Gupta
Ummed Saini
This is India's premier GenAI conference that brings together the brightest minds in the field. Discover cutting-edge Generative AI technologies, gain hands-on practical skills, and connect with a vibrant AI community to share the passion for AI.
The event will take place over the course of 4 days, from August 7th to 10th, 2024, at the NIMHANS Convention Centre in Bengaluru. While the conference runs for 3 days, the 4th day (the last day) consists of day-long workshops conducted in a classroom setup. The workshop venue details will be announced shortly.
This year’s theme is about defining the new world order powered by Generative AI. This groundbreaking technology is reshaping everything from industries and economies to healthcare and entertainment. Join us and delve into the cutting edge of Generative AI. Discover how it is impacting your field, gain the skills to harness its power, and become a leader in this transformative era.
Yes. We offer special group discounts to ensure your experience is one to remember. Just drop us an email at [email protected] or call us at +91 9871108700
DataHack Summit 2024 offers two types of tickets. The first option, Conference + HackDay, is ideal for those looking to immerse themselves in technical talks, network with industry leaders, and participate in intriguing hack sessions. The second option, Conference + HackDay + Workshop, provides access to all of the above plus an intensive workshop designed to help participants master a specific GenAI challenge. For discounts and price-related information, please refer to the pricing page.
DataHack Summit 2024 tickets are live on our website https://www.analyticsvidhya.com/datahacksummit/. We offer two main tracks to cater to diverse learning goals and interests. Head over to the website, choose the track that aligns with your learning goals, and secure your pass! You can also book your passes directly from https://in.explara.com/e/datahack-summit-2024/checkout
Yes, Datahack Summit 2024 welcomes speakers from various backgrounds and expertise in data science and AI. We encourage you to submit your proposal through our Call for Speakers page. Share your knowledge, inspire the next generation of AI talent, and be a part of this transformative event!
GenAI Hack Sessions provide brief, expert-led demonstrations of data-driven solutions in an auditorium setting, ideal for quick insights. In contrast, Workshops offer extensive, interactive learning opportunities with hands-on problem-solving in a classroom setting, focusing on skill-building and deep understanding.
DataHack Summit 2024 is designed to foster collaboration and interaction between the brightest minds in AI. We offer a multitude of dedicated networking opportunities like collaborative sessions, social events and more. Beyond scheduled events, the entire DataHack Summit experience is designed to spark connections. From coffee breaks to lunchtime conversations, you'll be surrounded by inspiring individuals who share your passion for AI.
Yes, attendees at DataHack Summit 2024 will be provided with lunch and high tea during the conference. These refreshments offer an excellent opportunity for both learning and valuable networking.
DataHack Summit 2024 offers exciting sponsorship opportunities for companies looking to showcase their brand, network with key decision-makers, and support the future of Generative AI. We offer a variety of sponsorship packages tailored to meet your specific goals and budget. To explore these options and discuss how you can become a valued partner of DataHack Summit 2024, please contact our sponsorship team at [email protected].