
Jayita Bhattacharyya

Senior Associate Consultant


Jayita Bhattacharyya, a Senior Associate Consultant at Infosys Center for Emerging Technology Solutions, is an expert in generative AI. She collaborates with her team to integrate AI seamlessly into existing software products, addressing immediate needs with tailored solutions. With a deep passion for the AI/ML landscape, Jayita has recently achieved victories in the Infosys Data For AI Hackathon and the Informatica Data Engineering 2024 Hackathon. As an official code-breaker and hackathon wizard, she frequently shares her insights on cutting-edge technologies through her blogs, demonstrating her commitment to advancing the field and educating others.

It's time to say bye-bye to naive/vanilla RAG systems, where we could simply plug in clean sample data and query it with an LLM. Moving from PoC to production requires tuning several parameters, with performance a key factor in achieving better results. Search and retrieval systems need proper data preprocessing before documents are ingested into vector databases. Let us walk through a few building blocks for an advanced RAG pipeline that can be deployed and scaled in real time.
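As a sketch of the preprocessing-and-ingestion step described above, here is a minimal in-memory stand-in. The chunker, the bag-of-words "embedding", and the `VectorStore` class are all illustrative assumptions: a real pipeline would call an embedding model and a proper vector database.

```python
import math
import re

def chunk(text, size=40, overlap=10):
    """Split raw text into overlapping word windows before ingestion."""
    words = text.split()
    chunks, i = [], 0
    while i < len(words):
        chunks.append(" ".join(words[i:i + size]))
        i += size - overlap
    return chunks

def embed(text):
    """Toy bag-of-words 'embedding'; stands in for a real embedding model."""
    vec = {}
    for w in re.findall(r"[a-z]+", text.lower()):
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * b.get(w, 0) for w, c in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.items = []

    def ingest(self, text):
        # Preprocess (chunk) first, then embed and store each chunk.
        self.items += [(c, embed(c)) for c in chunk(text)]

    def query(self, q, k=2):
        qv = embed(q)
        ranked = sorted(self.items, key=lambda it: -cosine(qv, it[1]))
        return [c for c, _ in ranked[:k]]
```

The point of the sketch is the ordering: chunking and cleaning happen before ingestion, so retrieval quality is decided largely at preprocessing time.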

Implementing robust and performant RAG systems is the industry's next big goal. Handling multiple operations while keeping latency low presents challenges. AI agents have proven handy for automating such routing tasks. Observability tools are the next step toward scalability, allowing LLM debugging at each step of these workflows. The stack trace helps with app session handling and gives a deep dive into the inference flow and its outcome.
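The agent-routing-with-tracing idea can be sketched as follows. This is a hypothetical example: the keyword check stands in for an LLM-based router, and the handler names (`billing_db`, `docs_search`) are made up, but the `Tracer` shows the shape of the per-step spans that observability tools record for debugging.

```python
import time

class Tracer:
    """Collects one span per workflow step so each decision can be debugged later."""
    def __init__(self):
        self.spans = []

    def record(self, step, detail):
        self.spans.append({"step": step, "detail": detail, "ts": time.time()})

def route(query, tracer):
    # Keyword routing stands in for an LLM-based router in this sketch.
    tracer.record("route", query)
    handler = "billing_db" if "invoice" in query.lower() else "docs_search"
    tracer.record("dispatch", handler)
    return handler
```

After a session, `tracer.spans` is the trace you would inspect: which route was chosen, with what input, and when, at every step of the workflow.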

This makes it practical to apply LLMs to large knowledge bases and augment them with retrieved context. Manage and control your data to make informed decisions for business use cases across BFSI, legal, healthcare, and other similar domains.

 


Managing and scaling ML workloads has never been a bigger challenge. Data scientists are looking for collaboration while building, training, and iterating on thousands of AI experiments. On the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance.
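The experiment-iteration side of this can be sketched with a minimal run tracker. The `ExperimentTracker` class here is an illustrative assumption, not any particular tool's API: it only shows the core loop of logging hyperparameters and metrics per run and retrieving the best one.

```python
import hashlib
import json

class ExperimentTracker:
    """Minimal run log: params in, metrics out, best run retrievable."""
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        # Deterministic short id derived from the hyperparameters.
        run_id = hashlib.sha1(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest()[:8]
        self.runs.append({"id": run_id, "params": params, "metrics": metrics})
        return run_id

    def best(self, metric):
        # Pick the run that maximizes the given metric.
        return max(self.runs, key=lambda r: r["metrics"][metric])
```

Real platforms add the parts this sketch omits: shared storage for artifacts, distributed training hooks, and automated deployment of the winning run.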

