Speaker Detail

Anisha Udayakumar

AI Evangelist


Anisha is an AI Software Evangelist at Intel, specializing in the OpenVINO™ toolkit. Previously an Innovation Consultant at a leading Indian IT services and consulting firm, she guided business leaders in adopting emerging technologies for forward-thinking business solutions. Among her notable contributions is a set of vision-based algorithmic solutions that helped a global retail client meet its sustainability goals.

At Intel, Anisha is dedicated to enriching the developer community. She illuminates the capabilities of the OpenVINO toolkit, aiding developers in elevating their AI projects. Her role involves actively engaging with developers and enhancing their understanding and application of OpenVINO in crafting cutting-edge AI solutions. A lifelong learner and an ardent innovator, Anisha is enthusiastic about exploring and sharing the transformative impact of technology, continually inspiring the developer community with her insights and discoveries.

Explore how to efficiently deploy Generative AI (GenAI) models across various devices, from the cloud to the edge. This session will cover boosting performance, reducing latency, and enhancing flexibility in AI workloads. Through real-world use cases, we will demonstrate the versatility and power of leveraging OpenVINO™ for GenAI deployment. Learn practical implementation steps and key optimization techniques, and gain insights into the latest advancements and future enhancements. This talk aims to equip practitioners with the knowledge to deploy GenAI efficiently across different platforms.
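As a taste of the practical implementation steps the session covers, below is a minimal sketch of loading and running a generative model with the OpenVINO GenAI API. The model directory name and the GPU-first device-selection heuristic are illustrative assumptions, not part of the session material:

```python
# Minimal sketch: deploying a GenAI model with OpenVINO GenAI.
# The model directory is a placeholder; the device-preference order
# (GPU, then NPU, then CPU) is one common edge-deployment heuristic.

def pick_device(available):
    """Prefer GPU, then NPU, falling back to CPU."""
    for device in ("GPU", "NPU"):
        if device in available:
            return device
    return "CPU"

def run_demo(model_dir="TinyLlama-1.1B-ov-int4"):  # placeholder model folder
    # Imports kept local so the helper above stays importable without OpenVINO.
    import openvino as ov
    import openvino_genai

    # Query the devices OpenVINO can see on this machine and pick one.
    device = pick_device(ov.Core().available_devices)

    # LLMPipeline loads an OpenVINO-format model and runs text generation.
    pipe = openvino_genai.LLMPipeline(model_dir, device)
    print(pipe.generate("What is OpenVINO?", max_new_tokens=64))

if __name__ == "__main__":
    run_demo()
```

The same script runs unchanged from cloud VMs to edge boxes; only the device string selected at runtime differs, which is part of the flexibility the talk highlights.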

 


Managing and scaling ML workloads has never been a bigger challenge. Data scientists are looking for collaboration, building, training, and iterating across thousands of AI experiments. On the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance.

