Fine-tuning & Inference of Small Language Models like Gemma

About

The future of enterprise AI hinges on customized language models optimized for specific domains. Small Language Models (SLMs) are more computationally efficient than their larger counterparts, requiring less memory and storage and typically running faster and cheaper at inference time. They are cost-effective to train and deploy, which makes them accessible to a broader range of businesses and well suited to edge computing applications. SLMs also adapt readily to specialized applications and can be fine-tuned for specific tasks more efficiently than larger models. In this session, we will explore fine-tuning SLMs such as Gemma to build an AI-powered medical chatbot that assists patients and healthcare providers by answering general medical questions about diseases, health conditions, and treatment options, and by summarizing complex medical documents and articles.
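
To make the idea concrete, here is a minimal fine-tuning sketch using LoRA adapters with the Hugging Face transformers, peft, and datasets libraries. The checkpoint name (google/gemma-2b-it), the local data file (medical_qa.json with question/answer fields), the target module list, and all hyperparameters are illustrative assumptions rather than the session's actual materials.

    # Minimal LoRA fine-tuning sketch for a small Gemma checkpoint.
    # Assumes: transformers, peft, and datasets are installed, you have access to
    # google/gemma-2b-it, and a hypothetical medical_qa.json file with
    # "question"/"answer" fields exists locally.
    import torch
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_id = "google/gemma-2b-it"  # small instruction-tuned Gemma (assumption)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # bfloat16 assumes hardware with bf16 support; drop the dtype to run in fp32.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    # Attach low-rank adapters so only a small fraction of weights is trained.
    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)

    # Hypothetical local dataset: one JSON record per question/answer pair.
    data = load_dataset("json", data_files="medical_qa.json", split="train")

    def to_features(example):
        # Format each pair as a single Gemma chat-style training sequence.
        text = (f"<start_of_turn>user\n{example['question']}<end_of_turn>\n"
                f"<start_of_turn>model\n{example['answer']}<end_of_turn>\n")
        return tokenizer(text, truncation=True, max_length=512)

    tokenized = data.map(to_features, remove_columns=data.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="gemma-medical-lora",
                               per_device_train_batch_size=2,
                               num_train_epochs=1,
                               learning_rate=2e-4,
                               logging_steps=10),
        train_dataset=tokenized,
        # mlm=False makes the collator build causal-LM labels from the inputs.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("gemma-medical-lora")  # saves only the adapter weights

Because only the low-rank adapter weights are updated, a run like this typically fits on a single modern GPU for the smaller Gemma variants.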

Key Takeaways:

  • Attendees will gain a solid understanding of the advantages of small language models.
  • They'll have hands-on experience in fine-tuning Gemma for question-answering tasks.
  • They'll be equipped with the knowledge and tools to create their own fine-tuned models and run them for inference (see the sketch after this list).
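
As a companion to the fine-tuning sketch above, the following shows one way the saved adapter might be loaded and queried for question answering. The directory name, prompt, and generation settings are illustrative assumptions.

    # Minimal inference sketch: load the base Gemma model plus the saved LoRA adapter.
    # Assumes transformers, peft, and accelerate are installed (device_map="auto"
    # needs accelerate), and that the adapter was saved as in the sketch above.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_id = "google/gemma-2b-it"
    adapter_dir = "gemma-medical-lora"  # hypothetical adapter directory

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id,
                                                 torch_dtype=torch.bfloat16,
                                                 device_map="auto")
    model = PeftModel.from_pretrained(model, adapter_dir)
    model.eval()

    # Build a chat-formatted prompt with the tokenizer's chat template.
    messages = [{"role": "user",
                 "content": "What are common symptoms of iron-deficiency anemia?"}]
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                           return_tensors="pt").to(model.device)

    with torch.no_grad():
        output = model.generate(inputs, max_new_tokens=256, do_sample=False)

    # Strip the prompt tokens and print only the model's answer.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))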

