Speaker Detail

Joinal Ahmed

AI Architect


Joinal is a seasoned Data Science expert passionate about rapid prototyping, community involvement, and driving technology adoption. With a robust technical background, he excels in leading diverse teams through ML projects, recruiting and mentoring talent, optimizing workflows, and establishing top-tier MLOps & Data platforms for high-performance analytics solutions.

The future of enterprise AI hinges on customized language models optimized for specific domains. Small Language Models (SLMs) are more computationally efficient than their larger counterparts, requiring less memory and storage and running faster at inference time. Training and deploying SLMs is cost-effective, which makes them accessible to a broader range of businesses and well suited to edge computing. SLMs also adapt more readily to specialized applications and can be fine-tuned for specific tasks more efficiently than larger models. In this session, we will explore fine-tuning SLMs such as Gemma to build an AI-powered medical chatbot that assists patients and healthcare providers by answering general medical questions about diseases, health conditions, and treatment options, and by summarizing complex medical documents and articles.
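The efficiency claim above rests on parameter-efficient fine-tuning techniques such as LoRA, which the session's Gemma example would typically use. The sketch below is a toy, numpy-only illustration of the core idea (the names `rank` and `alpha` follow the LoRA convention; the data is random), not the actual training setup from the talk: the pretrained weight stays frozen while only a small low-rank correction is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 16, 16, 2          # rank << d_in keeps trainable params small
W = rng.normal(size=(d_out, d_in))     # frozen "pretrained" weight
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))            # B starts at zero, so the update starts at zero
alpha = 1.0

def forward(x):
    """Frozen base projection plus the trainable low-rank correction B @ A."""
    return x @ W.T + (x @ A.T) @ B.T * (alpha / rank)

x = rng.normal(size=(4, d_in))
y_base = x @ W.T
assert np.allclose(forward(x), y_base)  # before training, output is unchanged

full_params = W.size                    # what a full fine-tune would update
lora_params = A.size + B.size           # what LoRA actually trains
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
```

Even in this toy setting the trainable parameter count drops from 256 to 64; at Gemma scale the same ratio is what makes fine-tuning affordable on modest hardware.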


Gear up for an enlightening, comparative session at this year’s DataHack Summit! In this highly anticipated hack panel, we bring together leading AI practitioners to evaluate and compare open-source and commercial Large Language Models (LLMs) across a range of tasks.

  • Phi-3 vs GPT-4o vs Llama 3

    This panel offers a unique opportunity to delve into the strengths, weaknesses, and performance of these models in real-world applications.
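Comparisons like the one this panel covers usually run each model over a shared prompt set and score the outputs with a common metric. The harness below is a minimal, runnable sketch of that pattern using exact-match accuracy; the `models` dict maps a name to a stand-in callable, where in practice each entry would wrap real Phi-3 / GPT-4o / Llama 3 inference (the stand-ins and dataset here are purely illustrative).

```python
def evaluate(models, dataset):
    """Return exact-match accuracy per model on (prompt, reference) pairs."""
    scores = {}
    for name, generate in models.items():
        correct = sum(generate(prompt).strip() == ref for prompt, ref in dataset)
        scores[name] = correct / len(dataset)
    return scores

# Toy stand-ins so the harness runs end to end without any model weights.
dataset = [("2+2=", "4"), ("capital of France?", "Paris")]
models = {
    "model_a": lambda p: {"2+2=": "4", "capital of France?": "Paris"}.get(p, ""),
    "model_b": lambda p: {"2+2=": "4"}.get(p, "unsure"),
}
print(evaluate(models, dataset))  # model_a scores 1.0, model_b 0.5
```

Swapping exact match for a task-appropriate metric (F1, pass@k, LLM-as-judge) is the usual next step; the per-model loop stays the same.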

Managing and scaling ML workloads has never been more challenging. Data scientists need to collaborate while building, training, and iterating on thousands of AI experiments. On the flip side, ML engineers need distributed training, artifact management, and automated deployment for high-performance production systems.
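At the center of "thousands of AI experiments" sits an experiment-tracking store: log every run with its parameters and metrics, then query for the best one. The sketch below is an illustrative stdlib-only model of that core (the `Run`/`Registry` classes are hypothetical, not the API of any real tracking tool such as MLflow or Weights & Biases).

```python
from dataclasses import dataclass, field

@dataclass
class Run:
    """One training run: which experiment it belongs to, its config, its results."""
    experiment: str
    params: dict
    metrics: dict = field(default_factory=dict)

class Registry:
    """In-memory stand-in for an experiment-tracking backend."""
    def __init__(self):
        self.runs = []

    def log(self, run):
        self.runs.append(run)

    def best(self, experiment, metric):
        """Return the run with the highest value of `metric` in `experiment`."""
        candidates = [r for r in self.runs if r.experiment == experiment]
        return max(candidates, key=lambda r: r.metrics[metric])

reg = Registry()
reg.log(Run("slm-finetune", {"lr": 1e-4}, {"f1": 0.81}))
reg.log(Run("slm-finetune", {"lr": 3e-4}, {"f1": 0.86}))
best = reg.best("slm-finetune", "f1")
print(best.params)  # the higher-f1 run's config: {'lr': 0.0003}
```

Real platforms add exactly what the abstract lists on top of this core: artifact storage for model files, distributed-training orchestration, and automated promotion of the best run to deployment.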


