Most of the time, the real value of a Machine Learning model lies at the heart of a product, where it may be a small component of an automated mailer system or a chatbot.
The majority of ML practitioners use R / Python for their experiments, but the consumers of those ML models are software engineers who work with a completely different stack. To bridge this gap, we follow an API-first approach. In this session we'll create an API wrapper for our Machine Learning models and deploy it using Docker.
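As a preview of what such a wrapper looks like, here is a minimal sketch using Flask to expose a prediction endpoint. The choice of Flask, the `model.pkl` filename, and the request payload shape are all illustrative assumptions, not fixed requirements of the approach.

```python
# Minimal sketch of an API wrapper around a trained model.
# Assumes a scikit-learn-style model pickled to "model.pkl" (placeholder name).
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model once at startup, not per request.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON payload such as {"features": [[5.1, 3.5, 1.4, 0.2]]}.
    payload = request.get_json()
    prediction = model.predict(payload["features"])
    return jsonify({"prediction": prediction.tolist()})


if __name__ == "__main__":
    # Bind to 0.0.0.0 so the service is reachable from outside a container.
    app.run(host="0.0.0.0", port=5000)
```

With a wrapper like this, any consumer can call the model over HTTP regardless of the stack they use, and the same service can later be packaged into a Docker image for deployment.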
The topics covered in this session are: