The AI revolution is upon us, but amid the chaos a critical question is often overlooked: how do we maintain these sophisticated AI systems? That's where Machine Learning Operations (MLOps) comes into play. In this blog we will explore the importance of MLOps by building an end-to-end project with ZenML, an open-source MLOps framework.
MLOps empowers machine learning engineers to streamline the ML model lifecycle. Productionizing machine learning is difficult: the lifecycle consists of many complex components, such as data ingestion, data preparation, model training, model tuning, model deployment, model monitoring, and explainability. MLOps automates each step of the process through robust pipelines to reduce manual errors. It is a collaborative practice for running your AI infrastructure with minimal manual effort and maximum operational efficiency. Think of MLOps as DevOps for the AI industry, with a few extra spices.
ZenML is an open-source MLOps framework that simplifies the development, deployment, and management of machine learning workflows. By applying MLOps principles, it integrates seamlessly with various tools and infrastructure, offering a modular approach that keeps your entire AI workflow in a single workspace. ZenML provides features such as auto-logging, metadata tracking, model tracking, experiment tracking, an artifact store, and simple Python decorators that let you write core logic without complex configuration.
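To see what this modular approach feels like in practice, here is a minimal, self-contained sketch of how ZenML's decorators turn plain Python functions into tracked pipeline steps. The step names and toy logic are ours for illustration, not part of the project below:

# Minimal ZenML sketch: hypothetical toy steps, for illustration only
from zenml import step, pipeline

@step
def load_data() -> dict:
    """A toy step; a real project would ingest data from an actual source."""
    return {"features": [[1.0], [2.0], [3.0]], "labels": [1, 2, 3]}

@step
def train_model(data: dict) -> float:
    """A toy step standing in for real training; returns a dummy score."""
    return float(len(data["labels"]))

@pipeline
def minimal_pipeline():
    data = load_data()
    train_model(data)

if __name__ == "__main__":
    minimal_pipeline()  # Each run is logged and visible in the ZenML dashboard

Every run of such a pipeline is versioned, and its outputs are stored and tracked automatically in the artifact store.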
Now we will see how MLOps is implemented with the help of a simple yet production-grade end-to-end data science project. In this project we will create and deploy a machine learning model to predict the customer lifetime value (CLTV) of a customer. CLTV is a key metric companies use to estimate how much profit or loss a customer will generate over the long term. Using this metric, a company can decide whether to spend further on a customer through targeted ads and similar campaigns.
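Before we build anything, it helps to see how CLTV is commonly estimated. One widely used simplified formulation is CLTV = average order value × purchase frequency × customer lifespan. Our model will later learn CLTV from engineered features rather than this closed-form estimate; the numbers below are made up for illustration:

# A common simplified CLTV formulation, with illustrative numbers
avg_order_value = 50.0      # average spend per order, in dollars
purchase_frequency = 4.0    # orders per year
customer_lifespan = 3.0     # expected years of relationship

cltv = avg_order_value * purchase_frequency * customer_lifespan
print(f"Estimated CLTV: ${cltv:.2f}")  # Estimated CLTV: $600.00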
Let's start implementing the project in the next section.
Now let's get straight into the project configuration. First, download the Online Retail dataset from the UCI Machine Learning Repository. ZenML is not supported natively on Windows, so we need to use either Linux (WSL on Windows) or macOS. Next, download the requirements.txt file. Then head to the terminal for a few configurations.
# Make sure you have Python 3.10 or above installed
python --version
# Make a new Python environment using any method
python3.10 -m venv myenv
# Activate the environment
source myenv/bin/activate
# Install the requirements from the provided source above
pip install -r requirements.txt
# Install the ZenML server
pip install "zenml[server]==0.66.0"
# Initialize ZenML in the current project directory
zenml init
# Launch the ZenML dashboard
zenml up
Now simply log in to the ZenML dashboard with the default credentials (no password required).
Congratulations, you have successfully completed the project configuration.
Now it's time to get our hands dirty with the data. We will create a Jupyter notebook to analyse our data.
Pro tip: try doing your own analysis first, without following along.
Or you can simply follow along with this notebook, where we have created various data analysis methods to use in our project.
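If you want a starting point, a few standard first checks look like this (a sketch; the file path is an assumption, so adjust it to wherever you saved the dataset):

import pandas as pd

# Load the raw dataset (path assumed; adjust to your layout)
df = pd.read_excel("data/Online_Retail.xlsx")

# Basic structure: the UCI dataset has columns like InvoiceNo, StockCode,
# Description, Quantity, InvoiceDate, UnitPrice, CustomerID, and Country
print(df.shape)
print(df.dtypes)

# Missing values per column (CustomerID and Description contain nulls)
print(df.isnull().sum())

# Summary statistics for the numeric columns
print(df[["Quantity", "UnitPrice"]].describe())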
Now, assuming you have done your share of data analysis, let's jump straight to the spicy part.
To increase the modularity and reusability of our code, we use ZenML's @step decorator, which organizes our code so it can be passed into pipelines hassle-free, reducing the chance of errors.
In our src folder we will write the methods for each step before initializing them. We follow software design patterns for our methods by creating an abstract base class for the strategies of each method (data ingestion, data cleaning, feature engineering, etc.).
Here is a sample of the code for ingest_data.py:
import logging
import pandas as pd
from abc import ABC, abstractmethod

# Set up the logging configuration
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")


# Abstract Base Class for Data Ingestion Strategy
# ------------------------------------------------
# This class defines a common interface for different data ingestion strategies.
# Subclasses must implement the `ingest` method.
class DataIngestionStrategy(ABC):
    @abstractmethod
    def ingest(self, file_path: str) -> pd.DataFrame:
        """
        Abstract method to ingest data from a file into a DataFrame.

        Parameters:
        file_path (str): The path to the data file to ingest.

        Returns:
        pd.DataFrame: A dataframe containing the ingested data.
        """
        pass


# Concrete Strategy for XLSX File Ingestion
# -----------------------------------------
# This strategy handles the ingestion of data from an XLSX file.
class XLSXIngestion(DataIngestionStrategy):
    def __init__(self, sheet_name=0):
        """
        Initializes the XLSXIngestion with an optional sheet name.

        Parameters:
        sheet_name (str or int): The sheet name or index to read; defaults to the first sheet.
        """
        self.sheet_name = sheet_name

    def ingest(self, file_path: str) -> pd.DataFrame:
        """
        Ingests data from an XLSX file into a DataFrame.

        Parameters:
        file_path (str): The path to the XLSX file.

        Returns:
        pd.DataFrame: A dataframe containing the ingested data,
        or an empty dataframe if ingestion fails.
        """
        try:
            logging.info(f"Attempting to read XLSX file: {file_path}")
            df = pd.read_excel(
                file_path,
                dtype={"InvoiceNo": str, "StockCode": str, "Description": str},
                sheet_name=self.sheet_name,
            )
            logging.info(f"Successfully read XLSX file: {file_path}")
            return df
        except FileNotFoundError:
            logging.error(f"File not found: {file_path}")
        except pd.errors.EmptyDataError:
            logging.error(f"File is empty: {file_path}")
        except Exception as e:
            logging.error(f"An error occurred while reading the XLSX file: {e}")
        # Fall through to an empty DataFrame on any ingestion failure
        return pd.DataFrame()


# Context Class for Data Ingestion
# --------------------------------
# This class uses a DataIngestionStrategy to ingest data from a file.
class DataIngestor:
    def __init__(self, strategy: DataIngestionStrategy):
        """
        Initializes the DataIngestor with a specific data ingestion strategy.

        Parameters:
        strategy (DataIngestionStrategy): The strategy to be used for data ingestion.
        """
        self._strategy = strategy

    def set_strategy(self, strategy: DataIngestionStrategy):
        """
        Sets a new strategy for the DataIngestor.

        Parameters:
        strategy (DataIngestionStrategy): The new strategy to be used for data ingestion.
        """
        logging.info("Switching data ingestion strategy.")
        self._strategy = strategy

    def ingest_data(self, file_path: str) -> pd.DataFrame:
        """
        Executes the data ingestion using the current strategy.

        Parameters:
        file_path (str): The path to the data file to ingest.

        Returns:
        pd.DataFrame: A dataframe containing the ingested data.
        """
        logging.info("Ingesting data using the current strategy.")
        return self._strategy.ingest(file_path)


# Example usage
if __name__ == "__main__":
    # Example file path for an XLSX file
    # file_path = "../data/raw/your_data_file.xlsx"

    # XLSX ingestion example
    # xlsx_ingestor = DataIngestor(XLSXIngestion(sheet_name=0))
    # df = xlsx_ingestor.ingest_data(file_path)

    # Show the first few rows of the ingested DataFrame if successful
    # if not df.empty:
    #     logging.info("Displaying the first few rows of the ingested data:")
    #     print(df.head())
    pass
We will follow this pattern to create the rest of the methods, as the sketch below shows. You can copy the code from the linked GitHub repository.
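For instance, a missing-value handler can follow the same strategy pattern. This is only a sketch; the class names here are illustrative and may differ from the repository's actual code:

import logging
from abc import ABC, abstractmethod
import pandas as pd

# Abstract strategy: every missing-value handler implements `handle`
class MissingValueStrategy(ABC):
    @abstractmethod
    def handle(self, df: pd.DataFrame) -> pd.DataFrame:
        """Handle missing values in the given dataframe."""
        pass

# Concrete strategy: drop rows containing missing values
class DropMissingValues(MissingValueStrategy):
    def handle(self, df: pd.DataFrame) -> pd.DataFrame:
        logging.info("Dropping rows with missing values.")
        return df.dropna()

# Concrete strategy: fill missing values with a constant
class FillMissingValues(MissingValueStrategy):
    def __init__(self, fill_value=0):
        self.fill_value = fill_value

    def handle(self, df: pd.DataFrame) -> pd.DataFrame:
        logging.info(f"Filling missing values with {self.fill_value}.")
        return df.fillna(self.fill_value)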
After writing all the methods, it's time to initialize the ZenML steps in our steps folder. All the methods we have created so far will now be used inside the corresponding ZenML steps.
Here is a sample of data_ingestion_step.py:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(__file__)))

import pandas as pd
from src.ingest_data import DataIngestor, XLSXIngestion
from zenml import step


@step
def data_ingestion_step(file_path: str) -> pd.DataFrame:
    """
    Ingests data from an XLSX file into a DataFrame.

    Parameters:
    file_path (str): The path to the XLSX file.

    Returns:
    pd.DataFrame: A dataframe containing the ingested data.
    """
    # Initialize the DataIngestor with an XLSXIngestion strategy
    ingestor = DataIngestor(XLSXIngestion())
    # Ingest data from the specified file
    df = ingestor.ingest_data(file_path)
    return df
We will follow the same pattern to create the rest of the ZenML steps in our project, as the sketch below illustrates. You can copy them from here.
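As an example of the pattern, here is roughly how the data splitting step could look. This is a sketch; the repository's actual implementation may differ in details such as the split ratio:

from typing import Tuple
import pandas as pd
from sklearn.model_selection import train_test_split
from zenml import step


@step
def data_splitting_step(
    df: pd.DataFrame, target_column: str
) -> Tuple[pd.DataFrame, pd.DataFrame, pd.Series, pd.Series]:
    """Splits the dataframe into train/test features and targets."""
    features = df.drop(columns=[target_column])
    target = df[target_column]
    X_train, X_test, y_train, y_test = train_test_split(
        features, target, test_size=0.2, random_state=42
    )
    return X_train, X_test, y_train, y_test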
Wow! Congratulations on creating and learning one of the most important parts of MLOps. It's okay to feel a little overwhelmed since it's your first time. Don't stress too much; everything will make sense when you run your first production-grade ML model.
It's time to build our pipelines. No, not for carrying water or oil. A pipeline is a series of steps organized in a specific order to form our complete machine learning workflow. The @pipeline decorator is used in ZenML to define a pipeline containing the steps we created above. This approach ensures that the output of one step can be used as the input of the next.
Here is our training_pipeline.py:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(__file__)))

from steps.data_ingestion_step import data_ingestion_step
from steps.handling_missing_values_step import handling_missing_values_step
from steps.dropping_columns_step import dropping_columns_step
from steps.detecting_outliers_step import detecting_outliers_step
from steps.feature_engineering_step import feature_engineering_step
from steps.data_splitting_step import data_splitting_step
from steps.model_building_step import model_building_step
from steps.model_evaluating_step import model_evaluating_step
from steps.data_resampling_step import data_resampling_step
from zenml import Model, pipeline


@pipeline(model=Model(name="CLTV_Prediction"))
def training_pipeline():
    """
    Defines the complete training pipeline for CLTV prediction.

    Steps:
    1. Data ingestion
    2. Dropping unnecessary columns
    3. Detecting and handling outliers
    4. Feature engineering
    5. Handling missing values
    6. Splitting data into train and test sets
    7. Resampling the training data
    8. Model training
    9. Model evaluation
    """
    # Step 1: Data ingestion
    raw_data = data_ingestion_step(file_path="data/Online_Retail.xlsx")

    # Step 2: Drop unnecessary columns
    columns_to_drop = ["Country", "Description", "InvoiceNo", "StockCode"]
    refined_data = dropping_columns_step(raw_data, columns_to_drop)

    # Step 3: Detect and handle outliers
    outlier_free_data = detecting_outliers_step(refined_data)

    # Step 4: Feature engineering
    features_data = feature_engineering_step(outlier_free_data)

    # Step 5: Handle missing values
    cleaned_data = handling_missing_values_step(features_data)

    # Step 6: Data splitting
    train_features, test_features, train_target, test_target = data_splitting_step(cleaned_data, "CLTV")

    # Step 7: Data resampling
    train_features_resampled, train_target_resampled = data_resampling_step(train_features, train_target)

    # Step 8: Model training
    trained_model = model_building_step(train_features_resampled, train_target_resampled)

    # Step 9: Model evaluation
    evaluation_metrics = model_evaluating_step(trained_model, test_features, test_target)

    # Return evaluation metrics
    return evaluation_metrics


if __name__ == "__main__":
    # Run the pipeline
    training_pipeline()
Now we can run training_pipeline.py to train our ML model with a single command.
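Assuming the pipeline script lives in a pipelines/ folder (a path assumption; adjust to your project layout):

# Run the training pipeline (path assumed)
python pipelines/training_pipeline.py

You can check the pipeline run in your ZenML dashboard: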
We can check our model's details, train multiple models, and compare them in the MLflow dashboard by running the following command in the terminal:
mlflow ui
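Note that this assumes the MLflow integration, an MLflow experiment tracker, and (for the deployment pipeline below) an MLflow model deployer are registered in your active ZenML stack. If you have not set that up yet, a typical registration looks like this; the stack and component names are our own choices:

# Install ZenML's MLflow integration
zenml integration install mlflow -y
# Register an MLflow experiment tracker and model deployer
zenml experiment-tracker register mlflow_tracker --flavor=mlflow
zenml model-deployer register mlflow_deployer --flavor=mlflow
# Register a stack that uses them and make it active
zenml stack register mlflow_stack -a default -o default -e mlflow_tracker -d mlflow_deployer --set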
Next we will create deployment_pipeline.py:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(__file__)))

from zenml import pipeline
from zenml.client import Client
from zenml.integrations.mlflow.steps import mlflow_model_deployer_step
from steps.model_deployer_step import model_fetcher


@pipeline
def deploy_pipeline():
    """Deployment pipeline that fetches the latest model from MLflow and deploys it."""
    model_uri = model_fetcher()
    deploy_model = mlflow_model_deployer_step(
        model_name="CLTV_Prediction",
        model=model_uri,
    )


if __name__ == "__main__":
    # Run the pipeline
    deploy_pipeline()
When we run the deployment pipeline, we get a view like this in our ZenML dashboard:
Congratulations! You have deployed the best model using MLflow and ZenML on your local instance.
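Once the deployment service is running, you can send it a quick test request. Below is a minimal sketch, assuming a local MLflow prediction endpoint on port 8000 (copy the actual prediction URL from the deployer's output or the ZenML dashboard) and the seven features our pipeline engineers; the sample values are made up:

import requests

url = "http://127.0.0.1:8000/invocations"  # assumed local endpoint; yours may differ
payload = {
    "dataframe_split": {
        "columns": [
            "frequency", "total_amount", "avg_order_value",
            "recency", "customer_age", "lifetime", "purchase_frequency",
        ],
        "data": [[5, 250.0, 50.0, 30, 365, 200, 0.5]],
    }
}
response = requests.post(url, json=payload)
print(response.json())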
Our next step is to create a Flask app that serves our model to the end user. For that we have to create an app.py and an index.html inside the templates folder. Follow the code below to create app.py:
"""
This module implements a Flask web application for predicting Customer Lifetime Value (CLTV)
using a pre-trained model.

Routes:
/: Renders the home page of the customer lifecycle management application.
/predict: Handles POST requests to predict customer lifetime value (CLTV).

Functions:
home(): Renders the home page of the application.
predict(): Collects input data from an HTML form, processes it, and uses a pre-trained model
to predict the CLTV. The prediction result is then rendered back on the webpage.

Attributes:
app (Flask): The Flask application instance.
model: The pre-trained model loaded from a pickle file.

Exceptions:
If there is an error loading the model or during prediction, an error message is printed
or returned as a JSON response.
"""
from flask import Flask, request, render_template, jsonify
import pickle

app = Flask(__name__)

# Load the pickled model
try:
    with open("models/xgbregressor_cltv_model.pkl", "rb") as file:
        model = pickle.load(file)
except Exception as e:
    # Note: if loading fails, `model` stays undefined and /predict will error
    print(f"Error loading model: {e}")


@app.route("/")
def home():
    """
    Renders the home page of the customer lifecycle management application.

    Returns:
    Response: A Flask response object that renders the "index.html" template.
    """
    return render_template("index.html")


# Handle POST requests to the /predict endpoint to predict customer lifetime value (CLTV)
@app.route("/predict", methods=["POST"])
def predict():
    """
    Collects input data from an HTML form, processes it, and uses a pre-trained model
    to predict the CLTV. The prediction result is then rendered back on the webpage.

    Form Data:
    frequency (float): The frequency of purchases.
    total_amount (float): The total amount spent by the customer.
    avg_order_value (float): The average value of an order.
    recency (int): The number of days since the last purchase.
    customer_age (int): The age of the customer.
    lifetime (int): The time difference between the first and last purchase.
    purchase_frequency (float): The frequency of purchases over the customer's lifetime.

    Returns:
    Response: A rendered HTML template with the prediction result if successful.
    Response: A JSON object with an error message and a 500 status code if an exception occurs.
    """
    try:
        # Collect input data from the form
        input_data = [
            float(request.form["frequency"]),
            float(request.form["total_amount"]),
            float(request.form["avg_order_value"]),
            int(request.form["recency"]),
            int(request.form["customer_age"]),
            int(request.form["lifetime"]),
            float(request.form["purchase_frequency"]),
        ]

        # Make a prediction using the loaded model
        predicted_cltv = model.predict([input_data])[0]

        # Render the result back on the webpage
        return render_template("index.html", prediction=predicted_cltv)
    except Exception as e:
        # If any error occurs, return the error message
        return jsonify({"error": str(e)}), 500


if __name__ == "__main__":
    app.run(debug=True)
To create the index.html file, use the following code:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>CLTV Prediction</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            margin: 20px;
            background-color: #f9f9f9;
        }
        h1 {
            text-align: center;
        }
        form {
            max-width: 600px;
            margin: 0 auto;
            background-color: #fff;
            padding: 20px;
            border-radius: 10px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
        }
        label {
            font-weight: bold;
            margin-bottom: 8px;
            display: block;
        }
        input[type="number"] {
            width: 100%;
            padding: 10px;
            margin-bottom: 15px;
            border-radius: 5px;
            border: 1px solid #ddd;
        }
        button {
            width: 100%;
            padding: 10px;
            background-color: #4CAF50;
            color: white;
            border: none;
            border-radius: 5px;
            font-size: 16px;
        }
        button:hover {
            background-color: #45a049;
        }
        .prediction {
            margin-top: 20px;
            font-size: 18px;
            text-align: center;
            font-weight: bold;
            color: #333;
        }
    </style>
</head>
<body>
    <h1>Enter Customer Data for CLTV Prediction</h1>
    <form action="/predict" method="post">
        <label for="frequency">Total No. of Orders Till Date:</label>
        <input type="number" id="frequency" name="frequency" required><br>

        <label for="total_amount">Total Amount From Orders ($):</label>
        <input type="number" step="0.01" id="total_amount" name="total_amount" required><br>

        <label for="avg_order_value">Avg Value of Orders ($):</label>
        <input type="number" step="0.01" id="avg_order_value" name="avg_order_value" required><br>

        <label for="recency">No. of Days Since the Customer's Most Recent Purchase:</label>
        <input type="number" id="recency" name="recency" required><br>

        <label for="customer_age">No. of Days the Customer Has Been Associated With Your Company:</label>
        <input type="number" id="customer_age" name="customer_age" required><br>

        <label for="lifetime">No. of Days Between the Customer's First and Last Purchase:</label>
        <input type="number" id="lifetime" name="lifetime" required><br>

        <label for="purchase_frequency">Weekly Avg Purchase Frequency:</label>
        <input type="number" step="0.01" id="purchase_frequency" name="purchase_frequency" required><br>

        <button type="submit">Predict CLTV</button>
    </form>

    {% if prediction %}
    <div class="prediction">
        <h2>Predicted CLTV: {{ prediction }}</h2>
    </div>
    {% endif %}
</body>
</html>
To run the app locally, execute app.py; Flask's development server listens on port 5000 by default.
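# Start the Flask app
python app.py
# Then open http://127.0.0.1:5000 in your browser

Your app should look like this once it is running: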
Now the last step is to commit these changes to your GitHub repository and deploy the model online on a cloud server. For this project we will deploy app.py on a free Render server, and you can do so too.
Go to Render.com and connect your project's GitHub repository to Render.
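Render needs a start command for a Python web service. A common choice for Flask apps is gunicorn (add it to your requirements.txt if it is not there already); assuming your entrypoint is app.py with the Flask instance named app, the start command would be:

gunicorn app:app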
That’s it. You have successfully created your first MLOps project. Hope you enjoyed it!
MLOps has become an indispensable practice in managing the complexities of machine learning workflows, from data ingestion to model deployment. By leveraging ZenML, an open-source MLOps framework, we streamlined the process of building, training, and deploying a production-grade ML model for Customer Lifetime Value (CLTV) prediction. Through modular coding, robust pipelines, and seamless integrations, we demonstrated how to create an end-to-end project efficiently. As businesses increasingly rely on AI-driven solutions, frameworks like ZenML empower teams to maintain scalability, reproducibility, and performance with minimal manual intervention.
Q1. What is MLOps?
A. MLOps (Machine Learning Operations) streamlines the ML lifecycle by automating processes like data ingestion, model training, deployment, and monitoring, ensuring efficiency and scalability.

Q2. What is ZenML?
A. ZenML is an open-source MLOps framework that simplifies the development, deployment, and management of machine learning workflows with modular and reusable code.

Q3. Does ZenML work on Windows?
A. ZenML is not directly supported on Windows but can be used with WSL (Windows Subsystem for Linux).

Q4. What are pipelines in ZenML?
A. Pipelines in ZenML define a sequence of steps, ensuring a structured and reusable workflow for machine learning projects.

Q5. What is the purpose of the Flask app in this project?
A. The Flask app serves as a user interface, allowing end-users to input data and receive predictions from the deployed ML model.