How to Deploy a Machine Learning Model using Flask?

shikha.sen · Last Updated: 19 Mar, 2024
14 min read

Introduction

Deploying machine learning models with Flask offers a seamless way to integrate predictive capabilities into web applications. Flask, a lightweight web framework for Python, provides a simple yet powerful environment for serving machine learning models. In this article, we explore the process of deploying machine learning models using Flask, enabling developers to leverage the full potential of their predictive algorithms in real-world applications.

What is Model Deployment and Why is it Important?

Model deployment in machine learning integrates a model into an existing production environment, enabling it to process inputs and generate outputs. This step is crucial for broadening the model’s reach to a wider audience. For instance, if you’ve developed a sentiment analysis model, deploying it on a server allows users worldwide to access its predictions. Transitioning from a prototype to a fully functional application makes machine learning models valuable to end-users and systems.


The importance of deploying machine learning models cannot be overstated!

While accurate model building and training are vital, their true worth lies in real-world application. Deployment facilitates this by applying models to new, unseen data, bridging the gap between historical performance and real-world adaptability. It ensures that the efforts put into data collection, model development, and training translate into tangible benefits for businesses, organizations, or the public.

What are the Lifecycle Stages?

  • Develop Model: Start by developing and training your machine learning model. This includes data pre-processing, feature engineering, model selection, training, and evaluation.
  • Flask App Development (API Creation): Create a Flask application that will serve as the interface to your machine learning model. This involves setting up routes that will handle requests and responses.
  • Test & Debugging (Localhost): Test the Flask application on your local development environment. Debug any issues that may arise.
  • Integrate Model with Flask App: Incorporate your trained machine learning model into the Flask application. This typically involves loading the model and making predictions based on input data received through the Flask endpoints.
  • Flask App Testing & Optimization: Further test the Flask application to ensure it works as expected with the integrated model. Optimize performance as needed.
  • Deploy to Production: Once testing and optimization are complete, deploy the Flask application to a production environment. This could be on cloud platforms like Heroku, AWS, or GCP.
+-----------------+       +------------------+       +-------------------+
|                 |       |                  |       |                   |
|  Develop Model  +------>+  Flask App Dev   +------>+  Test & Debugging |
|                 |       | (API Creation)   |       | (Localhost)       |
+--------+--------+       +---------+--------+       +---------+---------+
         |                          |                          |
         |                          |                          |
         |                          |                          |
+--------v--------+       +---------v--------+       +---------v---------+
|                 |       |                  |       |                   |
| Model Training  |       | Integrate Model  |       | Flask App Testing |
| & Evaluation    |       | with Flask App   |       | & Optimization    |
+--------+--------+       +---------+--------+       +---------+---------+
         |                          |                          |
         |                          |                          |
         |                          |                          |
+--------v--------+       +---------v--------+       +---------v---------+
|                 |       |                  |       |                   |
| Model Selection |       | Flask App        |       | Deploy to         |
| & Optimization  |       | Finalization     |       | Production        |
|                 |       |                  |       | (e.g., Heroku,    |
+-----------------+       +------------------+       | AWS, GCP)         |
                                                     |                   |
                                                     +-------------------+

What are the Platforms to Deploy ML Models?

There are many platforms available for deploying machine learning models. Some examples are given below:

  • Django: A Python-based framework that offers many built-in features, making it suitable for larger applications with complex requirements.
  • FastAPI: A modern, fast (high-performance) web framework for building APIs with Python 3.6+ based on standard Python type hints. It’s gaining popularity for its speed and ease of use, especially for deploying machine learning models.
  • TensorFlow Serving: Specifically designed for deploying TensorFlow models, this platform provides a flexible, high-performance serving system for machine learning models, designed for production environments.
  • AWS SageMaker: A fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. SageMaker handles much of the underlying infrastructure and provides scalable model deployment.
  • Azure Machine Learning: A cloud service for accelerating and managing the ML project lifecycle, including model deployment to production environments.

In this article, we are going to use Flask to deploy a machine learning model.

What is Flask?

Flask, a lightweight WSGI web application framework in Python, has become a popular choice for deploying machine learning models. Its simplicity and flexibility make it an attractive option for data scientists and developers alike. Flask allows for quick setup of web servers to create APIs through which applications can communicate with the deployed models.

This means that Flask can serve as the intermediary, receiving data from users, passing it to the model for prediction, and then sending the response back to the user. Its minimalist design is particularly suited for ML deployments where the focus is on making a model accessible without the overhead of more complex frameworks. Moreover, Flask’s extensive documentation and supportive community further ease the deployment process.
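To give a sense of how little code a Flask server requires, here is a minimal hello-world sketch (illustrative only, not part of the sentiment project):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello, Flask!'

if __name__ == '__main__':
    app.run(debug=True)

Running this script and visiting http://127.0.0.1:5000/ in a browser returns the greeting; the same routing mechanism is what later carries data to and from a deployed model.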

Which Platform to Use to Deploy ML Models?

The choice among different platforms should be based on the specific needs of your project, including how complex your application is, your preferred programming language, scalability needs, budget constraints, and whether you prefer a cloud-based or on-premise solution.

For beginners or small projects, starting with Flask or FastAPI can be a good choice due to their simplicity and ease of use.

For larger, enterprise-level deployments, considering a managed service like AWS SageMaker, Azure Machine Learning, or Google AI Platform can provide more robust infrastructure and scalability options.

Flask often stands out for its simplicity and flexibility, making it an excellent choice for small to medium-sized projects or as a starting point for developers new to deploying machine learning models.

ML Model: Predict the Sentiment of the Texts/Tweets

Before deploying, we first need to build a machine learning model. The model we are building aims to predict the sentiment of texts/tweets.

Preparing a sentiment analysis model is an important step before deployment, involving several stages from data collection to model training and evaluation. This process lays the foundation for the model’s performance once deployed using Flask or any other framework. Understanding this workflow is essential for anyone looking to deploy their machine learning models effectively.

Steps to Deploy a Machine Learning Model using Flask

The following steps are required: 

Step 1: Data Collection and Preparation

The first step in developing a sentiment analysis model is gathering a suitable dataset. The dataset should consist of text data labeled with sentiments, typically positive, negative, or neutral. Once collected, the text must be cleaned: this includes removing unnecessary characters, tokenization, and possibly lemmatization or stemming to reduce words to their base or root form. This cleaning process ensures that the model learns from relevant features. We are using tweet data that is publicly available online.

For this tweet/text classification problem, we are using a dataset that contains 7,920 rows: a tweet column, which contains the tweet text, and a label column with values 0 and 1, where 0 stands for negative sentiment and 1 stands for positive.

import re
import string

from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

def preprocess_text(text):
    # Convert text to lowercase
    text = text.lower()
   
    # Remove numbers and punctuation
    text = re.sub(r'\d+', '', text)
    text = text.translate(str.maketrans('', '', string.punctuation))
   
    # Tokenize text
    tokens = word_tokenize(text)
   
    # Remove stopwords
    stop_words = set(stopwords.words('english'))
    tokens = [word for word in tokens if word not in stop_words]
   
    # Lemmatize words
    lemmatizer = WordNetLemmatizer()
    tokens = [lemmatizer.lemmatize(word) for word in tokens]
   
    # Join tokens back into a string
    processed_text = ' '.join(tokens)
   
    return processed_text
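As a quick sanity check, the function can be exercised as follows; this sketch assumes the NLTK resources used above have been downloaded once, and the sample tweet is invented:

import nltk

# One-time downloads of the resources preprocess_text relies on
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

print(preprocess_text("I LOVED the new update!!! 10/10"))
# expected output along the lines of: "loved new update"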

Step 2: Feature Extraction

After preprocessing, the next step is feature extraction, which transforms text into a format that a machine learning model can understand. Traditional methods like Bag of Words (BoW) or Term Frequency-Inverse Document Frequency (TF-IDF) are commonly used.

These techniques convert text into numerical vectors by counting word occurrences or weighing the words based on their importance in the dataset, respectively. For our model, we are only taking one input feature and corresponding output feature.
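To make the two options concrete, here is a small sketch using scikit-learn; the two-document corpus is invented purely for illustration:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["great product love it", "terrible product never again"]

# Bag of Words: each column holds a raw term count
bow_matrix = CountVectorizer().fit_transform(corpus)

# TF-IDF: counts re-weighted by how informative each term is across the corpus
tfidf_matrix = TfidfVectorizer().fit_transform(corpus)

print(bow_matrix.shape, tfidf_matrix.shape)  # both (2, vocabulary_size) sparse matrices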

Model Architecture

The choice of model architecture depends on the complexity of the task and the available computational resources. For simpler projects, traditional machine learning models like Naive Bayes, Logistic Regression, or Support Vector Machines (SVMs) might suffice. These models, while straightforward, can achieve impressive results on well-preprocessed data. For more advanced sentiment analysis tasks, deep learning models such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), including LSTMs, are preferred due to their ability to understand context and sequence in text data. We are using logistic regression for this sentiment classification problem.

Training the Model

Model training involves feeding the preprocessed and vectorized text data into the chosen model architecture. This step is iterative, with the model learning to associate specific features (words or phrases) with particular sentiments. During training, it’s crucial to split the dataset into training and validation sets to monitor the model’s performance and avoid overfitting.

After preprocessing, the data is ready for training; we then apply TF-IDF for vectorization, as sketched below.
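A minimal training sketch, assuming the tweets live in a CSV file with tweet and label columns (the file name is hypothetical) and reusing the preprocess_text function from Step 1:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

df = pd.read_csv('tweets.csv')                    # hypothetical file name
df['tweet'] = df['tweet'].apply(preprocess_text)  # clean the raw text

# Hold out a validation set to monitor performance and catch overfitting
X_train, X_val, y_train, y_val = train_test_split(
    df['tweet'], df['label'], test_size=0.2, random_state=42, stratify=df['label'])

# Bundling vectorization and classification in one Pipeline means the same
# TF-IDF transform runs automatically at prediction time
pipeline = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('clf', LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

Bundling both steps in a single pipeline is also what later lets the Flask app call model.predict directly on a preprocessed string.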


Evaluation Metrics

The performance of the sentiment analysis model is evaluated using metrics such as accuracy, precision, recall, and F1 score. Accuracy measures the overall correctness of the model across all sentiment classes, while precision and recall focus on the model’s performance in identifying positive cases. 
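Continuing the training sketch above, the held-out validation set can be scored with scikit-learn's metric helpers:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_pred = pipeline.predict(X_val)
print('Accuracy :', accuracy_score(y_val, y_pred))
print('Precision:', precision_score(y_val, y_pred))
print('Recall   :', recall_score(y_val, y_pred))
print('F1 score :', f1_score(y_val, y_pred))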


Saving the Model for Deployment

Once the model is trained and evaluated, it's important to save it for deployment. In Python, libraries like pickle or joblib can serialize the model and save it to a file. This file can then be loaded into a Flask application to make predictions. Saving the model should cover not just the architecture and learned weights but also the preprocessing and feature extraction steps, ensuring that input data can be handled appropriately during deployment.

We can simply perform all the steps in a notebook, as we are doing here. You can experiment with your code to further improve performance, and perform any number of operations before dumping your model and putting it on a server.

import pickle

# Assuming `pipeline` is your trained pipeline that includes
# the TF-IDF vectorizer and logistic regression model
with open('models/LRmodel.pkl', 'wb') as file:
    pickle.dump(pipeline, file)

The repository containing all the files and folders is linked below. Make sure to look at the master branch, not the main branch, as the project lives on the master branch. You can clone the project, run it, and make changes as per your requirements.

To clone the model, you can use the command:

git clone -b master --single-branch https://github.com/Geek-shikha/Flask_model_sentiment_analysis.git

Step 3: Building the Flask Application

After saving the final model in a pickle file, we can start building the Flask application.

Here’s a step-by-step guide to deploying your sentiment analysis model with Flask.

Create Flask Application: Organize the project directory. The project directory might look like this:

project/
├── __init__.py
├── templates/
│   └── index.html
├── static/
│   ├── css/
│   │   └── style.css
│   └── js/
├── venv/
├── sentiment.ipynb
├── requirements.txt
├── preprocess.py
├── models/
│   └── LRmodel.pickle
└── app.py

Create a Folder with a Suitable Project Name

The first step is to create a folder with a name that suits your project, or you can simply name the folder “project”; here we are using “sentiment_analysis”. As you can see from the directory diagram above, there are multiple files and folders.

Directory Structure for a Flask Machine Learning Model Project

The explanation of the directory structure for a Flask machine learning model project:

Folders in the directory:

templates/: This folder contains HTML files that the Flask app will render and serve to the client. Flask uses Jinja2 templating engine for rendering templates.

css/: Contains CSS files that define the styling of the web application.

  • style.css: This specific stylesheet file contains custom styles to make the web app visually appealing.

venv/: A directory for the virtual environment where Flask and other Python dependencies are installed. Keeping a virtual environment is best practice for managing project-specific dependencies separately from the global Python environment.

static/: This directory stores static files like CSS, JavaScript, and images. Flask serves these files to be used by the HTML templates.

models/: A directory for storing machine learning model files.

Files in the Directory:

__init__.py: Indicates that the app directory is a Python package. The __init__.py file can be empty or contain initialization code if needed. In this case, since there are no specific initialization requirements, the file is left empty.

index.html: This HTML file is the main page of the web application. It contains the user interface where users input data for sentiment analysis, and where results are displayed.

sentiment.ipynb: This file is a Jupyter Notebook named sentiment.ipynb. It contains the code for training and evaluating the sentiment analysis model using machine learning. Jupyter Notebooks are often used for exploratory data analysis and prototyping machine learning models. It’s useful for development and documentation but not directly involved in the Flask application.

preprocess.py: This Python script contains functions for preprocessing input data before it is fed into the logistic regression model for sentiment analysis. This includes cleaning text, removing stopwords, lemmatizing, and so on.

LRmodel.pickle: A pickled file containing the trained logistic regression model. Pickling is a way to serialize and save a Python object to disk, allowing you to load the model in your Flask application for making predictions.

app.py: The main Python script for the Flask application. It initializes the Flask app and defines routes for handling web requests. It likely includes routes for rendering the HTML template, receiving input from users, preprocessing that input with preprocess.py, loading the logistic regression model from LRmodel.pickle, making predictions, and then sending the results back to the client.

Creating a Virtual Environment

Now that we understand the project directory, let's look at why we should create a virtual environment.

After creating your project folder, the foremost step is to ensure Python is installed on the system, and then to create a virtual environment to manage dependencies. Creating a virtual environment is essential for maintaining a clean and reproducible development environment, ensuring project stability, and facilitating collaboration and deployment. It's considered a best practice in Python development.

Why Create a Virtual Environment?

Virtual environments allow you to isolate your project dependencies from other projects and the system-wide Python installation. This ensures that your project can run with its specific set of dependencies(or libraries) without interfering with other projects or the system environment.

It also helps manage dependencies for your project. You can install specific versions of libraries/packages required for your project without affecting other projects or the global Python environment. This ensures that your project remains stable and reproducible across different environments. For instance, if you have two projects on your local system that require different versions of TensorFlow, creating two separate environments allows you to keep a separate version for each and avoid conflicts.

  • Version Control: Including the virtual environment directory in your version control system (e.g., Git) allows you to share your project with others while ensuring that they can easily replicate your development environment. This makes collaboration easier and helps avoid version conflicts between dependencies. When you clone this project, you get the virtual environment as well. This virtual environment was created on Linux; when cloning the project and running it locally, recreate or adjust the virtual environment as appropriate for your operating system.
  • Sandboxing: Virtual environments act as sandboxes where you can safely experiment with different versions of Python and libraries without affecting other projects or the system environment. This is particularly useful when testing new libraries or upgrading existing ones.

Virtual environments also make your project more portable, since they encapsulate all the dependencies needed to run it. You can easily transfer your project to another machine or deploy it to a server without worrying about compatibility issues.

You can simply use VS Code: navigate to the folder, open a terminal, and run the command python -m venv {nameofvenv}; you will see one folder added to your project directory. You can also target a specific version of Python.
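For example, on Linux/macOS the typical commands look like this (the environment name and package list are illustrative):

python -m venv venv                  # creates the venv/ folder
source venv/bin/activate             # on Windows: venv\Scripts\activate
pip install flask scikit-learn nltk  # project-specific dependencies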

Now let's look at the main file, app.py, line by line.
from flask import Flask, render_template, request
import pickle
from preprocess import preprocess_text

These lines import necessary modules and functions:

  • Flask: To create an instance of the Flask application.
  • render_template: To render HTML templates.
  • request: To handle requests to the server.
  • pickle: To load the pre-trained logistic regression model from a file.
  • preprocess_text: A custom function defined in preprocess.py, used to preprocess the input text (tweet) before feeding it to the model.
app = Flask(__name__)
  • This line creates a Flask application object which will be used to handle requests and responses.
with open('models/LRmodel.pkl', 'rb') as f:
    model = pickle.load(f)

@app.route('/', methods=['GET', 'POST'])
def index():
    sentiment = None

This line is a decorator that defines a route for the root URL ‘/’. The route handles both GET and POST requests.

It also defines a function named index() that will be executed when the route is accessed. The sentiment variable is initialized to None; it will hold the sentiment prediction result.

GET and POST Requests

In web development, HTTP (Hypertext Transfer Protocol) defines a set of request methods that indicate the desired action to be performed for a given resource. Two common request methods are GET and POST, which serve different purposes:

GET 
  • The GET method is used to request data from a specified resource.
  • Parameters are sent in the URL’s query string.
  • GET requests can be bookmarked, cached, and shared, as they are visible in the browser’s address bar.
  • GET requests are idempotent, meaning making the same request multiple times will produce the same result.
POST
  • The POST method is used to submit data to be processed to a specified resource.
  • Parameters are sent in the request body and are not visible in the URL.
  • POST requests are not bookmarked or cached by default, making them more secure for sending sensitive data.
  • POST requests are not idempotent, meaning making the same request multiple times may produce different results, especially if the request results in changes on the server side.
Summary

GET requests are used for retrieving data, while POST requests are used for submitting data to be processed. GET requests are suitable for retrieving information from the server, such as fetching web pages or API data, while POST requests are used for actions that modify server state, such as submitting form data or uploading files.
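For illustration, here is a sketch of how a client could issue each kind of request with the requests library; the URL assumes the Flask development server is running locally on its default port, and the GET parameters are there purely to show where they travel:

import requests

# GET: parameters travel in the URL's query string
r = requests.get('http://127.0.0.1:5000/', params={'q': 'hello'})

# POST: parameters travel in the request body, matching the form field used below
r = requests.post('http://127.0.0.1:5000/', data={'tweet_text': 'I love this!'})
print(r.status_code)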

Implementation of POST Request
    if request.method == 'POST':
        # Get the tweet text from the form
        tweet_text = request.form['tweet_text']
        # Preprocess the tweet text
        processed_text = preprocess_text(tweet_text)
        # Make a prediction using the loaded model
        prediction = model.predict([processed_text])[0]
        # Determine the sentiment based on the prediction
        sentiment = 'Positive' if prediction == 1 else 'Negative'

The check request.method == 'POST' determines whether the current request is a POST request, indicating that the user has submitted data through the form.

request.form['tweet_text'] retrieves the text the user entered in the form (identified by the name 'tweet_text' in the HTML form).

    return render_template('index.html', sentiment=sentiment)

Renders the index.html template, passing the sentiment variable to it, which can then be displayed to the user.
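The article does not reproduce index.html, but a minimal sketch of the template could look like the following; the form field is named tweet_text to match the route above, and the stylesheet link assumes the static/css/style.css file from the project directory:

<!DOCTYPE html>
<html>
<head>
    <title>Sentiment Analysis</title>
    <link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}">
</head>
<body>
    <h1>Tweet Sentiment Predictor</h1>
    <form method="POST" action="/">
        <textarea name="tweet_text" placeholder="Enter a tweet..."></textarea>
        <button type="submit">Predict</button>
    </form>
    {% if sentiment %}
        <p>Predicted sentiment: {{ sentiment }}</p>
    {% endif %}
</body>
</html>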

if __name__ == '__main__':
    app.run(debug=True)

This block of code runs the Flask application when the script is executed directly (__name__ == ‘__main__’). The debug=True argument enables debug mode, which provides helpful error messages in the browser during development.

If the debug argument is set to False when running the Flask application, debug mode will be disabled.

Overall, setting debug=False is recommended for deploying Flask applications to production environments to ensure better security, performance, and error handling. However, during development, it’s often beneficial to set debug=True to take advantage of features like detailed error messages and automatic code reloading.

After setting up all the files and folders, you just need to open a terminal, activate your virtual environment, and run the command python app.py. You will see an output like the picture below, where you can provide the input and the result will be displayed beneath it.

[Image: the Flask app running locally, with a text input field and the predicted sentiment displayed below it]

After successfully creating and running your Flask project on your local system, the next step is to deploy it to a server so it’s accessible over the internet. Deploying a Flask application involves several steps, from preparing your application for deployment to choosing a deployment platform and finally going live.

Overview of the Process

Prepare Your Flask Application:

  • Ensure you properly structure your Flask application with all necessary files and folders.
  • List all required dependencies in a requirements.txt file to ensure completeness (a minimal example is sketched below).
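A minimal requirements.txt for this project might contain entries like the following; the exact packages depend on your environment and are illustrative here:

flask
scikit-learn
nltk
gunicorn

You can generate the file from an activated virtual environment with pip freeze > requirements.txt.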

Set up a Server:

  • Choose a server provider (e.g., AWS, DigitalOcean, Heroku, etc.).
  • Set up a server instance (virtual machine or container) with the necessary resources (CPU, memory, storage, etc.).
  • Configure the server’s operating system (install updates, set up firewall rules, etc.).

Install Python and Dependencies:

  • Install Python on the server if it’s not already installed.
  • Create a virtual environment for your Flask application.
  • Activate the virtual environment and install dependencies from the requirements.txt file using pip.

Deploy Your Flask Application:

  • Transfer your Flask application files to the server (e.g., using SCP, FTP, Git, etc.).
  • Make sure your Flask application file (e.g., app.py) and its supporting files are in place on the server.
  • Run your Flask application using a WSGI server like Gunicorn or uWSGI (see the example command after this list). You can do this manually or set up a process manager like Supervisor to manage the application process.
  • Configure the WSGI server to serve your Flask application on a specific port (usually port 80 for HTTP or port 443 for HTTPS).
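For instance, assuming the Flask instance is named app inside app.py (as in this project), a typical Gunicorn invocation looks like:

gunicorn --bind 0.0.0.0:8000 --workers 4 app:app

A reverse proxy such as Nginx is then commonly placed in front of Gunicorn to handle ports 80/443 and TLS.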

Set up Domain and DNS:

  • Register a domain name for your Flask application (if you don’t have one already).
  • Configure DNS settings to point your domain to the IP address of your server.

Secure Your Application:

  • Set up SSL/TLS certificates to enable HTTPS for secure communication.
  • Configure firewall rules to restrict access to your application.

Monitor and Maintain:

  • Set up monitoring tools to monitor server performance, uptime, and traffic.
  • Regularly update your server’s operating system, Python, and dependencies to patch security vulnerabilities and ensure compatibility.

Conclusion

Deploying machine learning models with Flask enables seamless integration of predictive capabilities into web apps. Flask, a lightweight Python web framework, simplifies model serving and eases the transition from prototype to production. The process involves model development, feature extraction, testing, optimization, and deployment. Flask's simplicity and flexibility make it ideal for small to medium projects, while larger deployments may benefit from managed services like AWS SageMaker or Azure Machine Learning. Overall, Flask empowers developers to deliver tangible benefits in real-world applications.

