As a Machine Learning engineer or Data Scientist, it is important to be able to show your work to its intended audience without hassle. Even if you have built an excellent predictive model, your hard work counts for little if you cannot demonstrate it. People should be able to get the desired output from your ML model without touching your code; handing a bundle of scripts to a non-technical person helps no one.
So there needs to be a way to present your work to the public, and the answer is to deploy it as a web app. A web app lets people see your work without digging into the backend. Once the code is deployed, nobody needs backend access: users simply enter values in the front end and get their result.
For deploying the Machine Learning model, we will center everything on the Python programming language, with Flask and Microsoft Azure as the deployment tools. The goal is to create a web application that runs 24×7 on a cloud-hosted server. So, without further ado, let's start.
Note: We will use the famous Iris flower dataset, store our work on GitHub, and deploy it to Azure from GitHub.
This article was published as a part of the Data Science Blogathon.
The first and foremost thing to do is create a Machine Learning model in a file named model.py and then pickle the model on the local system using either Pickle or Joblib. Let's see how to build a simple model on the Iris flower dataset using Support Vector Machine classification:
# importing the necessary libraries
import joblib
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
# loading the dataset into a pandas dataframe
data = load_iris()
data.feature_names
df = pd.DataFrame(data.data)
print(df.head())
# renaming the columns with the actual column names, that is, sepal and petal width and length
df.columns = data.feature_names
data.target_names
# inserting the target feature into the dataset
df["target"] = data.target
print(df)
# getting our X and y to feed into the ML model
X = df.drop("target", axis=1)
y = df.target
# splitting the dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# loading the SVC model by creating an object of the class
model = SVC()
# training the model
model.fit(X_train, y_train)
# making predictions
y_pred = model.predict(X_test)
# pickling the model
joblib.dump(model, "model.pkl")
# sanity check: reload the pickled model and predict on a sample measurement
c = [2, 3, 3, 4]
from_jb = joblib.load("model.pkl")
from_jb.predict([c])
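Note that predict returns a numeric class index (0, 1, or 2), not a species name. Mapping the index back through data.target_names makes the output far friendlier for the web app's users. A minimal sketch, assuming scikit-learn is installed (the sample measurement here is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

data = load_iris()
model = SVC()
model.fit(data.data, data.target)

# predict returns a class index; target_names maps it to a species name
sample = [[5.1, 3.5, 1.4, 0.2]]  # a typical setosa measurement
idx = model.predict(sample)[0]
print(data.target_names[idx])
```

The same lookup can be applied inside the Flask route before rendering the template, so the page shows "setosa" instead of "0".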
After creating the model.py file and pickling the model, the next step is to create a Flask web app named app.py by following the steps below. Note that model.py, model.pkl, and app.py should all be in the same directory. Flask was chosen because it is a very lightweight web framework that lets you build web apps with minimal lines of code.
Although Python has many frameworks for creating web apps, such as Django, Web2py, Grok, and TurboGears, Flask lets us build an app quickly and is a good tool for beginners learning web development. The framework also relies entirely on Python for the coding side rather than on other dependent tools. To get the most out of this library, you should know Python well, along with a bit of HTML and CSS, and a database management system if any data-related work is involved. If you have those three things, you are ready to code in Flask.
# importing the necessary libraries for deployment
import joblib
import numpy as np
from flask import Flask, request, render_template
# naming our app as app
app = Flask(__name__)
# loading the pickled model for the web app
model = joblib.load("model.pkl")
# defining the home page that serves the HTML form
@app.route("/")
def home():
    return render_template("index.html")
# creating a prediction route that reads the form fields and feeds them to the ML model
@app.route("/predict", methods=["POST"])
def predict():
    # casting our form parameters to data type float
    int_features = [float(x) for x in request.form.values()]
    final_features = [np.array(int_features)]
    prediction = model.predict(final_features)
    output = round(prediction[0], 2)
    return render_template("index.html", prediction_text="flower is {}".format(output))
# running the flask app
if __name__ == "__main__":
    app.run(debug=True)
Note: Use any IDE of your choice, say Sublime, VS Code, Atom, or Spyder, but don't use an IPython notebook to create the web app: Flask only supports .py files, not .ipynb notebooks.
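Before moving on, you can exercise the /predict route locally with Flask's built-in test client, without starting a server or a browser. The sketch below is self-contained: it uses a hypothetical stand-in model (always predicting class 0) in place of the real model.pkl, and the form field names are illustrative.

```python
from flask import Flask, request

app = Flask(__name__)

# hypothetical stand-in for the pickled SVC: always predicts class 0
class DummyModel:
    def predict(self, X):
        return [0]

model = DummyModel()

@app.route("/predict", methods=["POST"])
def predict():
    # same parsing logic as the article's app.py
    features = [float(x) for x in request.form.values()]
    prediction = model.predict([features])
    return "flower is {}".format(prediction[0])

# exercise the route without running a server
with app.test_client() as client:
    resp = client.post("/predict", data={
        "sepal_length": "5.1", "sepal_width": "3.5",
        "petal_length": "1.4", "petal_width": "0.2",
    })
    body = resp.get_data(as_text=True)
    print(body)
```

Swapping the dummy for `joblib.load("model.pkl")` gives you a quick smoke test of the real app before deploying.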
Now, the next step is to create the HTML file mentioned earlier as a prerequisite for a Flask web app. Name it index.html and place it inside a templates folder under the same directory. To view this file, just follow the link below:
The HTML form we have created is a very simple one: it contains only the feature names present in the dataset and the prediction output we want the form to display. There is a primary button that, when clicked, returns the desired output. You can also add colors and background images to make the form more attractive.
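Since the template itself is only linked above, here is a minimal sketch of what such an index.html might look like. The field names and styling are illustrative assumptions; the essential parts are the form posting to /predict and the `prediction_text` placeholder that app.py fills in via render_template:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Iris Flower Prediction</title>
</head>
<body>
  <h1>Iris Flower Prediction</h1>
  <!-- the form posts its four measurements to the /predict route -->
  <form action="/predict" method="POST">
    <input type="text" name="sepal_length" placeholder="Sepal length (cm)" required>
    <input type="text" name="sepal_width" placeholder="Sepal width (cm)" required>
    <input type="text" name="petal_length" placeholder="Petal length (cm)" required>
    <input type="text" name="petal_width" placeholder="Petal width (cm)" required>
    <button type="submit">Predict</button>
  </form>
  <!-- prediction_text is supplied by render_template in app.py -->
  <p>{{ prediction_text }}</p>
</body>
</html>
```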
Once the three files are created, the last step is to create a requirements.txt file listing all the libraries used for the prediction. To create it we will use Command Prompt and the pipreqs library. Just open CMD in the same directory and type:
pipreqs <path-to-your-project>
Once done, your requirements.txt file is ready. Now you just need to upload the necessary files to GitHub under a fresh repository containing a README.md file, a .gitignore file, and a license file. Upload your app.py, model.pkl, requirements.txt, and the templates folder with index.html to the repository.
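Putting it all together, the repository might be laid out like this (file names as used throughout the article):

```
your-repo/
├── app.py
├── model.pkl
├── requirements.txt
├── README.md
├── .gitignore
├── LICENSE
└── templates/
    └── index.html
```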
Note: If you don't have some of the libraries used here, you can install them from Command Prompt or Anaconda Prompt with pip:
pip install <library-name>
Wait for the installation to finish; once it does, you are good to go.
Follow these steps to deploy to Azure:
1. To initiate the deployment process, go to portal.azure.com and log in with your Microsoft account, or create a new account if needed. The steps below walk through deploying a Flask app from GitHub.
2. Configure the details of your Flask app within the Azure portal.
3. Finalize the configuration and prepare for deployment.
4. Connect your GitHub repository for seamless deployment.
5. Monitor the progress and access your deployed app.
6. Explore your deployed Flask app at https://simpleirisapp.azurewebsites.net.
So, this is how you can easily deploy any Machine Learning or Deep Learning app to Azure and share your work globally. Go try it yourself and create wonders.
Q1. How do I deploy an ML model with Flask on Azure?
A. Deploying an ML model with Flask on Azure involves creating a Flask web service, creating an Azure App Service, and configuring deployment settings using tools like the Azure CLI.
Q2. How do I deploy a Flask API on Azure?
A. Deploying a Flask API on Azure typically involves creating an Azure App Service, preparing the Flask application, and configuring deployment settings through Azure's deployment options or CI/CD pipelines.
Q3. What is the best way to deploy Flask?
A. The best deployment for Flask depends on project requirements. Options include deploying on cloud platforms like Azure or AWS, using containerization with Docker, or utilizing serverless computing for scalability.
Q4. How do I deploy a Flask application to the cloud?
A. Deploying a Flask application to the cloud involves choosing a cloud platform (e.g., Azure, AWS, Google Cloud), creating a virtual machine or an App Service, and configuring the deployment settings based on the chosen platform.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.