Build an End to End Image Classification/Recognition Application

Ekta Last Updated : 20 Nov, 2020

This article was published as a part of the Data Science Blogathon.

Introduction

In recent years, face recognition applications have been deployed on a much larger scale. Image classification and recognition have evolved and are now used in a number of places. I recently read an article about a face recognition application deployed at an airport for a completely automated check-in process.


This alleviates the need for manual intervention and provides a seamless, technology-driven end-to-end check-in process. It may look like magic to the average person, but in this article I will talk about what you need to build an application of this kind for your own mobile phone.


Applications

  1. Face Recognition – Phone cameras use face recognition to unlock the phone, and face recognition systems can be deployed at the entry gates of office buildings.
  2. Image Classification – It is used to distinguish between multiple sets of images. Industries like automobile, retail and gaming use it for many purposes.
  3. Image Recognition – Security companies use image recognition in airport scanners to detect various items inside bags.

Steps to Build the App

  • Obtain the Data
  • Data preparation
  • Data Modelling
  • Design the User Interface
  • Integrate User Interface and Modelling

Obtain the Data

Data comes in the form of images, i.e. pictures, and a picture is a matrix of pixels. A large number of images is required to build the entire end-to-end application. The data may already be available inside the organization, or it may have to be obtained from the open internet. The kind of data required depends on the kind of application: for a face recognition application, we can even create the data ourselves by collecting images from various people. If the images are to be obtained from the open internet, we can scrape them from the web.

The captured images should be of high resolution, but some of them can be slightly distorted, and a small amount of noise is actually useful: training on imperfect images helps the algorithm classify real-world inputs properly.

Example of web scraping images on a web page –

from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--test-type')
options.binary_location = "/usr/bin/chromium"
driver = webdriver.Chrome(options=options)

driver.get('https://imgur.com/')

# Collect the source URL of every <img> element on the page
images = driver.find_elements(By.TAG_NAME, 'img')
for image in images:
    print(image.get_attribute('src'))

driver.close()
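The scraper above only prints the `src` attributes. A small helper like the one below could save those images to disk; the function names, the `images` output folder, and the index-based fallback filename are my own choices, not part of the original article.

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def filename_from_url(url, index):
    """Derive a local filename from an image URL, falling back to an
    index-based name when the URL path has no usable basename."""
    name = os.path.basename(urlparse(url).path)
    return name if name else f'image_{index}.jpg'

def download_images(urls, out_dir='images'):
    """Fetch each scraped src URL to disk (requires network access)."""
    os.makedirs(out_dir, exist_ok=True)
    for i, url in enumerate(urls):
        urlretrieve(url, os.path.join(out_dir, filename_from_url(url, i)))

# Example: download_images(['https://i.imgur.com/abc123.jpg'])
```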


Data Preparation

  • Resize the images so that they are all the same size
  • Keep a mix of sharp, high-resolution images as well as slightly blurry and noisy ones
  • Apply transformations like translation, rotation and scaling so that the objects appear at all angles
  • Distort or shear some of the images so that the model generalizes well
  • Introduce noise into the images if it is not already present
  • Keep a uniform distribution of images across the classes
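The augmentations listed above (translation, flips, noise) can be sketched with plain NumPy, as below. This is a minimal illustration of the idea; the shift range, noise level, and function name are my own choices, and libraries such as Keras's ImageDataGenerator offer richer versions of the same transformations.

```python
import numpy as np

def augment(img, rng):
    """Return a randomly transformed copy of an H x W x C uint8 image."""
    out = img.copy()
    # Flip horizontally half the time
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # Translate by up to 10% of the image size (np.roll wraps pixels around)
    h, w = out.shape[:2]
    dy = rng.integers(-h // 10, h // 10 + 1)
    dx = rng.integers(-w // 10, w // 10 + 1)
    out = np.roll(out, (dy, dx), axis=(0, 1))
    # Add mild Gaussian noise, keeping values in the valid 0-255 range
    noise = rng.normal(0, 5, out.shape)
    return np.clip(out.astype(float) + noise, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
sample = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
augmented = augment(sample, rng)
```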

“If Kindle is upgraded with face recognition and biometric sensors, it can know what made you laugh, what made you sad and what made you angry. Soon, books will read you while you are reading them.”
Yuval Noah Harari
– Author of Sapiens

Code for resizing an image-

import cv2

img = cv2.imread('/home/img/python.png', cv2.IMREAD_UNCHANGED)
print('Original Dimensions : ', img.shape)
# Scale both dimensions down to 60% of the original size
scale_percent = 60
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
# Resize image (INTER_AREA works well when shrinking)
resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
print('Resized Dimensions : ', resized.shape)

Data Modelling

Once all the images have been obtained, place them into a separate folder for each class. Ensure a proper split of images across the training, validation and test datasets. For image classification and recognition we use neural networks, and the convolutional neural network (CNN) architecture fits images best because it operates directly on pixel matrices.
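Splitting one class's images across the training, validation and test sets can be sketched as below. The 70/15/15 ratio, the fixed seed, and the `cat_*` filenames are my own illustrative choices, not something the article prescribes.

```python
import random

def split_dataset(filenames, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle a class's image filenames and split them into
    train / validation / test lists."""
    files = list(filenames)
    random.Random(seed).shuffle(files)  # reproducible shuffle
    n_train = int(len(files) * train_frac)
    n_val = int(len(files) * val_frac)
    train = files[:n_train]
    val = files[n_train:n_train + n_val]
    test = files[n_train + n_val:]
    return train, val, test

# Hypothetical filenames for one class folder
names = [f'cat_{i:03d}.jpg' for i in range(100)]
train, val, test = split_dataset(names)
```

Running the split per class (rather than over the pooled dataset) keeps the class distribution uniform across the three sets, as recommended in the data preparation step.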

Convolutional neural networks contain different layers that perform mathematical operations on the images: convolution layers, pooling layers, batch normalization layers, activation functions and fully connected layers. Transfer learning lets you reuse pre-trained network architectures that already work well on standard image datasets. You can start by writing your own network, but you will usually observe that pre-trained networks give far better performance.

Start with some of the basic pre-trained models like:

  • VGG16
  • Inception
  • Xception
  • MobileNet
  • ResNet50

You can use the TensorFlow or Keras libraries, which ship implementations of these models. This makes it easier to change the parameters of the different layers of the architecture, and you can experiment with hyperparameter tuning to improve performance. While training the models, make sure you save the coefficient values, i.e. the weights. These saved values are used to predict on the future images that your application receives.

VGG16 Model Code-

image_size = 224
from keras.applications import VGG16
from keras import models
from keras import layers
from keras import optimizers

# Load the VGG16 convolutional base pre-trained on ImageNet
vgg_conv = VGG16(weights='imagenet', include_top=False,
                 input_shape=(image_size, image_size, 3))

# Freeze all layers except the last 4
for layer in vgg_conv.layers[:-4]:
    layer.trainable = False
for layer in vgg_conv.layers:
    print(layer, layer.trainable)

model = models.Sequential()
# Add the VGG convolutional base model
model.add(vgg_conv)
# Add new classification layers on top
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(29, activation='softmax'))
# Show a summary of the model. Check the number of trainable parameters
model.summary()

The Keras library gives you an easy way to save the model weights:

model.save('filename.h5')

Design the User Interface

Once the model is ready to use, you need to work on the user interface. If you are building an Android application, you can design the user interface with Kotlin or Flutter. The interface should be simple to read and interpret, and designed so that it fulfils the main objective of the application.

If a web application is to be designed, Flask or Django could be used for the same purpose. A desktop GUI could be built with Python libraries like Tkinter.


Integrate the User Interface and Modelling

For Android apps, Flutter lets you integrate your classification model with a library called TensorFlow Lite. The TensorFlow Lite implementation needs just two files for image classification: a class-labels text file and the model weights file. Once these two files are placed in the project's folder structure, the Android application is complete and ready to be tested. A camera widget created with Flutter can capture the input image.

Code for including the two files –

loadModel() async {
  await Tflite.loadModel(
    model: "assets/model_unquant.tflite",
    labels: "assets/labels.txt",
  );
}

Here the .tflite file is the weights file exported from the model, and labels.txt contains the image class names separated by newlines. Embed both in the Android project's asset structure.

For web apps, Flask lets you load the saved TensorFlow model weights and use them to make predictions on the input image.
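A minimal sketch of such a Flask app is shown below. The `/predict` route, the `image` form field, and the 224x224 input size are my own choices; the model loading is left as a comment (the path matches the earlier `model.save('filename.h5')` call) so the sketch stays self-contained.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the trained weights once at startup. Commented out so the
# sketch runs without the actual saved file:
# from keras.models import load_model
# model = load_model('filename.h5')

@app.route('/health')
def health():
    # Simple liveness check for the service
    return jsonify(status='ok')

@app.route('/predict', methods=['POST'])
def predict():
    # The client uploads an image file under the form field 'image'
    file = request.files.get('image')
    if file is None:
        return jsonify(error='no image uploaded'), 400
    # In a real app: decode the upload to a 224x224x3 array, scale it,
    # call model.predict(...), and map the argmax to a class label.
    return jsonify(prediction='<class label here>')
```

The app can then be started with `app.run()` and queried from the front end with a standard multipart file upload.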

By following this process step by step, you will be able to build your own classification application.
