How to build a Face Mask Detector using RetinaNet Model!


Introduction


Object detection is a tremendously important field in computer vision, needed for autonomous driving, video surveillance, medical applications, and many other domains.

We are grappling with a pandemic that’s operating at a never-before-seen scale. Researchers all over the globe are frantically trying to develop a vaccine or a cure for COVID-19, while doctors are just about keeping the pandemic from overwhelming the entire world. Meanwhile, many countries have found that social distancing and the use of masks and gloves help curb the situation a little.

I recently had an idea to apply my deep learning knowledge to help the current situation a little. In this article, I’ll introduce you to the implementation of RetinaNet, with a little background on how it works.

The cherry on top? We’ll build a face mask detector using RetinaNet to help us in this ongoing pandemic. You can extrapolate the same idea to build an AI-enabled solution for your smart home: one that opens the gate of your building only to people who are wearing masks and gloves.

As the cost of drones decreases with time, we are seeing a large spike in the generation of aerial data. So, you can use this same RetinaNet model to detect different objects, such as automobile vehicles (bikes, cars, etc.) or pedestrians, in aerial images or maybe even in satellite images to solve your own business problems.

So, you can see that the applications of object detection models are endless.

 

Table of Contents

  1. What is RetinaNet
  2. Need for RetinaNet
  3. The Architecture of RetinaNet
    1. Backbone Network
    2. Subnetwork for object Classification
    3. Subnetwork for object Regression
  4. Focal Loss
  5. Build Face mask detector using RetinaNet model
    1. Gather Data
    2. Create Dataset
    3. Model Training
    4. Model Testing
  6. Final Notes

 

What is RetinaNet?

RetinaNet is one of the best one-stage object detection models and has proven to work well with dense and small-scale objects. For this reason, it has become a popular object detection model for use with aerial and satellite imagery.

 

Need for RetinaNet

RetinaNet was introduced by Facebook AI Research to tackle the dense detection problem. In particular, it was needed to address the extreme foreground-background class imbalance that single-shot object detectors like YOLO and SSD struggle with.

 

The Architecture of RetinaNet

In essence, RetinaNet is a composite network consisting of:

  1. Backbone Network (i.e. bottom-up pathway + top-down pathway with lateral connections, e.g. ResNet + FPN)
  2. Subnetwork for object Classification
  3. Subnetwork for object Regression
[Image: RetinaNet architecture, showing the backbone with FPN and the classification and regression subnets at each pyramid level (image source link in the original post)]

For a better understanding, let’s look at each component of the architecture separately.

  1. The Backbone Network

    1. Bottom-up pathway: The bottom-up pathway (e.g. ResNet) is used for feature extraction. It computes feature maps at different scales, irrespective of the input image size.
    2. Top-down pathway with lateral connections: The top-down pathway upsamples the spatially coarser feature maps from higher pyramid levels, and the lateral connections merge the top-down layers and the bottom-up layers that have the same spatial size. Higher-level feature maps have a smaller resolution but are semantically stronger, and are therefore more suitable for detecting larger objects; on the contrary, grid cells from lower-level feature maps have a higher resolution and hence are better at detecting smaller objects. So, by combining the top-down pathway and its lateral connections with the bottom-up pathway, which does not require much extra computation, every level of the resulting feature maps can be both semantically and spatially strong. Hence this architecture is scale-invariant and can provide better performance both in terms of speed and accuracy. (A minimal sketch of the merge step appears after this list.)
  2. Subnetwork for object Classification

    A fully convolutional network (FCN) is attached to each FPN level for object classification. As shown in the diagram above, this subnetwork consists of 3×3 convolutional layers with 256 filters, followed by a final 3×3 convolutional layer with K×A filters. Hence the output feature map is of size W×H×KA, where W and H are proportional to the width and height of the input feature map, and K and A are the number of object classes and anchor boxes respectively.

    Finally, a sigmoid layer (not softmax) is used for object classification.

    The reason the last convolutional layer has K×A filters is that, if there are A anchor box proposals for each position in the feature map, then each anchor box can be classified into any of K classes. So the output feature map has KA channels or filters.

  3. Subnetwork for object Regression

    The regression subnetwork is attached to each feature map of the FPN, in parallel to the classification subnetwork. The design of the regression subnetwork is identical to that of the classification subnet, except that the last 3×3 convolutional layer has 4×A filters, resulting in an output feature map of size W×H×4A.

    The reason the last convolutional layer has 4×A filters is that, in order to localize the class objects, the regression subnetwork produces 4 numbers for each anchor box, predicting the relative offset (in terms of center coordinates, width, and height) between the anchor box and the ground-truth box. Therefore, the output feature map of the regression subnet has 4A filters or channels. (A sketch of both subnetwork heads follows the backbone sketch below.)
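To make the top-down merge step concrete, here is a minimal Keras sketch of one FPN merge (an illustration of the idea only, not keras-retinanet’s actual code; the function name and the 256-channel pyramid width are my choices, following the FPN paper’s defaults):

from tensorflow.keras import layers

def fpn_merge_step(c_low, p_high, feature_size=256):
    # 1x1 conv projects the bottom-up feature map to the pyramid width
    lateral = layers.Conv2D(feature_size, kernel_size=1, padding='same')(c_low)
    # upsample the spatially coarser top-down map to the lateral map's size
    upsampled = layers.UpSampling2D(size=2)(p_high)
    # element-wise addition: semantics from the top, resolution from the bottom
    merged = layers.Add()([lateral, upsampled])
    # a 3x3 conv on the merged map reduces the aliasing effect of upsampling
    return layers.Conv2D(feature_size, kernel_size=3, padding='same')(merged)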

     
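And here is a similar illustrative sketch of the two heads that RetinaNet attaches to every pyramid level (again a simplification of what keras-retinanet builds for us internally; K = 2 assumes our mask/noMask problem, and A = 9 is the paper’s default anchor count per position):

from tensorflow.keras import layers, models

def subnet(out_channels, activation, feature_size=256, depth=4):
    # shared design: a stack of 3x3/256 convs, then a 3x3 output conv
    inputs = layers.Input(shape=(None, None, feature_size))
    x = inputs
    for _ in range(depth):
        x = layers.Conv2D(feature_size, kernel_size=3, padding='same',
                          activation='relu')(x)
    outputs = layers.Conv2D(out_channels, kernel_size=3, padding='same',
                            activation=activation)(x)
    return models.Model(inputs, outputs)

K, A = 2, 9                          # object classes (mask/noMask), anchors per position
cls_head = subnet(K * A, 'sigmoid')  # output: W x H x KA class probabilities
reg_head = subnet(4 * A, None)       # output: W x H x 4A box offsets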

Focal Loss

Focal Loss (FL) is an improved version of Cross-Entropy (CE) loss that tries to handle the class imbalance problem by assigning more weight to hard or easily misclassified examples (e.g. background with noisy texture, a partial object, or the object of our interest) and by down-weighting easy examples (e.g. unambiguous background objects).

So focal loss reduces the loss contribution from easy examples and increases the importance of correcting misclassified examples. In other words, focal loss is an extension of the cross-entropy loss function that down-weights easy examples and focuses training on hard negatives.

To achieve this, the researchers proposed adding a modulating factor (1 − pt)^γ to the cross-entropy loss, with a tunable focusing parameter γ ≥ 0. The RetinaNet object detection method uses an α-balanced variant of the focal loss, where α = 0.25 and γ = 2 work best.

 

So the focal loss can be defined as:

FL(pt) = −αt (1 − pt)^γ log(pt)

where pt is the model’s estimated probability for the ground-truth class.

The focal loss is visualized for several values of γ ∈ [0, 5] in Figure 1 of the focal loss paper (linked in the references). We shall note the following properties of the focal loss:

  1. When an example is misclassified and pt is small, the modulating factor is near 1 and does not affect the loss.
  2. As pt → 1, the factor goes to 0 and the loss for well-classified examples is down-weighted.
  3. The focusing parameter γ smoothly adjusts the rate at which easy examples are down-weighted. As γ is increased, the effect of the modulating factor increases likewise. (After a lot of experiments and trials, the researchers found γ = 2 to work best.)

Note: when γ = 0, FL is equivalent to CE (the blue curve in the paper’s figure).

Intuitively, the modulating factor reduces the loss contribution from easy examples and extends the range in which an example receives a low loss.
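To see this down-weighting numerically, here is a tiny sketch (using α = 0.25 and γ = 2, as above) that compares cross-entropy and focal loss for an easy, a borderline, and a hard example:

import numpy as np

def cross_entropy(p_t):
    # CE for the ground-truth class with predicted probability p_t
    return -np.log(p_t)

def focal_loss(p_t, alpha=0.25, gamma=2.0):
    # alpha-balanced focal loss: FL = -alpha * (1 - p_t)^gamma * log(p_t)
    return -alpha * (1 - p_t) ** gamma * np.log(p_t)

for p_t in (0.9, 0.5, 0.1):   # easy, borderline, hard example
    print(f"p_t={p_t}: CE={cross_entropy(p_t):.4f}  FL={focal_loss(p_t):.4f}")

The easy example’s (pt = 0.9) contribution shrinks by a factor of roughly 400, while the hard example’s (pt = 0.1) shrinks only about five-fold, mostly due to α; this is exactly the rebalancing described above.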

You can read about focal loss in detail in this article (link to my focal loss article), where I’ve talked about the evolution of cross-entropy into focal loss, the need for focal loss, and a comparison of focal loss with cross-entropy.

And the cherry on top: I’ve used a couple of examples to explain why focal loss is better than cross-entropy.

Now let’s see the implementation of RetinaNet to build a face mask detector in Python.

 

Build Face mask detector using RetinaNet model

Gather Data

Any deep learning model requires a large volume of training data to give good results on test data. In this article (link to my web scraping article), I’ve talked about web scraping methods to gather a large volume of images for your deep learning project.

 

Create Dataset

We start by creating annotations for the training and validation dataset, using the tool LabelImg. This excellent annotation tool lets you quickly annotate the bounding boxes of the objects to train the machine learning model.

You can install it using the below command in the Anaconda command prompt:

pip install labelImg

You can annotate each JPEG file using the labelImg tool as shown below; it generates XML files with the coordinates of each bounding box. We’ll use these XML files to train our model.

[Image: a JPEG being annotated in labelImg, which generates the corresponding XML file]
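For reference, a generated annotation file looks roughly like this (a hypothetical, trimmed example in Pascal VOC format; the name and bndbox tags are the fields our parsing code in Step 3 reads):

<annotation>
  <filename>person1.jpg</filename>
  <object>
    <name>mask</name>
    <bndbox>
      <xmin>84</xmin>
      <ymin>52</ymin>
      <xmax>219</xmax>
      <ymax>204</ymax>
    </bndbox>
  </object>
</annotation>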

 

Model Training

Step 1: Clone and install the keras-retinanet repository

import os
print(os.getcwd())

# clone the repository and install it ('!' and '%' run shell/magic commands in a notebook)
!git clone https://github.com/fizyr/keras-retinanet.git
%cd keras-retinanet/
!pip install .
!python setup.py build_ext --inplace

 

Step 2: Import all required libraries

import os, sys, random
import shutil
import urllib
import requests
import xml.etree.ElementTree as ET
from os import listdir
from os.path import isfile, join

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image

from keras_retinanet.utils.visualization import draw_box, draw_caption, label_color
from keras_retinanet.utils.image import preprocess_image, resize_image

 

Step 3: Import JPEG & XML data

jpegPath = 'C:/Users/PraveenKumar/RetinaNet/maskDetectorJPEGImages/'
annotPath = 'C:/Users/PraveenKumar/RetinaNet/maskDetectorXMLfiles/'

data = pd.DataFrame(columns=['fileName', 'xmin', 'ymin', 'xmax', 'ymax', 'class'])

# read all annotation files from the XML directory
allfiles = [f for f in listdir(annotPath) if isfile(join(annotPath, f))]
for file in allfiles:
    if file.split(".")[1] == 'xml':
        # each XML file pairs with a JPEG of the same name
        fileName = jpegPath + file.replace(".xml", '.jpg')
        tree = ET.parse(annotPath + file)
        root = tree.getroot()
        for obj in root.iter('object'):
            cls_name = obj.find('name').text
            xml_box = obj.find('bndbox')
            xmin = xml_box.find('xmin').text
            ymin = xml_box.find('ymin').text
            xmax = xml_box.find('xmax').text
            ymax = xml_box.find('ymax').text
            # append one row per bounding box to the DataFrame
            data = data.append({'fileName': fileName, 'xmin': xmin, 'ymin': ymin,
                                'xmax': xmax, 'ymax': ymax, 'class': cls_name},
                               ignore_index=True)

data.shape
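A detail worth making explicit: the training command in Step 5 reads maskDetectorData.csv, which we can write straight from the DataFrame above. keras-retinanet’s CSV generator expects one row per box in the form path,x1,y1,x2,y2,class_name, with no header row. A minimal sketch (writing the file to the current working directory is my assumption):

# one row per bounding box: path,x1,y1,x2,y2,class_name (no header, no index)
cols = ['fileName', 'xmin', 'ymin', 'xmax', 'ymax', 'class']
data[cols].to_csv('maskDetectorData.csv', header=False, index=False)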

 

Step 4: Write a function to show bounding boxes on the training dataset

def show_image_with_boxes(df):
  # pick a random image
  filepath = df.sample()['fileName'].values[0]

  # get all rows (boxes) for this image
  df2 = df[df['fileName'] == filepath]
  im = np.array(Image.open(filepath))

  # if there's a PNG it will have an alpha channel; keep RGB only
  im = im[:, :, :3]

  # coordinates parsed from XML are strings; cast to int before drawing
  for idx, row in df2.iterrows():
    box = [
      int(row['xmin']),
      int(row['ymin']),
      int(row['xmax']),
      int(row['ymax']),
    ]
    print(box)
    draw_box(im, box, color=(255, 0, 0))

  plt.axis('off')
  plt.imshow(im)
  plt.show()

show_image_with_boxes(data)

[Image: a random training image with its ground-truth bounding boxes drawn in red]

 

#Check a few records of the data
data.head()

[Image: output of data.head(), showing fileName, box coordinates, and class for the first records]

 

#Define labels & write them to a file
classes = ['mask','noMask']
with open('../maskDetectorClasses.csv', 'w') as f:
  for i, class_name in enumerate(classes):
    f.write(f'{class_name},{i}\n')         

if not os.path.exists('snapshots'):
  os.mkdir('snapshots')

Note: It’s better to start with a pre-trained model instead of training a model from scratch. We’ll use a ResNet50 model that’s already pre-trained on the COCO dataset.

# path where the pre-trained weights will be saved
# (PRETRAINED_MODEL is not defined elsewhere in the post; this path is an assumption)
PRETRAINED_MODEL = './snapshots/_pretrained_model.h5'

URL_MODEL = 'https://github.com/fizyr/keras-retinanet/releases/download/0.5.1/resnet50_coco_best_v2.1.0.h5'
urllib.request.urlretrieve(URL_MODEL, PRETRAINED_MODEL)

 

Step 5: Train the RetinaNet model

Note: You can use the below snippet of code to train your model if you’re using Google Colab.

#Put your training data path & the file that has labels for your training data
!keras_retinanet/bin/train.py --freeze-backbone \
  --random-transform \
  --weights {PRETRAINED_MODEL} \
  --batch-size 8 \
  --steps 500 \
  --epochs 15 \
  csv maskDetectorData.csv maskDetectorClasses.csv

But if you’re training on your local Jupyter notebook or a different IDE, then you can run the below command from your command prompt:

python keras_retinanet/bin/train.py --freeze-backbone \
            --random-transform \
            --weights {PRETRAINED_MODEL} \
            --batch-size 8 \
            --steps 500 \
            --epochs 15 \
            csv maskDetectorData.csv maskDetectorClasses.csv

Let’s analyze each argument passed to the script train.py (a quick sanity check for steps follows this list).

  1. freeze-backbone: freeze the backbone layers; particularly useful when training on a small dataset, to avoid overfitting
  2. random-transform: randomly transform the dataset for data augmentation
  3. weights: initialize the model with a pre-trained model (your own model or one released by Fizyr)
  4. batch-size: training batch size; a higher value gives a smoother learning curve
  5. steps: number of steps per epoch
  6. epochs: number of epochs to train
  7. csv: the annotation and class-mapping files generated by the scripts above
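As a quick sanity check for steps (a rule of thumb, not from the original post): set it to roughly the number of training images divided by the batch size, so that one epoch passes over the whole dataset once.

import math

# approximate steps per epoch so one epoch covers the dataset once
num_images = data['fileName'].nunique()      # number of unique training images
steps_per_epoch = math.ceil(num_images / 8)  # batch size 8, as in the command above
print(steps_per_epoch)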

 

Step 6: Load the trained model

from glob import glob
model_paths = glob('snapshots/resnet50_csv_0*.h5')
latest_path = sorted(model_paths)[-1]
print("path:", latest_path)


from keras_retinanet import models

# load the latest snapshot and convert the training model into an inference model
# (convert_model adds the box-decoding and NMS filtering layers used at prediction time)
model = models.load_model(latest_path, backbone_name='resnet50')
model = models.convert_model(model)

label_map = {}
for line in open('../maskDetectorClasses.csv'):
  row = line.rstrip().split(',')
  label_map[int(row[1])] = row[0]

 

Model Testing

Step 7: Predict using the trained model

#Function to pick one image randomly from the dataset and predict on it using the trained model
def show_image_with_predictions(df, threshold=0.6):
  # choose a random image
  row = df.sample()
  filepath = row['fileName'].values[0]
  print("filepath:", filepath)
  # get all rows (ground-truth boxes) for this image
  df2 = df[df['fileName'] == filepath]
  im = np.array(Image.open(filepath))
  print("im.shape:", im.shape)

  # if there's a PNG it will have an alpha channel; keep RGB only
  im = im[:, :, :3]

  # plot true boxes (coordinates parsed from XML are strings; cast to int)
  for idx, row in df2.iterrows():
    box = [
      int(row['xmin']),
      int(row['ymin']),
      int(row['xmax']),
      int(row['ymax']),
    ]
    print(box)
    draw_box(im, box, color=(255, 0, 0))

  ### plot predictions ###

  # preprocess the image, then resize the preprocessed copy
  imp = preprocess_image(im)
  imp, scale = resize_image(imp)

  boxes, scores, labels = model.predict_on_batch(
    np.expand_dims(imp, axis=0)
  )

  # bring box coordinates back to the original image scale
  boxes /= scale

  # in case no prediction clears the threshold
  score, label = None, None

  # loop through each prediction for the input image
  for box, score, label in zip(boxes[0], scores[0], labels[0]):
    # scores are sorted, so we can quit as soon
    # as we see a score below the threshold
    if score < threshold:
      break

    box = box.astype(np.int32)
    color = label_color(label)
    draw_box(im, box, color=color)

    class_name = label_map[label]
    caption = f"{class_name} {score:.3f}"
    draw_caption(im, box, caption)

  plt.axis('off')
  plt.imshow(im)
  plt.show()
  return score, label

plt.rcParams['figure.figsize'] = [20, 10]

 

#Feel free to change the threshold as per your business requirement
score, label = show_image_with_predictions(data, threshold=0.6)

[Image: a test image with ground-truth boxes in red and predicted mask/noMask boxes with confidence scores]

Run the cell a few times to sample and score different test images.


 

References

http://arxiv.org/abs/1605.06409

https://arxiv.org/pdf/1708.02002.pdf

https://developers.arcgis.com/python/guide/how-retinanet-works/

https://github.com/fizyr/keras-retinanet

https://www.freecodecamp.org/news/object-detection-in-colab-with-fizyr-retinanet-efed36ac4af3/

https://deeplearningcourses.com/

https://blog.zenggyu.com/en/post/2018-12-05/retinanet-explained-and-demystified/

 

Final Notes

To conclude, we went through the complete journey of making a face mask detector with an implementation of RetinaNet. We created a dataset, trained a model, and ran inference (here is my GitHub repo for the notebook and dataset).

RetinaNet is a powerful model that uses a Feature Pyramid Network with ResNet as its backbone. I was able to get decent results for the face mask detector with a very limited dataset and very few epochs (only 6 epochs with 500 steps each). You can change the prediction threshold as per your requirement.

Note:

  • Make sure you train your model for at least 20 epochs to get good results.
  • The idea here is to present an approach to building a face mask detector using the RetinaNet model. One can always tweak the model, data & approach as per business requirements.

In general, RetinaNet is a good choice to start an object detection project with, in particular if you need to get good results quickly.

If you enjoyed this article, leave a few claps; it will encourage me to explore further machine learning opportunities 🙂

About the Author


Praveen Kumar Anwla

I’ve been working as a Data Scientist with product-based companies and Big 4 audit firms for almost 5 years now. I have been working with various NLP, machine learning & cutting-edge deep learning frameworks to solve business problems. Please feel free to check out my personal blog, where I cover topics from machine learning and AI to chatbots and visualization tools (Tableau, QlikView, etc.) & various cloud platforms like Azure, IBM & AWS.

 
