The human brain can easily recognize and distinguish objects in an image. For instance, given an image of a cat and a dog, we tell the two apart almost instantly, and our brain perceives this difference. If a machine can mimic this behavior, it comes as close to Artificial Intelligence as we can get. Accordingly, the field of Computer Vision aims to replicate the human vision system, and numerous milestones have broken barriers in this regard. Today, machines can easily distinguish between different images, detect objects and faces, and even generate images of people who don’t exist! Fascinating, isn’t it? One of my first experiences when starting out in Computer Vision was using transfer learning for image classification. This very ability of a machine to distinguish between objects opens up further avenues of research, like distinguishing between people.
The advent of transfer learning has accelerated the rapid developments in Computer Vision and, by extension, image classification. Transfer learning allows us to use a pre-existing model, trained on a huge dataset, for our own tasks. Consequently, it reduces the cost of training new deep learning models, and since the underlying datasets have already been vetted, we can be assured of their quality.
This article will cover the top 4 pre-trained models for image classification that are state-of-the-art (SOTA) and widely used in the industry. Each model could be explained in far more detail, but I have limited the article to an overview of their architecture and an implementation on a sample dataset.
With that, this article will guide you through image classification models and the best options available today.
Image classification involves recognizing and grouping images into distinct categories or labels according to their content. For instance, a model could categorize pictures as either “cats,” “dogs,” or “cars.” This is achieved through algorithms trained with numerous labeled images, aiding the model in identifying patterns and characteristics.
Along the way, we will also explain the four pre-trained models used for image classification.
Since we started with cats and dogs, let us use the Cat and Dog images dataset. The original training dataset on Kaggle has 25,000 images of cats and dogs, and the test dataset has 10,000 unlabelled images. I have taken a much smaller dataset since we only aim to understand these models. You can run this and the rest of the code on Google Colab, so let us get started!
Let us also import the basic libraries. I will cover further imports as we go, depending on the model:
Python Code:
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers
from tensorflow.keras import Model
import matplotlib.pyplot as plt
Some popular datasets are used for pretraining image classification models across research, industry, and hackathons. The following are some of the prominent ones:
and many more.
We will first prepare the dataset and separate out the images for our pre-trained image classification models:
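A minimal sketch of this step, assuming the smaller dataset has been downloaded as a zip file with train and validation sub-folders (the file and folder names below are placeholders; adjust them to wherever your copy of the data lives):
local_zip = 'cats_and_dogs_filtered.zip'  # placeholder file name
with zipfile.ZipFile(local_zip, 'r') as zip_ref:
    zip_ref.extractall('data')

base_dir = 'data/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')

# Directories with the training and validation cat/dog pictures
train_cats_dir = os.path.join(train_dir, 'cats')
train_dogs_dir = os.path.join(train_dir, 'dogs')
validation_cats_dir = os.path.join(validation_dir, 'cats')
validation_dogs_dir = os.path.join(validation_dir, 'dogs')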
The following code will let us check if the images have been loaded correctly:
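Here is one way to do that sanity check, assuming the directory variables defined above; it prints the image counts and plots a few samples:
print('Training cat images:', len(os.listdir(train_cats_dir)))
print('Training dog images:', len(os.listdir(train_dogs_dir)))

# Plot the first four cat images as a visual check
sample_cats = [os.path.join(train_cats_dir, f) for f in os.listdir(train_cats_dir)[:4]]
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, img_path in zip(axes, sample_cats):
    ax.imshow(plt.imread(img_path))
    ax.axis('off')
plt.show()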
Now that our dataset is ready, let’s move to the model-building stage. We will use four different pre-trained models on this dataset.
In case you want to learn computer vision in a structured format, refer to this course- Certified Computer Vision Master’s Program
In this section, we cover the 4 pre-trained models for image classification as follows-
VGG-16 is one of the most popular pre-trained models for image classification. Introduced in the famous ILSVRC 2014 challenge, it remains a model to beat even today. Developed by the Visual Geometry Group at the University of Oxford, VGG-16 comfortably outperformed the earlier standard, AlexNet, and was quickly adopted by researchers and industry for their image classification tasks.
Here is the architecture of VGG-16:
Here is a more intuitive layout of the VGG-16 Model.
The following are the layers of the model:
Let us explore the layers in detail:
As you can see, the model is sequential in nature and uses many filters. At each stage, small 3×3 filters are used to reduce the number of parameters, and all the hidden layers use the ReLU activation function. Even then, the number of parameters is about 138 million, which makes VGG-16 slower and much larger to train than most other models.
Additionally, there are variations of the VGG16 model that improve upon it, such as VGG19 (19 layers). You can find a detailed explanation of these in the original VGG paper.
Let us now explore how to train a VGG-16 model on our dataset-
Step 1: Image Augmentation
Since we used a much smaller dataset of images earlier, we can compensate by augmenting this data and increasing our dataset size. If you are working with the original larger dataset, you can skip this step and build the model.
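A sketch of the augmentation step, assuming the train_dir and validation_dir paths from earlier; the exact augmentation settings are illustrative, so feel free to tweak them:
# Augment the training data; the validation data is only rescaled
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

validation_datagen = ImageDataGenerator(rescale=1./255)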
Step 2: Training and Validation Sets
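With the generators above, flow_from_directory reads the images straight from the folders and resizes them to 224×224, the input size VGG-16 expects (the batch size is illustrative):
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(224, 224),
    batch_size=20,
    class_mode='binary')

validation_generator = validation_datagen.flow_from_directory(
    validation_dir,
    target_size=(224, 224),
    batch_size=20,
    class_mode='binary')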
Step 3: Loading the Base Model
We will use only the base models, with changes made to the final layer. This is because ours is just a binary classification problem, while these models are built to handle up to 1,000 classes.
Since we don’t have to train all the layers, we make them non-trainable:
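A minimal sketch: load VGG-16 with ImageNet weights, drop its 1000-class top, and freeze the convolutional base:
from tensorflow.keras.applications.vgg16 import VGG16

base_model = VGG16(input_shape=(224, 224, 3),
                   include_top=False,
                   weights='imagenet')

# Freeze the convolutional base so only our new head is trained
for layer in base_model.layers:
    layer.trainable = False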
Step 4: Compile and Fit
We will then build the last fully connected layer. I have just used the basic settings, but feel free to experiment with different dropout values, optimizers, and activation functions.
We will build the final model based on the training and validation sets we created earlier. Please note that you should use the original directories instead of the augmented datasets I have used below. I have used just 10 epochs, but you can also increase them to get better results:
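A minimal head on top of the frozen base, followed by compile and fit; the layer sizes, dropout value, and optimizer settings are just reasonable defaults, not the only choice:
x = layers.Flatten()(base_model.output)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(1, activation='sigmoid')(x)

model = Model(base_model.input, x)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])

vgg_history = model.fit(train_generator,
                        validation_data=validation_generator,
                        epochs=10)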
Awesome! As you can see, we achieved a validation accuracy of 93% with just 10 epochs and without any major changes to the model. This is where we realize how powerful Transfer Learning for Image Classification is and how useful pre-trained models for image classification can be. A caveat here, though: VGG16 takes a long time to train compared to other models, which can be a disadvantage when dealing with huge datasets.
While researching this article, one thing was clear: 2014 was an iconic year for the development of really popular pre-trained image classification models. While the above VGG-16 secured 2nd place in that year’s ILSVRC, 1st place was secured by none other than Google with its model GoogLeNet, now better known as Inception.
The original paper proposed the Inceptionv1 model. At only 7 million parameters, it was much smaller than the then-prevalent models like VGG and AlexNet. Add to that a lower error rate, and you can see why it was a breakthrough model. Not only this, the major innovation in this paper was itself another breakthrough: the Inception Module.
As can be seen, in simple terms, the Inception Module performs convolutions with different filter sizes on the input, performs Max Pooling, and concatenates the results for the next Inception module. The introduction of the 1×1 convolution operation drastically reduces the number of parameters.
Though the number of layers in Inceptionv1 is 22, the massive parameter reduction makes it a formidable model to beat.
The Inceptionv2 model was a major improvement on the Inceptionv1 model, which increased its accuracy and made it less complex. In the same paper as Inceptionv2, the authors introduced the Inceptionv3 model with a few more improvements on v2.
The following are the major improvements included:
While it is not possible to provide an in-depth explanation of Inception in this article, you can go through this comprehensive article covering the Inception Model in detail: Deep Learning in the Trenches: Understanding Inception Network from Scratch
As you can see, the number of layers is 42, compared to VGG16’s paltry 16 layers. Also, Inceptionv3 reduced the error rate to only 4.2%.
Let’s see how to implement this pre-trained image classification model in Python:
Step 1: Data Augmentation
You will note that I am not performing extensive data augmentation. The code is the same as before. I have just changed the image dimensions for each model.
Step 2: Training and Validation Generators
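The ImageDataGenerator objects from before are reused; only the target size changes. Inceptionv3 was trained on 299×299 crops, but with include_top=False a smaller size such as 150×150 also works and keeps training fast on Colab:
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(150, 150), batch_size=20, class_mode='binary')

validation_generator = validation_datagen.flow_from_directory(
    validation_dir, target_size=(150, 150), batch_size=20, class_mode='binary')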
Step 3: Loading the Base Model
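A sketch of loading Inceptionv3 with ImageNet weights, without the top layers, and freezing the base:
from tensorflow.keras.applications.inception_v3 import InceptionV3

base_model = InceptionV3(input_shape=(150, 150, 3),
                         include_top=False,
                         weights='imagenet')

for layer in base_model.layers:
    layer.trainable = False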
Step 4: Compile and Fit
Just like VGG-16, we will only change the last layer.
We perform the following operations:
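Roughly, that means pooling the base output, adding a dense layer with dropout, and finishing with a sigmoid output before compiling (the sizes below are illustrative):
x = layers.GlobalAveragePooling2D()(base_model.output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(1, activation='sigmoid')(x)

model = Model(base_model.input, x)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])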
We will then fit the image classification model:
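As with VGG-16, training for 10 epochs looks like this:
inception_history = model.fit(train_generator,
                              validation_data=validation_generator,
                              epochs=10)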
As a result, we get 96% validation accuracy in 10 epochs. Also, note that this model is much faster than VGG16: each epoch takes only around a quarter of the time a VGG16 epoch does. Of course, you can always experiment with different hyperparameter values and see how much better or worse it performs.
I liked studying the Inception model. While most models at the time were merely sequential and followed the premise that the deeper and larger the model, the better it would perform, Inception and its variants broke this mold. Just like its predecessors, Inceptionv3 achieved top results: the paper, presented at CVPR 2016, reports a top-5 error rate of only around 3.5%.
Here is a link to the paper: Rethinking the Inception Architecture for Computer Vision
Just like Inceptionv3, ResNet50 is not the first image classification model from the ResNet family. The original model, the Residual net or ResNet, was another milestone in the CV domain back in 2015.
The main motivation behind this model was to avoid the drop in accuracy that occurs as models become deeper. Additionally, if you are familiar with Gradient Descent, you will have come across the Vanishing Gradient issue; the ResNet model aimed to tackle this as well. Here is the architecture of the earliest variant, ResNet34 (ResNet50 follows a similar technique, just with more layers):
You can see that after starting with a single convolutional layer and Max Pooling, there are 4 similar stages with varying filter counts, all of them using the 3×3 convolution operation. Also, after every 2 convolutions, we bypass/skip the layers in between. This is the main concept behind ResNet models. These skipped connections are called ‘identity shortcut connections’ and use what are called residual blocks:
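To make the idea concrete, here is an illustrative Keras sketch of a basic residual block (not taken from the article): two 3×3 convolutions whose output is added back to the input through an identity shortcut. It assumes filters matches the number of channels in the input so the addition is valid:
def residual_block(x, filters):
    # filters must equal the channel count of x for the Add() to work
    shortcut = x
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.Add()([shortcut, y])  # the identity shortcut connection
    return layers.Activation('relu')(y)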
In simple terms, the authors of ResNet propose that fitting a residual mapping is much easier than fitting the actual mapping, and they apply this idea to all the layers. Another interesting point is that the authors argue that stacking more layers should not make the model perform worse.
This is contrary to what we saw in Inception and is similar to VGG16 in the sense that ResNet simply stacks layers on top of each other; what changes is the underlying mapping being learned.
The ResNet model has many variants, the deepest of which is ResNet152. The following is the architecture of the ResNet family in terms of the layers used:
Let us now use ResNet50 for image classification on our dataset:
Step 1: Data Augmentation and Generators
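The same augmentation pipeline is reused, this time with 224×224 images, which is what ResNet50 expects:
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(224, 224), batch_size=20, class_mode='binary')

validation_generator = validation_datagen.flow_from_directory(
    validation_dir, target_size=(224, 224), batch_size=20, class_mode='binary')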
Step 2: Import the base model
Again, we are using only the basic ResNet model, so we will keep the layers frozen and only modify the last layer:
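A sketch of loading and freezing the ResNet50 base:
from tensorflow.keras.applications.resnet50 import ResNet50

base_model = ResNet50(input_shape=(224, 224, 3),
                      include_top=False,
                      weights='imagenet')

for layer in base_model.layers:
    layer.trainable = False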
Step 3: Build and Compile the Model
I would like to show you even shorter code for using the ResNet50 model. We will use this pre-trained model as a layer in a Sequential model and add a single fully connected layer on top.
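A minimal version of that shorter approach, with the frozen base as the first layer of a Sequential model:
model = tf.keras.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation='sigmoid')
])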
We compile the image classification model, and this time, let us try the SGD optimizer:
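For example (the learning rate and momentum values are illustrative):
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
              loss='binary_crossentropy',
              metrics=['acc'])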
Step 4: Fitting the model
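Fitting again runs for 10 epochs on the same generators:
resnet_history = model.fit(train_generator,
                           validation_data=validation_generator,
                           epochs=10)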
The following is the result we get-
You can see how well it performs on our dataset, which is why ResNet50 is one of the most widely used pre-trained models. Like VGG, it also has other variations, as seen in the table above. Remarkably, ResNet not only has its own variants but has also spawned a series of architectures based on it, including ResNeXt and ResNet ensembles. ResNet50 remains among the most popular image classification models and achieved a top-5 error rate of around 5%.
The following is the link to the paper: Deep Residual Learning for Image Recognition
We finally come to the latest of these four models, the one that has caused waves in this domain, and of course, it is from Google. In EfficientNet, the authors propose a new scaling method called Compound Scaling. The long and short of it is this: earlier models like ResNet follow the conventional approach of scaling the dimensions arbitrarily and adding more layers.
However, the paper proposes that if we simultaneously scale the dimensions by a fixed amount and do so uniformly, we achieve much better performance. The scaling coefficients can, in fact, be decided by the user.
Though this scaling technique can be used for any CNN-based model, the authors started off with their own baseline model called EfficientNetB0:
MBConv stands for mobile inverted bottleneck convolution (similar to MobileNetv2). The authors also propose the Compound Scaling rule with the following scaling coefficients:
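From the EfficientNet paper, the rule ties depth, width, and resolution to a single compound coefficient φ, with the constants α, β, and γ found by a small grid search:
depth: d = α^φ
width: w = β^φ
resolution: r = γ^φ
subject to α · β² · γ² ≈ 2, with α ≥ 1, β ≥ 1, γ ≥ 1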
This formula is used to build a family of EfficientNets, from EfficientNetB0 to EfficientNetB7. The following is a simple graph showing the comparative performance of this family vis-a-vis other popular models:
As you can see, even the baseline B0 model starts at a much higher accuracy, which only increases as we scale up, and that too with fewer parameters. For instance, EfficientNetB0 has only 5.3 million parameters!
The simplest way to implement EfficientNet is to install it. The rest of the steps are similar to what we have seen above.
Installing EfficientNet:
!pip install -U efficientnet
Import it
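Assuming the pip package installed above (newer TensorFlow versions also ship tf.keras.applications.EfficientNetB0 as an alternative):
import efficientnet.tfkeras as efn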
Step 1: Image Augmentation
We will use the same image dimensions as we did for VGG16 and ResNet50. By now, you will be familiar with the augmentation process:
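The generators mirror the ResNet50 ones exactly:
train_generator = train_datagen.flow_from_directory(
    train_dir, target_size=(224, 224), batch_size=20, class_mode='binary')

validation_generator = validation_datagen.flow_from_directory(
    validation_dir, target_size=(224, 224), batch_size=20, class_mode='binary')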
Step 2: Loading the Base Model
We will use the B0 version of EfficientNet since it is the simplest of the 8. I urge you to experiment with the rest of the models, though do keep in mind that they are becoming increasingly complex, which might not be best suited for a simple binary classification task.
Again, let us freeze the layers:
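A sketch using the efn import from above:
base_model = efn.EfficientNetB0(input_shape=(224, 224, 3),
                                include_top=False,
                                weights='imagenet')

for layer in base_model.layers:
    layer.trainable = False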
Step 3: Build the model
Just like Inceptionv3, we will perform these steps at the final layer:
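That is, pool the base output, apply dropout, and end with a sigmoid output (the sizes are illustrative):
x = layers.GlobalAveragePooling2D()(base_model.output)
x = layers.Dropout(0.2)(x)
x = layers.Dense(1, activation='sigmoid')(x)

model = Model(base_model.input, x)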
Step 4: Compile and Fit
Let us again use the RMSProp Optimiser, though here, I have introduced a decay parameter:
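One way to set this up; note that on TensorFlow 2.11+ the decay argument moved to the legacy optimizer namespace, which the fallback below handles:
try:
    optimizer = tf.keras.optimizers.legacy.RMSprop(learning_rate=1e-4, decay=1e-6)
except AttributeError:  # older TF versions have no legacy namespace
    optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-4, decay=1e-6)

model.compile(optimizer=optimizer,
              loss='binary_crossentropy',
              metrics=['acc'])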
We finally fit the model on our data:
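Finally, the familiar training call:
efficientnet_history = model.fit(train_generator,
                                 validation_data=validation_generator,
                                 epochs=10)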
There we go: we achieved a whopping 98% accuracy on our validation set in only 10 epochs. I urge you to try training the larger dataset with EfficientNetB7 and share the results below.
The following is the link to the paper: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
In this article, we covered the top state-of-the-art pre-trained models for image classification. Here is a handy table for you to refer to these models and their performance:
The exploration of pre-trained models for image classification reveals the remarkable advancements in the field of Computer Vision. Each model discussed—VGG-16, Inception, ResNet50, and EfficientNet—represents significant strides in achieving near-human-level accuracy in recognizing and categorizing images. Thus, pre-trained models have transformed the landscape of image classification, making state-of-the-art techniques accessible for a wide range of applications. By understanding and utilizing these models, practitioners can significantly enhance the efficiency and accuracy of their computer vision tasks, paving the way for further innovations and applications in the field. As the technology continues to evolve, it will be exciting to see how future models build upon these foundations to achieve even greater feats in artificial intelligence.
Hope you like this guide about the best image classification models and how they enhance our understanding of visual data through advanced techniques.
A. Pre-trained models for image classification are models previously trained on large datasets like ImageNet. They can be fine-tuned for specific tasks, saving time and computational resources.
A. For medical image classification, pre-trained models like VGG16, ResNet, and DenseNet, particularly those adapted with domain-specific datasets such as CheXNet for chest X-rays, are effective.
A. Pre-training models involves training on a large, general dataset before fine-tuning on a specific task. This leverages learned features and accelerates convergence.
A. The best models for image classification include CNN-based architectures like ResNet, VGG, Inception, and EfficientNet, known for their high accuracy and efficiency.
Hi, thank you for this article. I tried the InceptionV3 model on my custom data but I found drastically bad predictions. I found out that the model was predicting 1 class 99% of the time. I followed the step in this article and tried changing parameters also but same problem. What could the issue be?
I executed all the above code in Google Colab. I am doing research with medical images. These medical images are grayscale, but the above networks need colour images with a depth of 3. How can I convert the gray images to colour, or to depth 3, without any distortion in the image?
Hi, Mrs. Purva Huilgol, what great content, thanks for it! I've been trying the code on my own case. It all works for VGG-16, Inception and ResNet50, but there are some errors for EfficientNet. When I try to build the model, it says that name 'model' is not defined. When I try to add the model from the ResNet50 step, another error appears: TypeError: ('Keyword argument not understood:', 'inputs'). Is there any reason why this happens? However, I'm new to Python. Sorry for my lack of experience.