Are you working with image data? There is so much you can do with computer vision algorithms.
In this article, we will talk about multi-label image classification, utilizing the power of deep learning and advanced methodologies. Instead of relying on conventional toy datasets, we draw inspiration from real-world scenarios, particularly movie and TV series posters, which inherently contain diverse visual elements representing various genres.
But how do we navigate this complex task effectively? Fear not; we will dig deep into the intricacies of building a multi-label image classification model, leveraging cutting-edge technologies such as convolutional neural networks (CNNs) and transfer learning. Along the way, we harness the capabilities of popular frameworks like TensorFlow, PyTorch, and scikit-learn, using their APIs to streamline development and implementation.
By leveraging transfer learning and pre-trained models, we expedite the training process and enhance the efficiency of our classifiers. Additionally, we explore the resources available on platforms like Kaggle, tapping into rich datasets and collaborative communities to fuel our experiments.
Whether you’re a seasoned practitioner or a curious enthusiast, join us as we unravel the mysteries of multi-label image classification, equipped with tensors, Kaggle datasets, and the latest advancements in deep learning.
Excited? Good, let’s dive in!
Let’s understand the concept of multi-label image classification with an intuitive example. Check out the below image:
The object in image 1 is a car. That was a no-brainer. However, there is no car in image 2 – only a group of buildings. Can you see where we are going with this? We have classified the images into two classes, i.e., car and non-car.
When we have only two classes in which the images can be classified, this is known as a binary image classification problem.
Let’s look at one more image:
How many objects did you identify? There are too many – a house, a pond with a fountain, trees, rocks, etc. So,
When we can classify an image into more than one class (as in the image above), it is known as a multi-label image classification problem.
Here’s a catch: most of us confuse multi-label and multi-class image classification. Even I was bamboozled the first time I came across these terms. Now that I understand the two topics better, let me clarify the difference for you.
Suppose we are given images of animals to be classified into corresponding categories. For ease of understanding, let’s assume there are a total of 4 categories (cat, dog, rabbit, and parrot) in which a given image can be classified. Now, there can be two scenarios:
Let’s understand each scenario through examples, starting with the first one:
Here, we have images that contain only a single object. The keen-eyed among you will have noticed 4 different types of objects (animals) in this collection.
Each image here can only be classified as a cat, dog, parrot, or rabbit. There are no instances where a single image will belong to more than one category.
1. There are more than two categories into which the images can be classified.
2. No image belongs to more than one category.
If both of the above conditions are satisfied, it is referred to as a multi-class image classification problem.
Now, let’s consider the second scenario – check out the below images:
These are all labels of the given images. Each image here belongs to more than one class; hence, it is a multi-label image classification problem.
These two scenarios should help you understand the difference between multi-class and multi-label image classification. Connect with me in the comments section below this article if you need any further clarification.
Before we jump into the next section, I recommend going through this article – Build your First Image Classification Model in just 10 Minutes! It will help you understand how to solve a multi-class image classification problem.
Now that we have an intuition about multi-label image classification, let’s dive into the steps you should follow to solve such a problem.
The first step is to get our data in a structured format. This applies to binary and multi-class image classification as well.
You should have a folder containing all the images you want to train your model. For training this model, we also require the true labels of images. So, you should also have a .csv file that contains the names of all the training images and their corresponding true labels.
We will learn how to create this .csv file later in this article. For now, remember that the data should be in a particular format. Once the data is ready, we can divide the further steps as follows:
First, load all the images and then pre-process them per your project’s requirement. We create a validation set to check how our model will perform on unseen data (test data). We train our model on the training set and validate it using the validation set (standard machine learning practice).
The next step is to define the architecture of the model. This includes deciding the number of hidden layers, neurons in each layer, the activation function, etc.
Time to train our model on the training set! We pass the training images and their corresponding true labels to train the model. We also pass the validation images here to help us validate how well the model performs on unseen data.
Finally, we use the trained model to get predictions on new images.
The pre-processing steps for a multi-label image classification task will be similar to that of a multi-class problem. The key difference is in the step where we define the model architecture.
We use a softmax activation function in the output layer of a multi-class image classification model because we want to maximize the probability of a single class for each image. With softmax, as the probability of one class increases, the probabilities of the other classes decrease, so the class probabilities are dependent on each other.
But in the case of multi-label image classification, a single image can have more than one label, so we want the class probabilities to be independent of each other. The softmax activation function is therefore not appropriate. Instead, we can use the sigmoid activation function, which predicts the probability of each class independently. In effect, the output layer behaves like n independent binary classifiers (where n is the total number of classes), one per class.
The sigmoid activation function turns the multi-label problem into n binary classification problems: for each image, we get a probability that it belongs to class 1, a probability that it belongs to class 2, and so on. Since we have converted the task into n binary classification problems, we use the binary_crossentropy loss, which we aim to minimize to improve the performance of the model.
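The difference between the two activation functions is easy to see numerically. The sketch below (plain NumPy, with made-up logits) shows that softmax probabilities are coupled and sum to 1, while sigmoid probabilities are computed independently:

```python
import numpy as np

def softmax(z):
    """Softmax: probabilities are coupled and always sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    """Sigmoid: each probability is computed independently."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical raw scores (logits) for 4 classes
logits = np.array([2.0, 1.0, 0.5, -1.0])

p_softmax = softmax(logits)
p_sigmoid = sigmoid(logits)

print(p_softmax, p_softmax.sum())  # sums to exactly 1
print(p_sigmoid, p_sigmoid.sum())  # each in (0, 1); the sum is unconstrained
```

Because each sigmoid output is independent, several classes can all have high probability at once, which is exactly what a multi-label problem needs.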
This is the major change we must make while defining the model architecture for a multi-label image classification problem. The training part will be similar to that of a multi-class problem. We will pass the training images, their corresponding true labels, and the validation set to validate our model’s performance.
Finally, we will take a new image and use the trained model to predict the labels for this image. With me so far?
Congratulations on making it this far! Your reward – solving an awesome multi-label image classification problem in Python. That’s right – time to power up your favorite Python IDE!
Let’s set up the problem statement. We aim to predict the genre of a movie using just its poster image. Can you guess why it is a multi-label image classification problem? Think about it for a moment before you look below.
A movie can belong to more than one genre, right? It doesn’t just have to belong to one category, like action or comedy. The movie can be a combination of two or more genres. Hence, multi-label image classification.
The dataset we’ll be using contains the poster images of several multi-genre movies. I have made some changes to the dataset and converted it into a structured format, i.e., a folder containing the images and a .csv file for the true labels. You can download the structured dataset from here. Below are a few posters from our dataset:
You can download the original dataset along with the ground truth values here if you wish.
Let’s get coding!
First, import all the required Python libraries:
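A plausible set of imports for this walkthrough, assuming a TensorFlow/Keras setup (the exact list in the original code may differ slightly), would be:

```python
# Data handling and visualization
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Deep learning (assumes a TensorFlow/Keras installation)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import load_img, img_to_array

# Creating the validation split
from sklearn.model_selection import train_test_split
```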
Now, read the .csv file and look at the first five rows:
There are 27 columns in this file. Let’s print the names of these columns:
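The layout described here is an Id column, a Genre column, and 25 one-hot genre columns. The snippet below illustrates the reading step on a tiny in-memory sample with the same assumed layout (the real project would read the downloaded file instead, and the filename `train.csv` is an assumption):

```python
import io
import pandas as pd

# In the real project you would read the downloaded file, e.g.:
#   train = pd.read_csv('train.csv')
# Here we use a tiny in-memory sample with the same assumed layout.
csv_data = io.StringIO(
    "Id,Genre,Action,Comedy,Drama\n"
    "tt0001,\"['Comedy', 'Drama']\",0,1,1\n"
    "tt0002,\"['Action']\",1,0,0\n"
)
train = pd.read_csv(csv_data)

print(train.head())     # first rows of the file
print(train.columns)    # column names, as printed in the article
```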
The genre column contains the list for each image, which specifies the genre of that movie. So, from the head of the .csv file, the genre of the first image is Comedy and Drama.
The remaining 25 columns are one-hot encoded. So, if a movie belongs to the Action genre, the value in that column will be 1; otherwise, it will be 0. An image can belong to any of 25 different genres.
We will build a model that returns the genre(s) of a given movie poster. But before that, do you remember the first step for building any image classification model?
That’s right – loading and preprocessing the data. So, let’s read in all the training images:
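The loading loop can be sketched roughly as below. Since the article's own listing is not reproduced here, this version uses PIL directly and creates one dummy "poster" on disk so it runs end to end; in the real project the loop iterates over every filename in the .csv:

```python
import numpy as np
from PIL import Image

# Create one dummy 'poster' so the loading step is runnable here;
# the real loop reads every image listed in the .csv file.
Image.new('RGB', (300, 400), color=(120, 40, 200)).save('poster_0.jpg')

train_image = []
for img_name in ['poster_0.jpg']:                    # real loop: one entry per csv row
    img = Image.open(img_name).resize((300, 400))    # PIL size is (width, height)
    arr = np.asarray(img, dtype='float32') / 255.0   # scale pixel values to [0, 1]
    train_image.append(arr)

X = np.array(train_image)
print(X.shape)   # (n_images, 400, 300, 3)
```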
A quick look at the shape of the array:
There are 7254 poster images, and all the images have been converted to a shape of (400, 300, 3). Let’s plot and visualize one of the images:
This is the poster for the movie ‘Trading Places’. Let’s also print the genre of this movie:
This movie has a single genre – Comedy. Our model would next require the true label(s) for all these images. Can you guess the shape of the true labels for 7254 images?
Let’s see. We know there are a total of 25 possible genres. We will have 25 targets for each image, i.e., whether the movie belongs to that genre or not. So, all these 25 targets will be either 0 or 1.
We will remove the ID and genre columns from the train file and convert the remaining columns to an array, which will be the target for our images:
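This step can be sketched as follows, using a tiny stand-in DataFrame with the same assumed layout (Id, Genre, then one-hot genre columns); on the full dataset the resulting array has shape (7254, 25):

```python
import pandas as pd
import numpy as np

# Tiny stand-in for the real train DataFrame
train = pd.DataFrame({
    'Id':     ['tt0001', 'tt0002'],
    'Genre':  ["['Comedy', 'Drama']", "['Action']"],
    'Action': [0, 1],
    'Comedy': [1, 0],
    'Drama':  [1, 0],
})

# Drop the identifier and the free-text genre list; keep the one-hot targets
y = np.array(train.drop(['Id', 'Genre'], axis=1))
print(y.shape)   # (n_images, n_genres) -- (7254, 25) for the full dataset
```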
The shape of the output array is (7254, 25) as we expected. Now, let’s create a validation set that will help us check the performance of our model on unseen data. We will randomly separate 10% of the images as our validation set:
The next step is to define the architecture of our model. The output layer will have 25 neurons (equal to the number of genres), and we’ll use sigmoid as the activation function.
I will use a certain architecture (given below) to solve this problem. You can also modify this architecture by changing the number of hidden layers, activation functions, and other hyperparameters.
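The exact architecture is not reproduced here, but a minimal CNN in the spirit the article describes (sigmoid output over 25 genres; the layer sizes below are illustrative assumptions, not the author's) might look like:

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Input(shape=(400, 300, 3)),                # poster arrays from the loading step
    Conv2D(16, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(25, activation='sigmoid'),           # one independent probability per genre
])
model.summary()
```

The key part is the output layer: 25 neurons with sigmoid activation, so each genre gets its own independent probability.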
Let’s print our model summary:
Quite a lot of parameters to learn! Now, compile the model. I’ll use binary_crossentropy as the loss function and ADAM as the optimizer (again, you can use other optimizers as well):
Finally, we are at the most interesting part – training the model. We will train the model for 10 epochs and also pass the validation data that we created earlier to validate the model’s performance:
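The compile-and-fit step can be sketched as below. To keep the sketch fast and self-contained it uses a tiny stand-in model and random data; in the real project you would compile the CNN defined earlier and fit it on the poster arrays for 10 epochs:

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Tiny stand-in model so the sketch runs quickly
model = Sequential([
    Input(shape=(40, 30, 3)),
    Flatten(),
    Dense(25, activation='sigmoid'),
])

# binary_crossentropy + Adam, as described in the article
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Random stand-in data in place of the real posters and one-hot targets
X_train = np.random.rand(32, 40, 30, 3).astype('float32')
y_train = np.random.randint(0, 2, size=(32, 25))
X_val = np.random.rand(8, 40, 30, 3).astype('float32')
y_val = np.random.randint(0, 2, size=(8, 25))

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=2, batch_size=16, verbose=0)
print(history.history['loss'])   # training loss per epoch
```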
We can see that the training loss has been reduced to 0.24, and the validation loss is also in sync. What’s next? It’s time to make predictions!
The Game of Thrones (GoT) and Avengers fans – this one’s for you. Let’s take the posters for GoT and Avengers and feed them to our model. Download the poster for GOT and Avengers before proceeding.
Before making predictions, we need to preprocess these images using the same steps we saw earlier.
Now, we will predict the genre for these posters using our trained model. The model will tell us the probability for each genre, and we will take the top 3 predictions from that.
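Extracting the top 3 genres from the predicted probability vector is a simple `argsort`. The genre names and probabilities below are made up for illustration; in the real project they come from the one-hot column names and from `model.predict`:

```python
import numpy as np

# Hypothetical genre names and a hypothetical probability vector
genres = np.array(['Action', 'Comedy', 'Drama', 'Horror', 'Romance', 'Thriller'])
proba = np.array([0.61, 0.10, 0.83, 0.05, 0.12, 0.70])

# Indices of the three highest probabilities, best first
top3 = np.argsort(proba)[::-1][:3]
print(genres[top3])   # ['Drama' 'Thriller' 'Action']
```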
Impressive! Our model suggests Drama, Thriller, and Action genres for Game of Thrones. That classifies GoT pretty well in my opinion. Let’s try our model on the Avengers poster. Preprocess the image:
And then make the predictions:
The genres our model comes up with are Drama, Action, and Thriller. Again, these are pretty accurate results. Can the model perform equally well for Bollywood movies? Let’s find out. We will use this Golmaal 3 poster.
You know what to do at this stage – load and preprocess the image:
And then predict the genre for this poster:
Golmaal 3 was a comedy and our model has predicted it as the topmost genre. The other predicted genres are Drama and Romance – a relatively accurate assessment. We can see that the model is able to predict the genres just by seeing their poster.
This is how we can solve a multi-label image classification problem. Our model performed well even though we only had around 7000 images for training it.
You can try to collect more posters for training. I suggest building the dataset so that all the genre categories have a roughly equal distribution. Why?
Well, if a certain genre repeats in most training images, our model might overfit that genre. And for every new image, the model might predict the same genre. To overcome this problem, you should have an equal distribution of genre categories.
These are some of the key points you can try to improve your model’s performance. Any other you can think of? Let me know!
This article delved into multi-label image classification, exploring its nuances and applications. We addressed the complexity of predicting multiple genres from movie posters by leveraging deep learning techniques, particularly the sigmoid activation function and binary_crossentropy loss. Through meticulous annotation and preprocessing of training data, we constructed a robust classifier capable of discerning various genres with impressive accuracy.
Our model, trained on a diverse dataset, demonstrated its prowess by accurately predicting genres for iconic movies like Game of Thrones and Avengers. Furthermore, we highlighted the significance of data distribution in enhancing model performance, emphasizing the need for balanced training datasets. This journey elucidated the power and versatility of multi-label image classification beyond genre prediction, offering insights into its broader applications, such as automatic image tagging. As we conclude, we invite readers to embark on their experimentation, exploring novel avenues and pushing the boundaries of this fascinating field.
Q1. What is multi-label classification in machine learning?
Ans. Multi-label classification in machine learning refers to assigning multiple labels to instances. Unlike multi-class classification, where each instance is assigned only one label, multi-label classification allows for multiple labels per instance. This is common in scenarios like image datasets where an image may contain multiple objects. Evaluation metrics such as the F1 score can be used to measure the performance of multi-label classification models trained using frameworks like Keras.
Q2. Why is the sigmoid activation function used in multi-label image classification?
Ans. The sigmoid activation function is used in multi-label image classification because it allows for independent probability predictions for each class. Unlike softmax, which is used in multi-class classification and enforces that probabilities sum up to one across all classes, sigmoid treats each class prediction independently. This is crucial in multi-label classification tasks where an image can belong to multiple classes simultaneously. Using sigmoid, the model can predict the presence or absence of each label separately, effectively transforming the problem into a series of binary classification tasks.
Q3. What are the challenges of multi-label image classification compared to single-label classification?
Ans. In multi-label image classification, compared to single-label classification, challenges arise due to the complexity of predicting multiple labels simultaneously. Annotating data becomes more intricate, requiring comprehensive labeling for each class present. Deep learning classifiers such as CNNs must handle this complexity efficiently, often necessitating specialized techniques like sigmoid activation and binary cross-entropy loss. Evaluation metrics like the F1 score become crucial in accurately assessing the classifier’s performance. These challenges underscore the heightened intricacy of multi-label classification tasks in computer vision and machine learning.