Gradient-weighted Class Activation Mapping (Grad-CAM) is a technique used in deep learning to visualize and understand the decisions made by a CNN. It turns an otherwise opaque model into a more transparent one by producing a heatmap that highlights the parts of an image the network attends to. How does it work? Grad-CAM measures the importance of each feature map for a specific class by analyzing the gradients flowing into the last convolutional layer.
Grad-CAM interprets CNNs by revealing which evidence drives their predictions, which aids debugging and helps improve performance. It is class-discriminative and localizes the relevant regions, but it does not capture fine pixel-level detail the way high-resolution methods do.
Grad-CAM stands for Gradient-weighted Class Activation Mapping. It is a technique used in deep learning, particularly with convolutional neural networks (CNNs), to understand which regions of an input image are important for the network's prediction of a particular class. Grad-CAM retains the architecture of deep models, offering interpretability without compromising accuracy: it is a class-discriminative localization technique that generates visual explanations for CNN-based networks without architectural changes or re-training. Compared with other visualization methods, a good visual explanation should be both class-discriminative and high-resolution, and Grad-CAM provides the class discrimination that many earlier methods lack.
Grad-CAM generates a heatmap that highlights the crucial regions of an image by analyzing the gradients flowing into the last convolutional layer of the CNN. By computing the gradient of the predicted class score with respect to the feature maps of that layer, Grad-CAM determines how important each feature map is for the class in question.
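Concretely, in the notation of the original Grad-CAM paper (Selvaraju et al.), the weight of the \(k\)-th feature map for class \(c\) is the spatially averaged gradient, and the map itself is a ReLU-gated weighted sum of the feature maps:

$$\alpha_k^c = \frac{1}{Z}\sum_i\sum_j \frac{\partial y^c}{\partial A_{ij}^k}, \qquad L_{\text{Grad-CAM}}^c = \mathrm{ReLU}\Big(\sum_k \alpha_k^c A^k\Big)$$

Here \(y^c\) is the score for class \(c\) before the softmax, \(A^k\) is the \(k\)-th feature map of the last convolutional layer, and \(Z\) is the number of spatial positions in that map. The ReLU keeps only the features that influence the class positively.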
Grad-CAM visualization is required because it addresses the critical need for interpretability in deep learning models. It provides a way to visualize and comprehend how these models arrive at their predictions, without sacrificing their accuracy on various computer vision tasks.
+----------------------------------+
|   Convolutional Neural Network   |
+----------------------------------+
                 |
                 v
        +----------------+
        |   Prediction   |
        +----------------+
                 |
                 v
        +----------------+
        |    Grad-CAM    |
        +----------------+
                 |
                 v
      +----------------------+
      | Class Activation Map |
      +----------------------+
Grad-CAM (Gradient-weighted Class Activation Mapping) visualization is a technique used in computer vision, specifically with deep learning models based on Convolutional Neural Networks (CNNs). It addresses the challenge of interpretability in these complex models by highlighting the regions of an input image that contribute to the network's predictions.
Grad-CAM generates heatmaps known as Class Activation Maps. These maps highlight the regions in an image responsible for specific predictions made by the CNN.
It does so by analyzing gradients flowing into the final convolutional layer of the CNN, focusing on how these gradients impact class predictions.
Grad-CAM stands out among visualization techniques due to its class-discriminative nature. Unlike other methods, it provides visualizations specific to particular predicted classes, enhancing interpretability.
Grad-CAM computes gradients of predicted class scores with respect to the activations in the last convolutional layer. These gradients signify the importance of each activation map for predicting specific classes.
It precisely identifies and highlights regions in input images that significantly contribute to predictions for specific classes, enabling a deeper understanding of model decisions.
Grad-CAM’s adaptability spans various CNN architectures without requiring architectural changes or retraining. It applies to models handling diverse inputs and outputs, ensuring broad usability across tasks.
Grad-CAM visualization allows for understanding the decision-making processes of complex models without sacrificing their accuracy, striking a balance between model interpretability and high performance.
The following code generates Grad-CAM heatmaps for a pre-trained Xception model in Keras. It proceeds in stages: importing the libraries and configuring the model, loading the image, computing the heatmap, and finally overlaying the heatmap on the original image.
from IPython.display import Image, display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import keras
model_builder = keras.applications.xception.Xception
img_size = (299, 299)
preprocess_input = keras.applications.xception.preprocess_input
decode_predictions = keras.applications.xception.decode_predictions
last_conv_layer_name = "block14_sepconv2_act"
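If you adapt this setup to a different architecture, the last convolutional layer will have a different name. As a convenience (a small sketch, not part of the original walkthrough; find_last_conv_layer_name is a hypothetical helper), you can locate it by searching the model for the last layer with a 4D output:

## Hypothetical helper: return the name of the last layer whose output is
## 4D (batch, height, width, channels), i.e. the last conv-type layer
def find_last_conv_layer_name(model):
    for layer in reversed(model.layers):
        if len(layer.output.shape) == 4:
            return layer.name
    raise ValueError("No 4D-output (convolutional) layer found.")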
## The local path to our target image
img_path = "<your_image_path>"
display(Image(img_path))
def get_img_array(img_path, size):
    ## `img` is a PIL image
    img = keras.utils.load_img(img_path, target_size=size)
    array = keras.utils.img_to_array(img)
    ## We add a dimension to transform our array into a "batch" of size 1
    array = np.expand_dims(array, axis=0)
    return array
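For instance (a quick check using a hypothetical file name), the function returns a batch of one image matching the Xception input size:

## Hypothetical example path; any local image works
batch = get_img_array("cat.jpg", size=img_size)
print(batch.shape)  ## (1, 299, 299, 3)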
def make_gradcam_heatmap(img_array, model, last_conv_layer_name, pred_index=None):
    ## First, we create a model that maps the input image to the activations
    ## of the last conv layer as well as the output predictions
    grad_model = keras.models.Model(
        model.inputs, [model.get_layer(last_conv_layer_name).output, model.output]
    )
    ## Then, we compute the gradient of the top predicted class for our input
    ## image with respect to the activations of the last conv layer
    with tf.GradientTape() as tape:
        last_conv_layer_output, preds = grad_model(img_array)
        if pred_index is None:
            pred_index = tf.argmax(preds[0])
        class_channel = preds[:, pred_index]
    ## This is the gradient of the class score with regard to the output
    ## feature maps of the last conv layer
    grads = tape.gradient(class_channel, last_conv_layer_output)
    ## This is a vector where each entry is the mean intensity of the gradient
    ## over a specific feature map channel
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
    ## Weight each channel of the last conv layer's output by "how important
    ## this channel is" for the predicted class, then sum over the channels
    ## to obtain the class activation heatmap
    last_conv_layer_output = last_conv_layer_output[0]
    heatmap = last_conv_layer_output @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)
    ## For visualization purposes, clip negatives and normalize to [0, 1]
    heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)
    return heatmap.numpy()
Output: the input image at img_path is displayed.
Creating the Heatmap for the image with the model
## Preparing the image
img_array = preprocess_input(get_img_array(img_path, size=img_size))
## Building the model with ImageNet-pretrained weights
model = model_builder(weights="imagenet")
## Remove the last layer's softmax so gradients are taken on raw class scores
model.layers[-1].activation = None
preds = model.predict(img_array)
print("Predicted:", decode_predictions(preds, top=1)[0])
## Generate the class activation heatmap
heatmap = make_gradcam_heatmap(img_array, model, last_conv_layer_name)
## Visualization of the heatmap
plt.matshow(heatmap)
plt.show()
Output: a coarse heatmap at the spatial resolution of the last convolutional layer.
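The heatmap is deliberately coarse: its spatial size matches the last convolutional layer's feature maps, and its values were normalized to [0, 1] inside make_gradcam_heatmap. A quick check, assuming the Xception setup above:

print(heatmap.shape)  ## (10, 10) for Xception with a 299x299 input
print(heatmap.min(), heatmap.max())  ## values lie in [0, 1]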
The save_and_display_gradcam function takes an image path and a Grad-CAM heatmap. It overlays the heatmap on the original image, then saves and displays the new visualization.
def save_and_display_gradcam(img_path, heatmap, cam_path="save_cam_image.jpg", alpha=0.4):
    ## Loading the original image
    img = keras.utils.load_img(img_path)
    img = keras.utils.img_to_array(img)
    ## Rescale the heatmap to the range 0-255
    heatmap = np.uint8(255 * heatmap)
    ## Use the jet colormap to colorize the heatmap
    jet = mpl.colormaps["jet"]
    jet_colors = jet(np.arange(256))[:, :3]
    jet_heatmap = jet_colors[heatmap]
    ## Create an RGB image from the colorized heatmap and resize it
    ## to the original image size
    jet_heatmap = keras.utils.array_to_img(jet_heatmap)
    jet_heatmap = jet_heatmap.resize((img.shape[1], img.shape[0]))
    jet_heatmap = keras.utils.img_to_array(jet_heatmap)
    ## Superimpose the heatmap on the original image
    superimposed_img = jet_heatmap * alpha + img
    superimposed_img = keras.utils.array_to_img(superimposed_img)
    ## Save the superimposed image
    superimposed_img.save(cam_path)
    ## Display the Grad-CAM result
    display(Image(cam_path))
save_and_display_gradcam(img_path, heatmap)
Output: the original image with the Grad-CAM heatmap superimposed.
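Since save_and_display_gradcam exposes cam_path and alpha, you can tune the overlay; for example, a stronger heatmap saved under a different name:

## alpha controls the opacity of the heatmap overlay
save_and_display_gradcam(img_path, heatmap, cam_path="strong_cam_image.jpg", alpha=0.6)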
Grad-CAM visualization has several applications in computer vision and model interpretability, from understanding and debugging model predictions to surfacing biases in datasets and building trust in classifiers. Let us now look at some challenges and limitations of using Grad-CAM visualization.
Gradient-weighted Class Activation Mapping (Grad-CAM) is designed to enhance the interpretability of CNN-based models by generating visual explanations that shed light on their decision-making process. Combining Grad-CAM with existing high-resolution visualization methods led to Guided Grad-CAM visualizations, which offer superior interpretability and fidelity to the original model. Grad-CAM is thus a valuable tool for enhancing the interpretability of deep learning models, particularly CNNs, but it comes with its own set of challenges and limitations.
Human studies demonstrated the effectiveness of these visualizations, showing improved class discrimination, greater transparency and trust in classifiers, and the identification of biases within datasets. The technique has also been used to identify crucial neurons and to provide textual explanations for model decisions, contributing to a more comprehensive understanding of model behavior. On the other hand, Grad-CAM's reliance on gradients, the subjectivity involved in interpreting its heatmaps, and its computational overhead pose challenges, limiting its usability in real-time applications and in highly complex models.
Q1. What is Grad-CAM in deep learning?
A. Grad-CAM (Gradient-weighted Class Activation Mapping) is a deep learning technique that helps visualize where a convolutional neural network (CNN) is looking when making a prediction. Grad-CAM creates heatmaps that highlight important regions in an image. These regions influence the model's decision, which helps in understanding and interpreting how the model works.
Q2. What is the difference between CAM and Grad-CAM?
A. CAM (Class Activation Mapping) requires modifying the CNN architecture by adding a global average pooling layer before the final classification layer, which limits its use to specific types of models. Grad-CAM can be applied to any CNN without changing the architecture: it uses the gradients of any target class flowing into the final convolutional layer to produce a heatmap, making it more flexible and widely applicable.
Q3. What are the advantages of Grad-CAM?
A. Model-Agnostic: Works with any CNN architecture.
No Need for Model Modification: Doesn’t require changes to the model structure.
Interpretability: Helps in understanding which parts of the image are important for the model’s predictions, useful for debugging and improving models.
Versatile: Can be used for various tasks like classification, captioning, and more.
Q4. How does Grad-CAM show where a CNN focuses?
A. Grad-CAM shows where a CNN focuses by generating heatmaps that highlight an image's most relevant regions for predicting a specific class. These heatmaps visually indicate which parts of the image contribute most to the model's decision.