The Swin Transformer is a significant innovation in the field of vision transformers. Transformers have demonstrated exceptional performance across a wide range of tasks, and among vision transformers the Swin Transformer stands out as a general-purpose backbone for computer vision, offering the flexibility and scalability that modern deep-learning models demand. It's time to unlock the full potential of this transformer and witness its impressive capabilities.
In this article, we aim to introduce Swin Transformers, a powerful class of hierarchical vision transformers. By the end of this article, you should understand what Swin Transformers are, how they differ from earlier vision transformers, and how to use them for image classification and masked image modeling.
In a 2021 paper titled “Swin Transformer: Hierarchical Vision Transformer using Shifted Windows,” Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, and their co-authors introduced Swin Transformers. Unlike traditional vision transformers, which apply global self-attention across all image patches, Swin Transformers compute self-attention within non-overlapping windows that shift between successive layers, allowing efficient and scalable computation.
Shifted windows are central to the Swin Transformer. Together with its hierarchical design, they resolve the quadratic complexity that vanilla transformers (encoder and decoder) suffer from on high-resolution images, since global self-attention scales with the square of the number of tokens. The same design lets Swin Transformers adapt easily to different image sizes, making them suitable for both small and large datasets.
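To make the complexity difference concrete, here is a rough back-of-the-envelope comparison of token-pair counts (a sketch only; projection costs and constants are ignored):

# Rough cost comparison: global self-attention vs. Swin's window attention
h = w = 56          # feature map size after 4x4 patch embedding of a 224x224 image
M = 7               # Swin's window size
tokens = h * w      # 3,136 tokens

global_pairs = tokens ** 2           # every token attends to every other token
window_pairs = (M * M) * tokens      # each token attends only within its 7x7 window

print(f"global attention pairs: {global_pairs:,}")    # 9,834,496
print(f"window attention pairs: {window_pairs:,}")    # 153,664
print(f"reduction: {global_pairs // window_pairs}x")  # 64x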
The first thing to note is that the Swin Transformer processes images as patches. Secondly, the Swin Transformer is a variation of the original Vision Transformer (ViT): it introduces hierarchical partitioning of the image into patches and then merges them as the network goes deeper. This helps capture both local and global features effectively.
The Swin Transformer’s approach of gradually merging patches as the network depth increases helps the model maintain a balance between local and global information, which is crucial for understanding images effectively. On top of this, the Swin Transformer introduces further optimizations, namely the window-based self-attention and shifted windows we saw above, to reduce computation; all of these contribute to its improved performance on image tasks. The sketch below illustrates the two core tensor operations.
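Both operations are easy to sketch in plain PyTorch. The reshape below partitions a feature map into non-overlapping 7×7 windows (attention is then computed independently per window), and the patch-merging step concatenates each 2×2 neighborhood, halving the resolution while growing the channel dimension. This is a minimal illustration of the idea, not the library's internal implementation:

import torch

B, H, W, C, M = 1, 56, 56, 96, 7   # batch, height, width, channels, window size

x = torch.randn(B, H, W, C)

# Window partition: (B, H, W, C) -> (num_windows*B, M*M, C)
windows = (
    x.view(B, H // M, M, W // M, M, C)
     .permute(0, 1, 3, 2, 4, 5)
     .reshape(-1, M * M, C)
)
print(windows.shape)  # torch.Size([64, 49, 96]) -- 64 windows of 49 tokens each

# Patch merging: concatenate each 2x2 neighborhood -> half resolution, 4x channels
# (a linear layer then typically reduces 4C back to 2C)
x0 = x[:, 0::2, 0::2, :]
x1 = x[:, 1::2, 0::2, :]
x2 = x[:, 0::2, 1::2, :]
x3 = x[:, 1::2, 1::2, :]
merged = torch.cat([x0, x1, x2, x3], dim=-1)
print(merged.shape)   # torch.Size([1, 28, 28, 384])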
Image classification is the task of identifying which class an image belongs to. Swin Transformers have demonstrated impressive performance on image classification tasks: by modeling long-range dependencies effectively, they excel at capturing intricate patterns and spatial relationships within images. In the Transformers library, this is a Swin model with an image classification head on top.
Swin Classification Demo
Let us see a use case of Swin for image classification. First things first: we install and import our libraries and load the image:
!pip install transformers torch datasets
Find the entire code on GitHub.
Load image
# Import necessary libraries
from transformers import AutoImageProcessor, SwinForImageClassification
import torch

# Accessing images from the web
import urllib.parse as parse
import os
from PIL import Image
import requests

# Verify that a string is a valid URL
def check_url(string):
    try:
        result = parse.urlparse(string)
        return all([result.scheme, result.netloc, result.path])
    except Exception:
        return False

# Load an image from a URL or a local path
def load_image(image_path):
    if check_url(image_path):
        return Image.open(requests.get(image_path, stream=True).raw)
    elif os.path.exists(image_path):
        return Image.open(image_path)

# Display the image
url = "https://img.freepik.com/free-photo/male-female-lions-laying-sand-resting_181624-2237.jpg?w=740&t=st=1690535667~exp=1690536267~hmac=0f5fb82df83f987848335b8bc5c36a1ee534f40301d2b7c095a2e5a62ff153fd"
image = load_image(url)
image
Loading AutoImageProcessor and Swin
# Load the pre-trained image processor (AutoImageProcessor)
# The "microsoft/swin-tiny-patch4-window7-224" is the model checkpoint used for processing images
image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
# Load the pre-trained Swin Transformer model for image classification
model = SwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
# Prepare the input for the model using the image processor
# The image is preprocessed and converted to PyTorch tensors
inputs = image_processor(image, return_tensors="pt")
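As a quick sanity check, you can inspect what the processor produced; for this checkpoint the image is resized and normalized into a single 224×224 RGB tensor:

# The processor returns a batched, normalized image tensor
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])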
Now we perform inference and predict the label:
# Perform inference using the Swin Transformer model
# The logits are the raw output from the model before applying softmax
with torch.no_grad():
logits = model(**inputs).logits
# Predict the label for the image by selecting the class with the highest logit value
predicted_label = logits.argmax(-1).item()
# Retrieve and print the predicted label using the model's id2label mapping
print(model.config.id2label[predicted_label])
Predicted Class
lion, king of beasts, Panthera leo
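To go beyond the single top label, you can convert the logits into probabilities and inspect the model's top few predictions; this is a small optional extension of the demo above:

# Convert logits to probabilities and list the top-5 predicted classes
probs = logits.softmax(dim=-1)[0]
top5 = probs.topk(5)
for score, idx in zip(top5.values, top5.indices):
    print(f"{model.config.id2label[idx.item()]}: {score.item():.4f}")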
Masked image modeling (MIM) involves randomly masking patches of an input image and reconstructing them through a pre-text task. The Transformers library provides this as a Swin model with a decoder on top for masked image modeling. MIM is a rising self-supervised pre-training method for vision and has been successful across numerous downstream vision tasks with vision transformers (ViTs).
Masked Image Modeling Demo
We will reuse the imports above, adding SwinForMaskedImageModeling for this task. Find the entire code on GitHub. Now let’s load a new image.
# Load an image from the given URL
url = "https://img.freepik.com/free-photo/outdoor-shot-active-dark-skinned-man-running-morning-has-regular-trainings-dressed-tracksuit-comfortable-sneakers-concentrated-into-distance-sees-finish-far-away_273609-29401.jpg?w=740&t=st=1690539217~exp=1690539817~hmac=ec8516968123988e70613a3fe17bca8c558b0e588f89deebec0fc9df99120fd4"
image = Image.open(requests.get(url, stream=True).raw)
image
Loading AutoImageProcessor and the Masked Image Model
# Import the masked image modeling head (in addition to the earlier imports)
from transformers import SwinForMaskedImageModeling

# Load the pre-trained image processor (AutoImageProcessor)
# "microsoft/swin-base-simmim-window6-192" is the model checkpoint used for processing images
image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-base-simmim-window6-192")

# Load the pre-trained Swin Transformer model for Masked Image Modeling
model = SwinForMaskedImageModeling.from_pretrained("microsoft/swin-base-simmim-window6-192")
# Calculate the number of patches based on the image and patch size
num_patches = (model.config.image_size // model.config.patch_size) ** 2
# Convert the image to pixel values and prepare inputs for the model
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
# Create a random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
# Perform masked image modeling on the Swin Transformer model
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
# Retrieve the loss and the reconstructed pixel values from the model's outputs
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
# Print the shape of the reconstructed pixel values
print(list(reconstructed_pixel_values.shape))
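The printed shape matches the input resolution. To actually view the reconstruction rather than just its shape, we can undo the processor's normalization and convert the tensor back to a PIL image. This is a minimal sketch; it assumes the processor exposes image_mean and image_std, which Hugging Face image processors typically do:

import numpy as np

# Undo the normalization (x * std + mean), then scale to 8-bit pixel values
mean = torch.tensor(image_processor.image_mean).view(3, 1, 1)
std = torch.tensor(image_processor.image_std).view(3, 1, 1)
recon = (reconstructed_pixel_values[0].detach() * std + mean).clamp(0, 1)

# (C, H, W) float tensor -> (H, W, C) uint8 array -> PIL image
recon_image = Image.fromarray((recon.permute(1, 2, 0).numpy() * 255).astype(np.uint8))
recon_image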
Above we recover the reconstructed pixel values from the model's output. Lastly, let us highlight some other applications.
Other applications include object detection and instance segmentation. Object detection identifies and localizes particular objects within an image, while in instance segmentation, Swin Transformers detect and segment each individual object instance.
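As one concrete illustration, the Hugging Face hub hosts segmentation models built on a Swin backbone, such as MaskFormer. The sketch below is a hedged example that assumes the facebook/maskformer-swin-base-ade checkpoint is available, and it reuses the image loaded earlier:

from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation

# MaskFormer with a Swin backbone, pre-trained for semantic segmentation on ADE20k
processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
seg_model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

seg_inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    seg_outputs = seg_model(**seg_inputs)

# Post-process into a (height, width) map of predicted class ids
seg_map = processor.post_process_semantic_segmentation(
    seg_outputs, target_sizes=[image.size[::-1]]
)[0]
print(seg_map.shape)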
We have seen how the Swin Transformer has emerged as a groundbreaking advancement in the field of computer vision, offering a flexible, scalable, and efficient solution for a wide range of visual recognition tasks. With its hierarchical design and ability to handle images of varying sizes, the Swin Transformer continues to pave the way for new breakthroughs in deep learning and computer vision applications. As the field of vision transformers progresses, Swin Transformers are likely to remain at the forefront of cutting-edge research and practical implementations. I hope this article has helped introduce you to the concept.
Q. What makes Swin Transformers different from traditional vision transformers?
A. Swin Transformers stand out due to their hierarchical design, in which images are divided into non-overlapping shifted windows. This design enables efficient, scalable computation, addressing the quadratic complexity faced by vanilla transformers.
Q. What tasks can Swin Transformers be used for?
A. Swin Transformers are versatile and can be utilized as backbones in various computer vision tasks, including image classification, object detection, and instance segmentation, among others.
Q. Can Swin Transformers be fine-tuned for specific tasks?
A. Swin Transformers are amenable to fine-tuning on specific tasks, allowing researchers and developers to adapt them to their unique datasets and vision problems.
Q. Why do Swin Transformers excel at image classification?
A. Swin Transformers excel in image classification due to their ability to capture long-range dependencies and intricate spatial relationships in images, leading to improved recognition accuracy.
Q. How suitable are Swin Transformers for object detection?
A. Swin Transformers have shown promise in object detection tasks, especially in complex scenes, where their hierarchical design and scalability prove advantageous in detecting objects with varying sizes and orientations.