Swin Transformers | Modern Computer Vision Tasks

Mobarak Inuwa Last Updated : 18 Aug, 2023
7 min read

Introduction

The Swin Transformer is a significant innovation in the field of vision transformers. Transformers' exceptional performance has been demonstrated across a wide range of tasks. Among these models, the Swin Transformer stands out as a backbone for computer vision, providing unparalleled flexibility and scalability to meet the demands of modern deep-learning models. It's time to unlock the full potential of this transformer and witness its impressive capabilities.

Learning Objectives

In this article, we aim to introduce Swin Transformers, a powerful class of hierarchical vision transformers. By the end of this article, you should understand:

  • Swin Transformers’ key features
  • Their applications as backbones in computer vision models
  • The benefits of Swin Transformers in various computer vision tasks, such as image classification, object detection, and instance segmentation.

This article was published as a part of the Data Science Blogathon.

Understanding Swin Transformers

In a 2021 paper titled “Swin Transformer: Hierarchical Vision Transformer using Shifted Windows,” Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, and their co-authors introduced Swin Transformers. Unlike traditional vision transformers, which apply global self-attention across all image patches, Swin Transformers compute self-attention within non-overlapping local windows that are shifted between successive layers, allowing efficient and scalable computation.

Source: Liu et al. (2021)

The use of shifted windows is central to Swin Transformers. Combined with the hierarchical design, it resolves the quadratic complexity that vanilla transformers face when dealing with high-resolution images: for an h×w grid of patches with channel dimension C and window size M, global self-attention costs roughly 4hwC² + 2(hw)²C operations, whereas window-based attention costs 4hwC² + 2M²hwC, which grows only linearly with the number of patches (Liu et al., 2021). This design also allows Swin Transformers to adapt easily to different image sizes, making them suitable for both small and large datasets.

Difference Between a Swin Transformer and ViT

Like ViT, the Swin Transformer processes images as patches; it is a variation of the original Vision Transformer (ViT). The key difference is that it partitions the image hierarchically, starting from small patches and merging them as the network goes deeper. This helps to capture both local and global features effectively.

Breakdown of the Process in Detail

  • Patch Creation: Instead of using a single fixed patch size throughout, as in ViT (e.g., 16×16 pixels), the Swin Transformer starts with much smaller patches in its initial layers, typically 4×4 pixels.
  • Color Channels: Each patch corresponds to a small portion of the image and retains the three color channels, commonly represented as red, green, and blue.
  • Patch Feature Dimensionality: With 4×4 patches, a single patch has 4×4×3 = 48 raw feature dimensions, corresponding to the pixel values of the three color channels. (For comparison, a 16×16 ViT patch has 16×16×3 = 768.)
  • Linear Transformation: After forming these patches, their raw pixel values are linearly projected into a higher-dimensional embedding space (e.g., 96 dimensions in Swin-Tiny). This transformation helps the network learn meaningful representations from the pixel values in the patches.
  • Hierarchical Partitioning: As the network goes deeper, these smaller patches are merged into larger ones. This hierarchical partitioning allows the model to capture both local details (from small patches) and global context (from merged patches) effectively.

The Swin Transformer’s approach of gradually merging patches as the network depth increases helps the model maintain a balance between local and global information, which is crucial for understanding images effectively. The Swin Transformer also introduces additional concepts and optimizations, such as the window-based self-attention and shifted windows we saw above, to reduce computation, all of which contribute to its improved performance on image tasks. A minimal sketch of the patch-embedding and patch-merging arithmetic follows.
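To make the patch arithmetic above concrete, here is a minimal sketch in plain PyTorch (not the actual Swin implementation) of the first stage: partitioning a 224×224 RGB image into 4×4 patches, projecting each 48-dimensional patch to an embedding size of 96, and then merging 2×2 groups of neighbouring patches. The tensor names and the embedding size are illustrative.

import torch
import torch.nn as nn

# A dummy batch containing one 224x224 RGB image: (batch, channels, height, width)
image = torch.randn(1, 3, 224, 224)

patch_size = 4   # Swin starts from small 4x4 patches
embed_dim = 96   # illustrative embedding size (Swin-Tiny uses C = 96)

# Partition into non-overlapping 4x4 patches; each patch flattens to 4*4*3 = 48 values
patches = image.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
patches = patches.permute(0, 2, 3, 1, 4, 5).flatten(3)   # (1, 56, 56, 48)

# Linearly project the raw pixel values of each patch into the embedding space
embed = nn.Linear(patch_size * patch_size * 3, embed_dim)
tokens = embed(patches)                                    # (1, 56, 56, 96)

# Simplified patch merging: concatenate each 2x2 group of neighbouring tokens
# and project, halving the spatial resolution and doubling the channel dimension
merged = tokens.reshape(1, 28, 2, 28, 2, embed_dim).permute(0, 1, 3, 2, 4, 5).flatten(3)
reduce = nn.Linear(4 * embed_dim, 2 * embed_dim)
stage2 = reduce(merged)                                     # (1, 28, 28, 192)

print(tokens.shape, stage2.shape)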

Features of Swin Transformers

  • Input Padding: Swin Transformers support any input height and width, provided both are divisible by 32, which makes them flexible. This ensures the model can handle images of varying dimensions, providing more flexibility during the preprocessing step.
  • Output Hidden States: Swin Transformers allow users to access `hidden_states` and `reshaped_hidden_states` when the `output_hidden_states` parameter is set to True during training or inference. The `hidden_states` output has a shape of (batch_size, sequence_length, num_channels), typical of transformers. In contrast, the `reshaped_hidden_states` output has a shape of (batch_size, num_channels, height, width), making it more suitable for downstream computer vision tasks; a short example follows this list.
  • Using AutoImageProcessor API: To prepare images for the Swin Transformer model, developers and researchers can take advantage of the AutoImageProcessor API. This API simplifies the image preprocessing step by handling tasks such as resizing, data augmentation, and normalization, ensuring that the input data is ready for consumption by the Swin Transformer model.
  • Vision Backbone: The Swin Transformer architecture is versatile and serves as a powerful backbone for computer vision models. As a backbone, Swin Transformers excel in tasks like object detection, instance segmentation, and image classification, as we will see below. This adaptability makes them a great choice for designing state-of-the-art vision models.
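As a quick, hedged illustration of the hidden-state outputs described above, the sketch below runs the bare Swin encoder (SwinModel, here with the microsoft/swin-tiny-patch4-window7-224 checkpoint) on a dummy image and prints both output shapes:

import torch
from PIL import Image
from transformers import AutoImageProcessor, SwinModel

# Bare Swin encoder without any task-specific head
processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

# Any RGB image works; a blank one keeps the example self-contained
image = Image.new("RGB", (640, 480))
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Sequence-style hidden states: (batch_size, sequence_length, num_channels)
print(outputs.hidden_states[-1].shape)

# Feature-map-style hidden states: (batch_size, num_channels, height, width),
# convenient when Swin serves as a vision backbone
print(outputs.reshaped_hidden_states[-1].shape)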

Applications of Swin Transformers

1. Swin for Image Classification

Image classification involves identifying the class of an image. Swin Transformers have demonstrated impressive performance on image classification tasks. By leveraging their ability to model long-range dependencies effectively, they excel in capturing intricate patterns and spatial relationships within images. For this task, we use a Swin model with an image classification head on top.

Swin Classification Demo

Let us see Swin in action for image classification. First things first: we install and import our libraries, then load the image.

!pip install transformers torch datasets

Find the entire code on GitHub.

Load image

# Import necessary libraries
from transformers import AutoImageProcessor, SwinForImageClassification
import torch

# Accessing images from the web
import urllib.parse as parse
import os
from PIL import Image
import requests

# Verify url
def check_url(string):
    try:
        result = parse.urlparse(string)
        return all([result.scheme, result.netloc, result.path])
    except:
        return False

# Load an image
def load_image(image_path):
    if check_url(image_path):
        return Image.open(requests.get(image_path, stream=True).raw)
    elif os.path.exists(image_path):
        return Image.open(image_path)
    else:
        # Neither a valid URL nor an existing local file
        raise ValueError(f"Could not load image from {image_path}")

# Display Image
url = "https://img.freepik.com/free-photo/male-female-lions-laying-sand-resting_181624-2237.jpg?w=740&t=st=1690535667~exp=1690536267~hmac=0f5fb82df83f987848335b8bc5c36a1ee534f40301d2b7c095a2e5a62ff153fd"
image = load_image(url)

image
Source: Freepik

Loading AutoImageProcessor and Swin

# Load the pre-trained image processor (AutoImageProcessor)
# The "microsoft/swin-tiny-patch4-window7-224" is the model checkpoint used for processing images
image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

# Load the pre-trained Swin Transformer model for image classification
model = SwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

# Prepare the input for the model using the image processor
# The image is preprocessed and converted to PyTorch tensors
inputs = image_processor(image, return_tensors="pt")

Now we perform inference and predict the label:

# Perform inference using the Swin Transformer model
# The logits are the raw output from the model before applying softmax
with torch.no_grad():
    logits = model(**inputs).logits

# Predict the label for the image by selecting the class with the highest logit value
predicted_label = logits.argmax(-1).item()

# Retrieve and print the predicted label using the model's id2label mapping
print(model.config.id2label[predicted_label])

Prediction Class

lion, king of beasts, Panthera leo
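If you also want class probabilities rather than just the top label, a small follow-up (reusing the logits and model from the demo above) is to apply a softmax and inspect the top predictions:

# Convert the raw logits to probabilities and inspect the top 5 classes
probs = logits.softmax(dim=-1)
top_probs, top_ids = probs[0].topk(5)

for p, idx in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")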

2. Masked Image Modeling (MIM)

The process involves randomly masking patches of an input image and then reconstructing them as a pre-text task. Here we use the Swin model with a decoder on top for masked image modeling, following the SimMIM framework (Xie et al., 2021). MIM is a rising self-supervised pre-training method that has been successful across numerous downstream vision tasks with vision transformers (ViTs).

Masked Image Modeling Demo

We will reuse the imports above, adding SwinForMaskedImageModeling. Find the entire code on GitHub. Now let’s load a new image.

# Load an image from the given URL
url = "https://img.freepik.com/free-photo/outdoor-shot-active-dark-skinned-man-running-morning-has-regular-trainings-dressed-tracksuit-comfortable-sneakers-concentrated-into-distance-sees-finish-far-away_273609-29401.jpg?w=740&t=st=1690539217~exp=1690539817~hmac=ec8516968123988e70613a3fe17bca8c558b0e588f89deebec0fc9df99120fd4"
image = Image.open(requests.get(url, stream=True).raw)
image
Source: Freepik

Loading AutoImageProcessor and the Masked Image Model

# Import the masked image modeling head and load the pre-trained image processor (AutoImageProcessor)
# "microsoft/swin-base-simmim-window6-192" is the model checkpoint used for processing images
from transformers import SwinForMaskedImageModeling

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-base-simmim-window6-192")

# Load the pre-trained Swin Transformer model for Masked Image Modeling
model = SwinForMaskedImageModeling.from_pretrained("microsoft/swin-base-simmim-window6-192")

# Calculate the number of patches based on the image and patch size
num_patches = (model.config.image_size // model.config.patch_size) ** 2

# Convert the image to pixel values and prepare inputs for the model
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

# Create a random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

# Perform masked image modeling on the Swin Transformer model
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)

# Retrieve the loss and the reconstructed pixel values from the model's outputs
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction

# Print the shape of the reconstructed pixel values
print(list(reconstructed_pixel_values.shape))

Above we see the reconstructed pixel values; the printed shape matches that of the input image tensor.
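The reconstructed pixel values are still normalized tensors. A minimal sketch for viewing them as an image, assuming the processor exposes the usual image_mean and image_std attributes, could look like this:

import numpy as np
from PIL import Image

# Undo the processor's per-channel normalization
mean = torch.tensor(image_processor.image_mean).view(1, 3, 1, 1)
std = torch.tensor(image_processor.image_std).view(1, 3, 1, 1)
recon = reconstructed_pixel_values.detach() * std + mean

# Clamp to [0, 1], convert to an HxWxC uint8 array, and display
array = (recon.clamp(0, 1)[0].permute(1, 2, 0).numpy() * 255).astype(np.uint8)
Image.fromarray(array)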

Lastly, let us highlight some other applications. Beyond classification and masked image modeling, Swin Transformers are widely used for object detection and instance segmentation: object detection localizes and classifies the objects in an image with bounding boxes, while instance segmentation additionally detects and segments each individual object with a pixel-level mask.
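As a hedged sketch of how this works in practice, recent versions of the transformers library expose a backbone interface that returns multi-scale feature maps which a detection or segmentation head (such as an FPN) could consume; the checkpoint and stage names below are illustrative:

import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoBackbone

# Load Swin as a multi-scale feature extractor
processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
backbone = AutoBackbone.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224",
    out_features=["stage1", "stage2", "stage3", "stage4"],
)

image = Image.new("RGB", (640, 480))
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = backbone(**inputs)

# One feature map per requested stage, at decreasing spatial resolution
for name, fmap in zip(["stage1", "stage2", "stage3", "stage4"], outputs.feature_maps):
    print(name, fmap.shape)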

Conclusion

We have seen how Swin Transformers have emerged as a groundbreaking advancement in the field of computer vision, offering a flexible, scalable, and efficient solution for a wide range of visual recognition tasks. With their hierarchical design and ability to handle images of varying sizes, Swin Transformers continue to pave the way for new breakthroughs in deep learning and computer vision applications. As the field of vision transformers progresses, Swin Transformers are likely to remain at the forefront of cutting-edge research and practical implementations. I hope this article has helped introduce you to the concept.

Key Takeaways

  • Swin Transformers are hierarchical vision transformers for computer vision tasks, offering scalability and efficiency in processing high-resolution images.
  • Swin Transformers can serve as backbones for various computer vision architectures, excelling in tasks like image classification, object detection, and instance segmentation.
  • The AutoImageProcessor API simplifies image preparation for Swin Transformers, handling resizing, augmentation, and normalization.
  • Their ability to capture long-range dependencies makes Swin Transformers a promising choice for modeling complex visual patterns.

Frequently Asked Questions

Q1. What makes Swin Transformers different from traditional vision models?

A. Swin Transformers stand out due to their hierarchical design, in which self-attention is computed within non-overlapping windows that are shifted between layers. This design enables efficient computation and scalability, avoiding the quadratic cost of global self-attention that vanilla vision transformers face on high-resolution images.

Q2. Can Swin Transformers be used in different computer vision tasks?

A. Swin Transformers are versatile and can be utilized as backbones in various computer vision tasks, including image classification, object detection, and instance segmentation, among others.

Q3. Can I fine-tune Swin Transformers on my specific computer vision task?

A. Swin Transformers are amenable to fine-tuning on specific tasks, allowing researchers and developers to adapt them to their unique datasets and vision problems.

Q4. What advantages do Swin Transformers offer in image classification tasks?

A. Swin Transformers excel in image classification due to their ability to capture long-range dependencies and intricate spatial relationships in images, leading to improved recognition accuracy.

Q5. Are Swin Transformers suitable for object detection in complex scenes?

A. Swin Transformers have shown promise in object detection tasks, especially in complex scenes, where their hierarchical design and scalability prove advantageous in detecting objects with varying sizes and orientations.

References

  • Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., & Guo, B. (2021). Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv:2103.14030.
  • Xie, Z., Zhang, Z., Cao, Y., Lin, Y., Bao, J., Yao, Z., Dai, Q., & Hu, H. (2021). SimMIM: A Simple Framework for Masked Image Modeling. arXiv:2111.09886.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

