Salesforce BLIP: Revolutionizing Image Captioning

Maigari David Last Updated : 30 Mar, 2024
7 min read

Introduction

Image captioning is one of the most exciting innovations at the intersection of artificial intelligence and computer vision, and Salesforce's BLIP model is a great leap forward. Bootstrapping Language-Image Pre-training (BLIP) is a vision-language model that generates descriptive captions from images with a high level of accuracy and efficiency, and its working process is easy to interpret.

Learning Objectives

  • Gain an insight into Salesforce's BLIP image captioning model. 
  • Study the decoding strategies and text prompts used with this tool. 
  • Gain insight into the key features and functionalities of BLIP image captioning. 
  • Learn about real-life applications of this model and how to run inference. 

This article was published as a part of the Data Science Blogathon.

Understanding the BLIP Image Captioning

The BLIP image captioning model uses deep learning to interpret an image and produce a descriptive caption. By combining natural language processing and computer vision, it generates text from images effortlessly and with high accuracy. 

You can explore this model through several key features. A short text prompt lets you steer the caption toward the most descriptive part of an image, and you can try these prompts by uploading an image to the Salesforce BLIP captioning demo on Hugging Face. Their functionalities are also great and effective. 

With this model, you can ask questions about the details of an uploaded picture, such as its colors or shapes. It also supports beam search and nucleus sampling as decoding strategies for producing descriptive captions, as shown in the sketch below. 
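To make the two decoding strategies concrete, here is a minimal sketch using the same Hugging Face transformers setup shown later in this article. The example image URL, the number of beams, and the top_p value are illustrative choices, not settings prescribed by the model.

import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the pre-trained processor and captioning model
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

# Any RGB image works here; this COCO URL is only an example
img_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
inputs = processor(raw_image, return_tensors="pt")

# Beam search: keep the 5 most likely caption candidates at each decoding step
beam_out = model.generate(**inputs, num_beams=5, max_new_tokens=30)
print(processor.decode(beam_out[0], skip_special_tokens=True))

# Nucleus (top-p) sampling: sample from the smallest set of tokens whose
# cumulative probability exceeds 0.9, which gives more varied captions
nucleus_out = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=30)
print(processor.decode(nucleus_out[0], skip_special_tokens=True))

Beam search tends to produce safer, more literal captions, while nucleus sampling trades some precision for variety; which one fits better depends on the application.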

Key Features and Functionalities of BLIP Image Captioning

This model recognizes objects with great accuracy and precision and processes images in real time when generating captions. There are several features to explore with this tool, but three main ones define its capability. We'll briefly discuss them here: 

BLIP’s Contextual Understanding

The context of an image is the game-changing detail that helps in the interpretation and captioning. For example, a picture of a cat and a mouse would not have a clear context if no relationship existed between them. Salesforce BLIP can understand the relationship between objects and use spatial arrangements to generate captions. This key functionality can help create a human-like caption, not just a generic one. 

So, your image gets a caption with clear context, such as "a cat chasing a mouse under the table." This conveys far more context than a caption that simply reads "a cat and a mouse."

Supports Multiple Languages

Salesforce's quest to cater to a global audience encouraged the implementation of multiple language support for this model. So, using this model as a marketing tool can benefit international brands and businesses. 

Real-time Processing 

The fact that BLIP allows for real-time processing of images makes it a great asset. Marketers can benefit from this when using BLIP image captioning as a marketing tool, for example in live event coverage, chat support, and social media engagement. 

Model Architecture of BLIP Image Captioning

BLIP Image Captioning employs a Vision-Language Pre-training (VLP) framework, integrating understanding and generation tasks. It effectively leverages noisy web data through a bootstrapping mechanism, where a captioner generates synthetic captions filtered by a noise removal process. 

This approach achieves state-of-the-art results in various vision-language tasks like image-text retrieval, image captioning, and Visual Question Answering (VQA). BLIP’s architecture enables flexible transferability between vision-language understanding and generation tasks. 

Notably, it demonstrates strong generalization ability in zero-shot transfer to video-language tasks. The captioning checkpoint is trained on the COCO dataset, which contains over 120,000 images with human-annotated captions. BLIP's innovative design and utilization of web data set it apart as a pioneering solution in unified vision-language understanding and generation.

BLIP uses a Vision Transformer (ViT) as its image encoder. The ViT divides the input image into patches and encodes them, with an additional token representing the global image feature. Compared with using a pre-trained object detector, this approach is less computationally expensive and makes the model easier to work with. The short sketch below illustrates this step. 
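Here is a minimal sketch that runs only the ViT encoder and inspects its output. It assumes the transformers implementation, where the captioning model exposes its image encoder as vision_model; the exact tensor sizes depend on the checkpoint's image and patch size.

import requests
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

img_url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# The processor resizes the image and converts it into a pixel tensor
pixel_values = processor(images=raw_image, return_tensors="pt").pixel_values

# Run only the ViT encoder: one embedding per image patch,
# plus an extra token that represents the global image feature
with torch.no_grad():
    vision_outputs = model.vision_model(pixel_values=pixel_values)
print(vision_outputs.last_hidden_state.shape)  # (batch, num_patches + 1, hidden_size)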

This model uses a unique pre-training method to handle both understanding and generation tasks. BLIP adopts a multimodal mixture of encoder-decoder with three components: a unimodal text encoder, an image-grounded text encoder, and an image-grounded text decoder. 

  1. Text Encoder: This unimodal encoder is trained with an Image-Text Contrastive (ITC) loss, which aligns an image and its text as a pair so that they have similar representations. This helps the unimodal encoders better capture the shared semantic meaning of images and texts.
  2. Image-grounded Text Encoder: This encoder is trained with an Image-Text Matching (ITM) loss to learn fine-grained alignment between vision and language. It acts as a filter that separates matched (positive) image-text pairs from unmatched (negative) ones. 
  3. Image-grounded Text Decoder: The decoder is trained with a Language Modeling (LM) loss, which teaches it to generate captions and descriptions for an image word by word.  

Here is a graphical representation of how this works: 

[Figure: BLIP architecture diagram (Sources: Medium, Hugging Face)]
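To see the matching objective in action, here is a minimal sketch that scores how well a candidate caption matches an image. It assumes the separate image-text retrieval checkpoint Salesforce/blip-itm-base-coco (a different checkpoint from the captioning model used elsewhere in this article) and the transformers BlipForImageTextRetrieval class.

import requests
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval

processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

img_url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# Pair the image with a candidate caption and run the image-grounded text encoder
inputs = processor(raw_image, "two cats sleeping on a couch", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The ITM head outputs two logits, assumed here to be ordered [no match, match]
match_prob = torch.softmax(outputs.itm_score, dim=1)[0, 1]
print(f"match probability: {match_prob:.3f}")

A high probability means the image-grounded text encoder considers the caption a positive pair for the image, which is exactly the filtering role described above.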

Running this Model (GPU and CPU)

This model runs smoothly on several runtimes. Since development environments vary, we run inference on both GPU and CPU to see how the model generates image captions. 

Let's look into running the Salesforce BLIP image captioning model on a GPU (in full precision).

Import the Required Modules

The first import, requests, allows making HTTP requests in Python. The PIL import brings in the Image module from the Pillow library, allowing us to open, modify, and save images in different formats. 

The next step is loading the processor and model from the Salesforce/blip-image-captioning-large checkpoint. Initializing the processor loads the pre-trained image preprocessing configuration and tokenizer associated with this model.

import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the pre-trained processor (image preprocessing + tokenizer) and the captioning model,
# then move the model to the GPU for full-precision inference
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")

Image Download/upload

The variable 'img_url' points to the image to be downloaded. PIL's Image.open function, combined with requests, downloads the raw image from the URL and converts it to RGB. 

img_url = 'https://www.shutterstock.com/image-photo/young-happy-schoolboy-using-computer-600nw-1075168769.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

When you enter a new code block and type 'raw_image', you can view the downloaded image, as shown below:

[Image: a young, happy schoolboy using a computer (Source: Shutterstock)]

Image Captioning Part 1

This model captions images in two ways: conditional and unconditional. For conditional captioning, the processor takes the raw image together with a text prompt that guides the caption, and the model's generate function produces the caption from the processed inputs. 

On the other hand, unconditional image captioning produces a caption from the image alone, without any text prompt.

# conditional image captioning: the text prompt guides the caption
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning: the image alone is the input
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

Let's look into running the BLIP image captioning model on a GPU (in half precision).

Importing the Necessary Libraries and Loading the Model and Processor Configuration 

This step imports the necessary libraries, including requests. We then load the BLIP captioning model in half precision (torch.float16) on the GPU, along with the processor that holds the pre-trained preprocessing configuration and tokenizer.

import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration


processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
# Load the model weights in float16 (half precision) and move them to the GPU
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")

Image URL

Once you have the image URL, PIL handles the rest; opening the downloaded picture is straightforward.

img_url = 'https://www.shutterstock.com/image-photo/young-happy-schoolboy-using-computer-600nw-1075168769.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

Image Captioning Part 2

Here again, we use both the conditional and unconditional captioning methods. You can write a prompt more specific than "a photography of" to draw out other details of the image, but for this example we just want a caption:

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))


# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

Let's look into running the BLIP image captioning model on a CPU runtime.

Importing Libraries

import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

Loading the pre-trained Configuration 

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

Image Input

img_url = 'https://www.shutterstock.com/image-photo/young-happy-schoolboy-using-computer-600nw-1075168769.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

Image Captioning

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")


out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))


# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")


out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

Application of BLIP Image Captioning

The BLIP Image captioning model’s ability to generate captions from images provides great value to many industries, especially digital marketing. Let’s explore a few real-life applications of the BLIP image captioning model. 

  • Social Media Marketing: This tool can help social media marketers generate captions and alt text for images, improving accessibility, search engine optimization (SEO), and engagement (see the batch-captioning sketch after this list). 
  • Customer Support: Images shared by users can be described automatically, helping support teams understand issues and respond to customers faster. 
  • Content Creation: With AI being widely used to generate content, bloggers and other creators will find this model an effective tool for producing image captions while saving time. 
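As a minimal sketch of the social media use case, the snippet below captions a folder of local images to produce alt text. The folder path and file names are placeholders, and writing the results to a CSV file is just one illustrative way to store them.

import csv
from pathlib import Path

from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

image_dir = Path("campaign_images")  # placeholder folder of .jpg files

rows = []
for image_path in sorted(image_dir.glob("*.jpg")):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    caption = processor.decode(out[0], skip_special_tokens=True)
    rows.append({"file": image_path.name, "alt_text": caption})

# Save the generated alt text alongside the file names
with open("alt_text.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["file", "alt_text"])
    writer.writeheader()
    writer.writerows(rows)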

Conclusion

Image captioning has become a valuable development in AI today, and BLIP contributes to it in many ways. Leveraging advanced natural language processing and computer vision techniques, this setup equips developers with powerful tools for generating accurate captions from images.

Key Takeaways

Here are some notable points from the BLIP image captioning model: 

  • Accurate Image Interpretation: BLIP recognizes objects precisely and turns them into descriptive, human-like captions.
  • Image Context Understanding: The model uses the relationships and spatial arrangement between objects to give captions clear context.
  • Real-life Applications: Social media marketing, customer support, and content creation can all benefit from this model.

Frequently Asked Questions

Q1. How does BLIP Image Captioning differ from traditional image captioning models?

Ans. The BLIP image captioning model is not only accurate at detecting objects; its understanding of spatial arrangement also gives it a contextual edge when generating captions. 

Q2. What are the key features of BLIP Image Captioning? 

Ans. This model caters to a global audience as it supports multiple languages. BLIP image captioning is also unique because it can process images and produce captions in real time. 

Q3. How does this model handle conditional and unconditional captioning?

Ans. For conditional image captioning, BLIP provides captions to images using text prompts. On the other hand, this model can carry out unconditional captioning based on the image alone. 

Q4. What is the model architecture behind BLIP Image Captioning?

Ans. BLIP employs a Vision-Language Pre-training (VLP) framework, utilizing a bootstrapping mechanism to leverage noisy web data effectively. It achieves state-of-the-art results across various vision-language tasks.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Hey there! I'm David Maigari, a dynamic professional with a passion for technical writing, web development, and the AI world. David is also an enthusiast of data science and AI innovations.
