In today’s fast-paced business world, leveraging cutting-edge technology like generative AI can significantly elevate business operations. Vision-language models such as PaliGemma 2 Mix offer businesses a powerful way to bridge the gap between visual and textual data. By combining the SigLIP vision encoder with the Gemma 2 language models, PaliGemma 2 Mix excels at tasks like image captioning, visual question answering, OCR, object detection, and segmentation with high accuracy. What sets PaliGemma 2 Mix apart is its plug-and-play capability: unlike earlier models that required extensive fine-tuning, it is ready for immediate application across a wide range of tasks. Available in multiple configurations (3B, 10B, and 28B parameters) and resolutions (224×224 and 448×448), it offers the flexibility to align computational power with specific business needs.
This article was published as a part of the Data Science Blogathon.
PaliGemma 2, released by Google in December 2024, is the second iteration of the PaliGemma vision-language model. It connects the powerful SigLIP image encoder with the Gemma 2 language model.
Let us understand the key components of PaliGemma 2:
SigLIP is a vision encoder that processes visual data, such as images or videos, by breaking them down into analyzable features. It extracts visual tokens from images and uses them for tasks like image classification, object detection, and OCR. SigLIP has evolved into SigLIP 2, which offers improved performance and new variants for dynamic resolution.
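To get an intuition for what the SigLIP encoder does on its own, here is a minimal sketch of zero-shot image classification with a standalone SigLIP checkpoint from Hugging Face. The checkpoint name, local image path, and candidate labels are illustrative assumptions and are not part of PaliGemma 2 itself:

# Minimal sketch: zero-shot classification with a standalone SigLIP checkpoint (illustrative)
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

siglip_id = "google/siglip-base-patch16-224"  # assumed checkpoint name
siglip_processor = AutoProcessor.from_pretrained(siglip_id)
siglip_model = AutoModel.from_pretrained(siglip_id)

image = Image.open("sample_prescription.jpg")  # any local image
candidate_labels = ["a medical prescription", "a cat", "an invoice"]

inputs = siglip_processor(text=candidate_labels, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = siglip_model(**inputs)

# SigLIP scores each image-text pair with a sigmoid rather than a softmax
probs = torch.sigmoid(outputs.logits_per_image)
for label, p in zip(candidate_labels, probs[0]):
    print(f"{label}: {p:.3f}")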
PaliGemma 2 is a vision-language model (VLM) that integrates the SigLIP vision encoder with the Gemma 2 language model. It combines visual and textual data to perform tasks such as image captioning, visual question answering, and OCR, leveraging both the SigLIP encoder for visual analysis and the Gemma 2 model for text understanding.
PaliGemma 2 has been trained on a wide range of datasets to support its diverse capabilities. These include WebLI, a multilingual image-text dataset for tasks like visual semantics and object localization; CC3M-35L, which features image-alt-text pairs in multiple languages; and VQ²A-CC3M-35L, a subset with question-answer pairs related to images. Additionally, it uses OpenImages for detection tasks and object-aware Q&A pairs, and WIT, a dataset derived from Wikipedia with images and corresponding text. Together, these datasets equip PaliGemma 2 for tasks such as image understanding and multilingual text interpretation.
Let us now explore PaliGemma 2 Mix below:
While both models, PaliGemma 2 and PaliGemma 2 Mix, share a similar architecture, PaliGemma 2 Mix optimizes performance for immediate use across multiple tasks without requiring fine-tuning. This makes it more convenient for developers to quickly integrate vision-language capabilities into their applications.
PaliGemma 2 Mix is available in several variants, each differing in model size and input resolution. These variations allow users to choose the best model for their specific needs based on computational resources and task complexity.
Model Sizes: 3B, 10B, and 28B parameters.
Resolutions: 224×224 and 448×448 pixels.
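If the published checkpoints follow the usual Hugging Face naming pattern, choosing a variant is simply a matter of changing the model ID string. The repository names below are assumptions extrapolated from the 10B/448 checkpoint used later in this tutorial:

# Assumed Hugging Face repo naming pattern: google/paligemma2-{size}-mix-{resolution}
PALIGEMMA2_MIX_VARIANTS = [
    "google/paligemma2-3b-mix-224",   # smallest and fastest, lowest memory footprint
    "google/paligemma2-3b-mix-448",
    "google/paligemma2-10b-mix-224",
    "google/paligemma2-10b-mix-448",  # the checkpoint used in this tutorial
    "google/paligemma2-28b-mix-224",
    "google/paligemma2-28b-mix-448",  # largest and most capable, highest memory footprint
]
model_id = PALIGEMMA2_MIX_VARIANTS[3]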
The PaliGemma 2 Mix models are capable of handling a wide range of tasks. Based on their subtasks, these can be grouped into the following categories:
- Vision-language tasks, such as image captioning and answering questions about images
- Document comprehension, such as answering questions about scanned documents
- Text extraction from images (OCR)
- Localization tasks, such as object detection and image segmentation
Example prompt prefixes for these categories are sketched below.
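PaliGemma-style models are typically steered with a short task prefix at the start of the prompt. The prefixes below are the commonly documented ones, and the object names are purely illustrative; treat this as a guide rather than an exhaustive list:

# Commonly documented PaliGemma task prefixes (illustrative, not exhaustive)
EXAMPLE_PROMPTS = {
    "captioning": "caption en",                                     # short caption in English
    "visual question answering": "answer en What is written at the top?",
    "text extraction": "ocr",                                       # transcribe the text in the image
    "object detection": "detect signature ; stamp",                 # returns bounding-box location tokens
    "segmentation": "segment tablet",                               # returns segmentation mask tokens
}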
In the following tutorial, we will create a query system to extract information from medical prescriptions using the PaliGemma 2 Mix model. We will see how it performs at extracting information from different scanned doctor’s prescriptions. We can run the following code on Google Colab with a T4 GPU (free tier). The complete code is given in this Colab Notebook.
Let us first install the necessary libraries.
!pip install -U bitsandbytes transformers -q
The code installs or updates two Python libraries, bitsandbytes and transformers. bitsandbytes optimizes memory usage for machine learning models, especially for quantization. The transformers library will be used to fetch the models from Hugging Face.
The next step is to import all the required libraries:
import torch
import pandas as pd
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor, BitsAndBytesConfig
from PIL import Image
from transformers.image_utils import load_image
import requests
from io import BytesIO
We import all the libraries needed to run the next blocks of code here.
Since this model is in a gated repo on Hugging Face, we need to create a fine-grained access token on Hugging Face and enable the “Read access to contents of all public gated repos you can access” permission.
import os
os.environ["HF_TOKEN"] = ""  # Paste your Hugging Face access token here
We can define this API token in the above code before running the next steps.
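Alternatively, you can authenticate through the huggingface_hub helper rather than setting the environment variable. Here is a minimal sketch, where the token string is a placeholder you must replace with your own fine-grained token:

# Alternative: authenticate via huggingface_hub instead of the HF_TOKEN environment variable
from huggingface_hub import login

login(token="<your-fine-grained-token>")  # or call notebook_login() for an interactive prompt in Colab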
We load the model google/paligemma2-10b-mix-448 here which was fine-tuned on a mixture of academic tasks using 448×448 input images.
model_id = "google/paligemma2-10b-mix-448"

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,  # Change to load_in_4bit=True for even lower memory usage
    llm_int8_threshold=6.0,
)

# Load the model with quantization
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=bnb_config
).eval()

processor = PaliGemmaProcessor.from_pretrained(model_id)

# Compile the forward pass to avoid the error "Dynamic control flow is not supported at the moment"
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=False)
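If the 8-bit model still does not fit into your GPU memory, a 4-bit configuration is a reasonable fallback. This is a hedged sketch of a typical NF4 setup rather than a configuration benchmarked in this article:

# Optional: 4-bit NF4 quantization for an even lower memory footprint (untested sketch)
bnb_config_4bit = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
# model = PaliGemmaForConditionalGeneration.from_pretrained(
#     model_id, quantization_config=bnb_config_4bit
# ).eval()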
We fetch a sample scanned document from a URL, convert it to RGB format if needed, and display it for processing.
# URL of the image
url = "https://assets.isu.pub/document-structure/230725104448-236aeacced7d7abcdafb3f9f2caf21c3/v1/a61879b5c46195fd5526fe6fe4e15fc8.jpeg"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Load the image from the response content
    img = Image.open(BytesIO(response.content))
    img.show()
else:
    print("Failed to retrieve the image.")

def ensure_rgb(image: Image.Image) -> Image.Image:
    if image.mode != "RGB":
        image = image.convert("RGB")
    return image
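Since the same download-and-display steps repeat for every query below, they can optionally be wrapped in a small helper. load_image_from_url is a hypothetical convenience function, not part of any library:

# Hypothetical helper bundling the download, display, and RGB-conversion steps
def load_image_from_url(url: str) -> Image.Image:
    response = requests.get(url)
    response.raise_for_status()  # raise an exception instead of failing silently
    image = Image.open(BytesIO(response.content))
    image.show()
    return ensure_rgb(image)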
We load this scanned document, which is a sample doctor’s prescription. Then we will try to extract the prescribed medicines from this document using the model.
We create a text prompt, process the input image and text, and generate a response using the model to extract prescription details.
prompt = "Answer en Which medicines are recommended in the prescription"
model_inputs = processor(text=prompt, images=ensure_rgb(img), return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
The above code processes the text prompt together with the image, feeds them into the model to generate a response, and finally decodes the output and prints it as a readable text answer.
Output
As we can see from the output above, the medicine name has been extracted correctly from the document.
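Because the same prompt-process-generate-decode pattern repeats for every query below, it can also be wrapped in a small helper. ask_image is a hypothetical function assembled from the exact calls used above:

# Hypothetical wrapper around the prompt -> generate -> decode steps shown above
def ask_image(image: Image.Image, question: str, max_new_tokens: int = 100) -> str:
    prompt = f"Answer en {question}"
    model_inputs = processor(text=prompt, images=ensure_rgb(image), return_tensors="pt").to(torch.bfloat16).to(model.device)
    input_len = model_inputs["input_ids"].shape[-1]
    with torch.inference_mode():
        generation = model.generate(**model_inputs, max_new_tokens=max_new_tokens, do_sample=False)
        generation = generation[0][input_len:]
    return processor.decode(generation, skip_special_tokens=True)

# Example usage:
# print(ask_image(img, "Which medicines are recommended in the prescription"))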
Query 2
# URL of the image
url = "https://ars.els-cdn.com/content/image/1-s2.0-S2468502X21000334-gr6.jpg"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Load the image from the response content
    img = Image.open(BytesIO(response.content))
    img.show()
else:
    print("Failed to retrieve the image.")

prompt = "Answer en Which diseases are mentioned in the prescription"
model_inputs = processor(text=prompt, images=ensure_rgb(img), return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
Input Image
Output
The output above shows that the model correctly extracted two diseases, Diabetes and Hypertension, from the document. However, it failed to extract “cholesterol” accurately.
Query 3
# URL of the image
url = "https://www.madeformedical.com/wp-content/uploads/2018/07/vio-4.jpg"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Load the image from the response content
    img = Image.open(BytesIO(response.content))
    img.show()
else:
    print("Failed to retrieve the image.")

prompt = "Answer en Which medicines are mentioned in the prescription"
model_inputs = processor(text=prompt, images=ensure_rgb(img), return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
Input Image
Output
The output above shows that the model extracted the medicine name from the document, but it misspelled “Ascorbic Acid” due to the way it was written in the prescription.
Query 4
# URL of the image
url = "https://img.apmcdn.org/7c0de3f557f29ea3ed7c6cc0a469f1a4c6a05e77/uncropped/a9e1ca-20061128-oldprescrip.jpg"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Load the image from the response content
    img = Image.open(BytesIO(response.content))
    img.show()
else:
    print("Failed to retrieve the image.")

prompt = "Answer en Which medicines are mentioned in the prescription"
model_inputs = processor(text=prompt, images=ensure_rgb(img), return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
Input Image
Output
The output above shows that the model did not extract the medicine name correctly from the document. The prescription mentions “Aten-D Tablet,” but the unclear handwriting may have prevented the model from detecting it accurately.
In conclusion, a medical prescription scanner built on PaliGemma 2 Mix offers businesses an advanced and versatile way to bridge visual and textual data through its seamless integration of the SigLIP vision encoder and the Gemma 2 language model. Its plug-and-play functionality eliminates the need for extensive fine-tuning, making it suitable for immediate deployment across a wide range of tasks, including image captioning, OCR, and object detection. With flexible configurations and resolutions, businesses can tailor the solution to their specific needs, enhancing operational efficiency and enabling powerful multimodal applications.
A. PaliGemma 2 is an advanced vision-language model that integrates the SigLIP vision encoder with the Gemma 2 language model. It handles tasks like image captioning, visual question answering, OCR, object detection, and segmentation with exceptional accuracy, without requiring fine-tuning.
A. Unlike previous models that required extensive fine-tuning, PaliGemma 2 Mix is a plug-and-play solution, ready for immediate use across various tasks. This makes it faster and more convenient for businesses to implement.
A. PaliGemma 2 Mix comes in multiple configurations, including model sizes with 3B, 10B, and 28B parameters, and resolutions of 224×224 and 448×448. This allows businesses to choose the best setup based on computational resources and specific task complexity.
A. PaliGemma 2 Mix is capable of handling a wide range of tasks, including vision-language tasks (like answering questions about images), document comprehension, text extraction from images, and localization tasks like object detection and image segmentation.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.