Building a Medical Prescription Scanner Using PaliGemma 2 Mix

Nibedita Dutta Last Updated : 06 Mar, 2025

In today’s fast-paced business world, leveraging cutting-edge technology like generative AI can significantly elevate business operations. Vision-language models such as PaliGemma 2 Mix offer businesses a powerful way to bridge the gap between visual and textual data. By combining the advanced SigLIP vision model with the Gemma 2 language model, PaliGemma 2 Mix excels at tasks like image captioning, visual question answering, OCR, object detection, and segmentation, all with exceptional accuracy. What sets PaliGemma 2 Mix apart is its plug-and-play capability: unlike previous models that required extensive fine-tuning, it is ready for immediate application across various tasks. Available in multiple configurations (3B, 10B, and 28B parameters) and resolutions (224×224 and 448×448), it gives us the flexibility to align computational power with specific business needs.

Learning Objectives

  • Understand the architecture and key components of the PaliGemma 2 Mix model.
  • Explore the differences between PaliGemma 2 and SigLIP in vision-language processing.
  • Learn about the training datasets that power PaliGemma 2 Mix for multimodal tasks.
  • Discover the capabilities of PaliGemma 2 Mix in tasks like OCR, object detection, and image captioning.
  • Build a medical prescription scanner using PaliGemma 2 Mix in a hands-on Python tutorial.

This article was published as a part of the Data Science Blogathon.

Understanding PaliGemma 2 and Its Architecture

PaliGemma 2, released by Google in December 2024, is an iteration of the PaliGemma vision-language model. It connects the powerful SigLIP image encoder with the Gemma 2 language model.

Key Components of PaliGemma 2

Let us understand the key components of PaliGemma 2:

  • Image Encoder From SigLIP: PaliGemma 2 processes images with the SigLIP image encoder, which is pretrained on image-text pairs via SigLIP’s contrastive learning procedure (which trains both a text and an image encoder). The text encoder is discarded when the image encoder is integrated into PaliGemma 2.
  • Mapping Image Embeddings: The output embeddings from the visual encoder are mapped to the Gemma 2 input space using a linear projection.  
  • Merge Image Embeddings with Text Embeddings: The system combines the projected visual embeddings with the embedded text prompt and feeds them into the Gemma 2 language model, which then generates predictions autoregressively (a minimal sketch of this flow follows the list).
  • Fine-Tuning on Multimodal Tasks: In subsequent training stages, researchers train the model on various multimodal tasks, including captioning, visual question answering, and OCR, at different resolutions (224px², 448px², and 896px²).
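
To make this flow concrete, here is a minimal, schematic sketch of how the pieces connect. This is not Google’s actual implementation; the class, argument names, and dimensions are hypothetical, and the encoder and decoder are assumed to be drop-in modules.

import torch
import torch.nn as nn

class PaliGemma2Sketch(nn.Module):
    """Schematic of the PaliGemma 2 flow: SigLIP -> projection -> Gemma 2."""
    def __init__(self, vision_encoder: nn.Module, language_model: nn.Module,
                 vision_dim: int, text_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder              # SigLIP image encoder
        self.projector = nn.Linear(vision_dim, text_dim)  # map into the Gemma 2 input space
        self.language_model = language_model              # Gemma 2 decoder

    def forward(self, pixel_values: torch.Tensor, text_embeddings: torch.Tensor):
        visual_tokens = self.vision_encoder(pixel_values)  # (batch, n_tokens, vision_dim)
        projected = self.projector(visual_tokens)          # (batch, n_tokens, text_dim)
        # Prepend the projected image tokens to the embedded text prompt;
        # Gemma 2 then predicts the answer autoregressively over the sequence.
        inputs = torch.cat([projected, text_embeddings], dim=1)
        return self.language_model(inputs_embeds=inputs)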

How is PaliGemma 2 Different from SigLIP?

SigLIP is a vision encoder that processes visual data, such as images, by breaking them down into analyzable features. It extracts visual tokens from images and uses them for tasks like image classification, object detection, and OCR. SigLIP has evolved into SigLIP 2, which offers improved performance and new variants for dynamic resolution.

PaliGemma 2 is a vision-language model (VLM) that integrates the SigLIP vision encoder with the Gemma 2 language model. It combines visual and textual data to perform tasks such as image captioning, visual question answering, and OCR, leveraging both the SigLIP encoder for visual analysis and the Gemma 2 model for text understanding.

Training Data For PaliGemma 2

PaliGemma 2 has been trained on a wide range of datasets to support its diverse capabilities. These include WebLI, a multilingual image-text dataset for tasks like visual semantics and object localization; CC3M-35L, which features image-alt-text pairs in multiple languages; and VQ²A-CC3M-35L, a subset with question-answer pairs related to images. Additionally, it uses OpenImages for detection tasks and object-aware Q&A pairs, and WIT, a dataset derived from Wikipedia with images and corresponding text. Together, these datasets equip PaliGemma 2 for tasks such as image understanding and multilingual text interpretation.

PaliGemma 2 Mix and Its Key Differentiating Features 

Let us now explore PaliGemma 2 Mix below:

Fine-Tuned PaliGemma 2

While both models, PaliGemma 2 and PaliGemma 2 Mix, share a similar architecture, PaliGemma 2 Mix optimizes performance for immediate use across multiple tasks without requiring fine-tuning. This makes it more convenient for developers to quickly integrate vision-language capabilities into their applications.

PaliGemma 2 Mix is available in several variants, each differing in model size and input resolution. These variations allow users to choose the best model for their specific needs based on computational resources and task complexity; the snippet after the lists below shows how the variants map to Hugging Face checkpoint IDs.

Model Sizes:

  • 3B Parameters: Compact and resource-efficient, ideal for constrained environments.
  • 10B Parameters: A balanced option for mid-tier computational setups.
  • 28B Parameters: Designed for high-performance tasks with no latency constraints.

Resolutions:

  • 224×224: Suitable for tasks requiring less detailed visual analysis.
  • 448×448: Offers higher resolution for tasks needing more precise image processing.
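
On Hugging Face, these variants map to checkpoint IDs that combine parameter count and resolution (for example, the google/paligemma2-10b-mix-448 model used later in this tutorial). A small sketch, assuming the naming pattern from the official release, enumerates them:

# Checkpoint IDs pair a model size with an input resolution.
MIX_CHECKPOINTS = [
    f"google/paligemma2-{size}-mix-{res}"
    for size in ("3b", "10b", "28b")
    for res in (224, 448)
]
print(MIX_CHECKPOINTS)  # six variants, e.g. 'google/paligemma2-3b-mix-224'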

Range of Tasks With PaliGemma 2 Mix

The PaliGemma 2 Mix models are capable of handling a wide range of tasks. These tasks can be grouped into the following categories based on their subtasks (a prompt-prefix sketch follows the list):

  • Vision-language tasks: Answering questions about images, referencing visual content
  • Document comprehension: Answering questions about infographics, charts, and understanding diagrams
  • Text extraction from images: Detecting text, captioning images with embedded text, answering questions related to images containing text
  • Localization tasks: Detecting objects, performing image segmentation
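
Mix checkpoints are steered by short task prefixes placed at the start of the prompt (the tutorial below uses the “Answer en …” form). The mapping below is a sketch based on the prefixes documented for PaliGemma mix models; <question> and <object> are placeholders to fill in per query.

# Common task prefixes for PaliGemma mix checkpoints (per the model card);
# fill in the <...> placeholders for each query.
TASK_PROMPTS = {
    "captioning":   "caption en",
    "vqa":          "answer en <question>",
    "ocr":          "ocr",
    "detection":    "detect <object>",
    "segmentation": "segment <object>",
}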

Building a Medical Prescription Scanner using PaliGemma 2 Mix

In the following tutorial, we will create a query system to extract information from medical prescriptions using the PaliGemma 2 Mix model, and see how it performs when extracting information from different scanned doctors’ prescriptions. The code below runs on Google Colab with a T4 GPU (free tier). The whole code is given in this Colab Notebook.

Step 1: Install Necessary Libraries

Let us install the necessary libraries first.

!pip install -U bitsandbytes transformers -q

This command installs or updates two Python libraries, bitsandbytes and transformers. bitsandbytes optimizes memory usage for machine learning models, especially for quantization tasks. The transformers library will be used to fetch the models from Hugging Face.

Step 2: Import Necessary Libraries

The next step is to import all required libraries:

import torch
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor, BitsAndBytesConfig
from PIL import Image
import requests
from io import BytesIO

We import all the libraries needed to run the next blocks of code here.

Step 3: Setting the Hugging Face API Token

Since this model is in a gated repo on Hugging Face, we need to create a fine-grained access token on Hugging Face and enable “Read access to contents of all public gated repos you can access”.

import os
os.environ["HF_TOKEN"] = ""  # paste your fine-grained Hugging Face token here

We can define this API token in the above code before running the next steps.
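
Alternatively, instead of hard-coding the token, you can log in interactively with the huggingface_hub library (installed alongside transformers); login() prompts for the token and caches it for subsequent downloads.

from huggingface_hub import login
login()  # prompts for your token and stores it for later model downloads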

Step 4: Loading the Model

We load the model google/paligemma2-10b-mix-448 here, which was fine-tuned on a mixture of academic tasks using 448×448 input images.

model_id = "google/paligemma2-10b-mix-448" 
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,  # Change to load_in_4bit=True for even lower memory usage
    llm_int8_threshold=6.0,
)

# Load model with quantization
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=bnb_config
).eval()
processor = PaliGemmaProcessor.from_pretrained(model_id)

# Compile the forward pass to avoid the "Dynamic control flow is not supported at the moment" error

model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=False)
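
If the 8-bit model still exceeds your GPU memory, the comment in the config above hints at 4-bit loading. Here is a minimal sketch using bitsandbytes’ documented 4-bit options; swap this config into from_pretrained in place of bnb_config:

bnb_config_4bit = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # normalized-float-4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)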

Step 5: Loading the Image

We fetch a sample scanned document from a URL, convert it to RGB format if needed, and display it for processing.

# URL of the image
url = "https://assets.isu.pub/document-structure/230725104448-236aeacced7d7abcdafb3f9f2caf21c3/v1/a61879b5c46195fd5526fe6fe4e15fc8.jpeg"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Load the image from the response content
    img = Image.open(BytesIO(response.content))
    img.show()
else:
    print("Failed to retrieve the image.")


def ensure_rgb(image: Image.Image) -> Image.Image:
    if image.mode != "RGB":
        image = image.convert("RGB")
    return image

We load this scanned document, which is a sample doctor’s prescription. Then we will try to extract the recommended medicines from it using the model.

[Input image: sample scanned prescription; source link in the code above]

Step 6: Querying the Scanned Document

We create a text prompt, process the input image and text, and generate a response using the model to extract prescription details.

prompt = "Answer en Which medicines are recommended in the prescription"
model_inputs = processor(text=prompt, images=ensure_rgb(img), return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)

The above code processes the text prompt together with the input image, feeds them into the model, and generates a response based on the given context. Finally, it decodes the output and prints it as a readable text answer.

Output

[Output screenshot]

As we can see from the output above, the medicine name has been extracted correctly from the document.

Testing on Other Queries

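Each query below repeats the same download-and-generate boilerplate. The snippets are kept in full to mirror the original runs, but a small helper like the following could wrap them (ask_prescription is our own convenience name, not a library function):

def ask_prescription(url: str, prompt: str, max_new_tokens: int = 100) -> str:
    # Download the prescription image and normalize it to RGB
    response = requests.get(url)
    response.raise_for_status()
    img = ensure_rgb(Image.open(BytesIO(response.content)))
    # Encode the prompt with the image, then decode greedily
    model_inputs = processor(text=prompt, images=img, return_tensors="pt").to(torch.bfloat16).to(model.device)
    input_len = model_inputs["input_ids"].shape[-1]
    with torch.inference_mode():
        generation = model.generate(**model_inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return processor.decode(generation[0][input_len:], skip_special_tokens=True)

# Example usage:
# print(ask_prescription(url, "Answer en Which medicines are mentioned in the prescription"))
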
Query 2

# URL of the image
url = "https://ars.els-cdn.com/content/image/1-s2.0-S2468502X21000334-gr6.jpg"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Load the image from the response content
    img = Image.open(BytesIO(response.content))
    img.show()
else:
    print("Failed to retrieve the image.")
    
prompt = "Answer en Which diseases are mentioned in the prescription"
model_inputs = processor(text=prompt, images=ensure_rgb(img), return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)

Input Image

[Input image: scanned prescription; source link in the code above]

Output

[Output screenshot]

The output above shows that the model correctly extracted two diseases, Diabetes and Hypertension, from the document. However, it failed to extract “cholesterol” accurately.

Query 3

# URL of the image
url = "https://www.madeformedical.com/wp-content/uploads/2018/07/vio-4.jpg"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Load the image from the response content
    img = Image.open(BytesIO(response.content))
    img.show()
else:
    print("Failed to retrieve the image.")
    
prompt = "Answer en Which medicines are mentioned in the prescription"
model_inputs = processor(text=prompt, images=ensure_rgb(img), return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)

Input Image

[Input image: scanned prescription; source link in the code above]

Output

[Output screenshot]

The output above shows that the model extracted the medicine name from the document, but it misspelled “Ascorbic Acid” due to the way it was written in the prescription.

Query 4

# URL of the image
url = "https://img.apmcdn.org/7c0de3f557f29ea3ed7c6cc0a469f1a4c6a05e77/uncropped/a9e1ca-20061128-oldprescrip.jpg"

# Send a GET request to the URL
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Load the image from the response content
    img = Image.open(BytesIO(response.content))
    img.show()
else:
    print("Failed to retrieve the image.")
    
prompt = "Answer en Which medicines are mentioned in the prescription"
model_inputs = processor(text=prompt, images=ensure_rgb(img), return_tensors="pt").to(torch.bfloat16).to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)

Input Image

[Input image: scanned prescription; source link in the code above]

Output

[Output screenshot]

The output above shows that the model did not extract the medicine name correctly from the document. The prescription mentions “Aten-D Tablet,” but the unclear handwriting may have prevented the model from detecting it accurately.

Conclusion

In conclusion, PaliGemma 2 Mix offers businesses an advanced and versatile solution for bridging visual and textual data through its seamless integration of the SigLIP vision encoder and the Gemma 2 language model. Its plug-and-play functionality eliminates the need for extensive fine-tuning, making it ideal for immediate deployment across a wide range of tasks, including image captioning, OCR, and object detection. With flexible configurations and resolutions, businesses can tailor applications like our medical prescription scanner to their specific needs, enhancing operational efficiency and enabling powerful multimodal applications.

Key Takeaways

  • PaliGemma 2 is a vision-language model (VLM) that integrates the SigLIP vision encoder with the Gemma 2 language model.
  • The model excels at various tasks like image captioning, OCR, visual question answering, object detection, and segmentation, offering exceptional accuracy with seamless integration.
  • Unlike previous models, PaliGemma 2 Mix doesn’t require fine-tuning, making it ready for immediate application in multiple tasks, saving time and effort for businesses.
  • PaliGemma 2 Mix is available in different model sizes (3B, 10B, and 28B parameters) and resolutions (224×224 and 448×448), allowing businesses to choose the best configuration for their specific needs.
  • The model can handle a wide range of tasks, from vision-language applications to document comprehension and text extraction, making it ideal for diverse industries like healthcare and automation.

Frequently Asked Questions

Q1. What is PaliGemma 2?

A. PaliGemma 2 is an advanced vision-language model that integrates the SigLIP vision encoder with the Gemma 2 language model. It handles tasks like image captioning, visual question answering, OCR, object detection, and segmentation with exceptional accuracy; its Mix variant is ready to use without task-specific fine-tuning.

Q2. How does PaliGemma 2 Mix differ from previous models?

A. Unlike previous models that required extensive fine-tuning, PaliGemma 2 Mix is a plug-and-play solution, ready for immediate use across various tasks. This makes it faster and more convenient for businesses to implement.

Q3. What are the different configurations of PaliGemma 2 Mix?

A. PaliGemma 2 Mix comes in multiple configurations, including model sizes with 3B, 10B, and 28B parameters, and resolutions of 224×224 and 448×448. This allows businesses to choose the best setup based on computational resources and specific task complexity.

Q4. What types of tasks can PaliGemma 2 Mix handle?

A. PaliGemma 2 Mix is capable of handling a wide range of tasks, including vision-language tasks (like answering questions about images), document comprehension, text extraction from images, and localization tasks like object detection and image segmentation.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Nibedita completed her master’s in Chemical Engineering from IIT Kharagpur in 2014 and is currently working as a Senior Data Scientist. In her current capacity, she works on building intelligent ML-based solutions to improve business processes.

