Namaste! I am from India, where there are four seasons: winter, summer, monsoon, and autumn. Can you guess which season I hate most? It’s tax season.
This year, as usual, I scrambled to sift through various income tax sections and documents to maximize my savings (legally, of course, 😉). I watched countless videos and waded through documents, some in English, others in Hindi, hoping to find the answers I needed. But, with only two days left to file taxes, I realized I didn’t have time to go through it all. At that time, I wished there was a quick way to get answers, no matter the language!
Though RAG (Retrieval Augmented Generation) could do this, most tutorials and models focus only on English documents, leaving non-English ones largely unsupported. That’s when it hit me — I could build a RAG pipeline tailored for Indian content: a RAG system that could answer questions by skimming through Hindi documents. And that’s how the journey began!
Notebook: If you are more of a notebook person, I have also uploaded the whole code to a Colab notebook. You can check it here. I recommend running it on a T4 GPU environment on Colab.
So let’s begin. Tudum!
The journey began with collecting the data. I started with some news articles and government websites related to income tax in India, written in Hindi. They include FAQs and unstructured text covering tax deduction sections and the required forms. You can check them here:
urls =['https://www.incometax.gov.in/iec/foportal/hi/help/e-filing-itr1-form-sahaj-faq',
'https://www.incometax.gov.in/iec/foportal/hi/help/e-filing-itr4-form-sugam-faq',
'https://navbharattimes.indiatimes.com/business/budget/budget-classroom/income-tax-sections-know-which-section-can-save-how-much-tax-here-is-all-about-income-tax-law-to-understand-budget-speech/articleshow/89141099.cms',
'https://www.incometax.gov.in/iec/foportal/hi/help/individual/return-applicable-1',
'https://www.zeebiz.com/hindi/personal-finance/income-tax/tax-deductions-under-section-80g-income-tax-exemption-limit-how-to-save-tax-on-donation-money-to-charitable-trusts-126529'
]
Preparing the data involves the following steps:
1. Crawling the websites and saving each page as a Markdown file
2. Parsing the Markdown files into sections of headers and content
3. Filtering out empty and boilerplate sections

Let’s look at each of them one by one.
I will be using one of my favorite libraries to crawl websites — Markdown Crawler. You can install it using the commands below. It parses each website into Markdown format and stores the result in Markdown files.
!pip install markdown-crawler
!pip install markdownify
An interesting feature of Markdown Crawler is its ability to not only crawl the main web pages but also explore linked pages within the site, thanks to its depth parameter. This allows for more comprehensive website crawling. In our case we do not need that, so the depth will be zero.
Here is the function to crawl the URLs:
from markdown_crawler import md_crawl

def crawl_urls(urls: list, storage_folder_path: str, max_depth=0):
    # Iterate over each URL in the list
    for url in urls:
        print(f"Crawling {url}")  # Output the URL being crawled
        # Crawl the URL and save the result in the specified folder
        md_crawl(url, max_depth=max_depth, base_dir=storage_folder_path, is_links=True)
crawl_urls(urls=urls, storage_folder_path='./incometax_documents/')
# You do not need to create the folder beforehand; Markdown Crawler handles that for you.
This code will save the parsed Markdown files into the folder incometax_documents.
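You can quickly verify that the crawl worked by listing the folder’s contents (a small optional check):

import os

# Each crawled page should appear as a .md file
print(os.listdir('./incometax_documents'))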
Next, we need to build a parser that reads the Markdown files and divides them into sections. If you’re working with different data that’s already processed, you can skip this step.
First, let’s write functions to extract content from a file. We’ll use the Python libraries markdown and BeautifulSoup for this. Below are the commands to install these libraries:
!pip install beautifulsoup4
!pip install markdown
import markdown
from bs4 import BeautifulSoup

def read_markdown_file(file_path):
    """Read a Markdown file and extract its sections as headers and content."""
    # Open the markdown file and read its content
    with open(file_path, 'r', encoding='utf-8') as file:
        md_content = file.read()

    # Convert markdown to HTML
    html_content = markdown.markdown(md_content)

    # Parse HTML content
    soup = BeautifulSoup(html_content, 'html.parser')

    sections = []
    current_section = None

    # Loop through HTML tags
    for tag in soup:
        # Start a new section if a header tag is found
        if tag.name and tag.name.startswith('h'):
            if current_section:
                sections.append(current_section)
            current_section = {'header': tag.text, 'content': ''}
        # Add content to the current section
        elif current_section:
            current_section['content'] += tag.get_text() + '\n'

    # Add the last section
    if current_section:
        sections.append(current_section)

    return sections
# Let's look at the output of one of the files
sections = read_markdown_file('./incometax_documents/business-budget-budget-classroom-income-tax-sections-know-which-section-can-save-how-much-tax-here-is-all-about-income-tax-law-to-understand-budget-speech-articleshow-89141099-cms.md')
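To see what the parser extracted, here is a quick inspection snippet (the exact headers you see will depend on the crawled page):

# Print the first few sections to inspect the parser's output
for section in sections[:3]:
    print('HEADER:', section['header'])
    print('CONTENT:', section['content'][:200])
    print('-' * 40)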
The content looks cleaner now, but some sections are unnecessary, especially those with empty headers. To fix this, let’s write a function that keeps a section only if both the header and content are non-empty, and the header isn’t a boilerplate one such as ‘main navigation’, ‘navigation’, or ‘footer’.
def pass_section(section):
    # List of headers to ignore, based on experiments
    headers_to_ignore = ['main navigation', 'navigation', 'footer', 'advertisement']
    # Keep the section only if the header is not in the ignore list
    # and both header and content are non-empty
    if section['header'].lower() not in headers_to_ignore and section['header'].strip() and section['content'].strip():
        return True
    return False
import os

# Store every section that passes the filter
passed_sections = []

# Iterate through all Markdown files in the folder
for filename in os.listdir('incometax_documents'):
    if filename.endswith('.md'):
        file_path = os.path.join('incometax_documents', filename)
        # Extract sections from the current Markdown file
        sections = read_markdown_file(file_path)
        # Keep only the sections that pass the filter
        passed_sections.extend([section for section in sections if pass_section(section)])
The content looks organized and clean now, and all the sections are stored in passed_sections.
Note: You may need chunking based on your content, as the token limit for the embedding model is 512 tokens. Since the sections are small in my case, I will skip it, but you can still check the notebook for the chunking code.
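For reference, here is a minimal chunking sketch. The notebook has the actual code; chunk_text, max_words, and overlap are illustrative names, and this splits on words rather than exact tokens:

def chunk_text(text, max_words=300, overlap=50):
    # Split text into overlapping word windows so each chunk stays
    # comfortably under the embedding model's 512-token limit
    words = text.split()
    chunks = []
    start = 0
    while start < len(words):
        chunks.append(' '.join(words[start:start + max_words]))
        start += max_words - overlap
    return chunks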
We will be using the open-source multilingual-E5 as our embedding model, and Airavata by AI4Bharat as the generation model. Airavata is an Indic LLM: an instruction-tuned version of OpenHathi, a 7B-parameter model by Sarvam AI based on Llama 2 and trained on Hindi, English, and Hinglish.
Why did I choose multilingual-e5-base as the embedding model? According to its Hugging Face page, it supports 100 languages, though performance for low-resource languages may vary. I’ve found it performs reasonably well for Hindi. For higher accuracy, BGE-M3 is an option, but it’s resource-intensive. OpenAI embeddings could also work, but for now we’re sticking with open-source solutions. Therefore, E5 is a lightweight and effective choice.

Why Airavata? Giant LLMs like GPT-3.5 could do the job, but let’s just say I wanted to try something open-source and Indian.
I chose Chroma DB because I could use it in Google Colab without any hosting, and it’s good for experimentation. But you could also use a vector store of your choice. Here’s how to install it:
!pip install chromadb
We can then initialize the Chroma DB client with the following commands:
import chromadb
chroma_client = chromadb.Client()
Initializing Chroma DB this way creates an in-memory instance of Chroma. This is useful for testing and development, but not recommended for production use. For production, you should host it; please refer to the documentation for details.
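If you just want the index to survive session restarts without hosting a server, recent Chroma versions also offer a persistent client that writes to disk. A minimal sketch (the path is illustrative):

import chromadb

# Store the collection on disk instead of in memory
chroma_client = chromadb.PersistentClient(path="./chroma_store")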
Next, we need to create a vector store. Fortunately, Chroma DB offers built-in support for open-source sentence transformers. Here’s how to use it:
from chromadb.utils import embedding_functions
# Initializing the embedding model
sentence_transformer_ef = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="intfloat/multilingual-e5-base")

# Creating a collection
collection = chroma_client.create_collection(name="income_tax_hindi", embedding_function=sentence_transformer_ef, metadata={"hnsw:space": "cosine"})
We use metadata={“hnsw:space”: “cosine”} because ChromaDB’s default distance is Euclidean (L2), while cosine distance is typically preferred for RAG.
In Chroma DB, we cannot create a collection with a name that already exists. So, while experimenting, you might need to delete the collection to recreate it; here’s the command for deletion:
# command for deletion
chroma_client.delete_collection(name="income_tax_hindi")
Now that we’ve stored the data in the passed_sections , it’s time to ingest this content in ChromaDB. We’ll also include metadata and IDs. Metadata is optional, but since we have headers, let’s keep them for added context.
# Ingesting the documents
collection.add(
    documents=[section['content'] for section in passed_sections],
    metadatas=[{'header': section['header']} for section in passed_sections],
    ids=[str(i) for i in range(len(passed_sections))]
)
# Chroma DB requires an ID for every document, hence the generated ids
It’s about time; let’s start querying the vector store.
docs = collection.query(
    query_texts=["सेक्शन 80 C की लिमिट क्या होती है"],  # "What is the limit of Section 80C?"
    n_results=3
)
print(docs)
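The query result already contains the distances by default; the include parameter lets you control exactly which fields come back, which is handy for inspecting scores (an optional tweak, not in the original post):

# Request only documents and distances for this query
docs = collection.query(
    query_texts=["सेक्शन 80 C की लिमिट क्या होती है"],
    n_results=3,
    include=["documents", "distances"]
)
print(docs["distances"])  # lower distance = closer match in cosine space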
As you can see, we got relevant documents based on cosine distance. Let’s try to generate an answer using them. For that, we need an LLM.
As mentioned, we will be using Airavata, and since it is open-source, we will use transformers and quantization techniques to load the model. You can check more about ways to load open-source LLMs here and here. A T4 GPU environment is needed in Colab to run this.
Let’s start by installing the relevant libraries:
!pip install "bitsandbytes>=0.39.0"
!pip install --upgrade accelerate transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
# It should print cuda
Here is the code to load the quantized model.
model_name = "ai4bharat/Airavata"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=quantization_config, torch_dtype=torch.bfloat16)
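To confirm that the 8-bit load actually shrank the model, you can check its memory footprint; transformers exposes a helper for this (an optional sanity check, not in the original post):

# Optional sanity check: memory footprint of the quantized model, in GB
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")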
The model has been fine-tuned to follow instructions, and it works best when the instructions are in the same format as the training data. So we will write a function to organize everything into the appropriate format.
The functions below might seem overwhelming, but they are from the model’s official Hugging Face page. Such functions are available for most open-source models, so don’t worry if you don’t fully understand them.
def create_prompt_with_chat_format(messages, bos="<s>", eos="</s>", add_bos=True):
    formatted_text = ""
    for message in messages:
        if message["role"] == "system":
            formatted_text += "<|system|>\n" + message["content"] + "\n"
        elif message["role"] == "user":
            formatted_text += "<|user|>\n" + message["content"] + "\n"
        elif message["role"] == "assistant":
            formatted_text += "<|assistant|>\n" + message["content"].strip() + eos + "\n"
        else:
            raise ValueError(
                "Tulu chat template only supports 'system', 'user' and 'assistant' roles. Invalid role: {}.".format(
                    message["role"]
                )
            )
    formatted_text += "<|assistant|>\n"
    formatted_text = bos + formatted_text if add_bos else formatted_text
    return formatted_text
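Here is a quick look at what this produces for a single user message (the expected output is shown as comments):

# Sanity check: format a single user message
example = create_prompt_with_chat_format(
    [{"role": "user", "content": "नमस्ते"}], add_bos=False
)
print(example)
# <|user|>
# नमस्ते
# <|assistant|>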
For inference, we will use this function:
def inference(input_prompts, model, tokenizer):
    input_prompts = [
        create_prompt_with_chat_format([{"role": "user", "content": input_prompt}], add_bos=False)
        for input_prompt in input_prompts
    ]
    encodings = tokenizer(input_prompts, padding=True, return_tensors="pt")
    encodings = encodings.to(device)
    with torch.inference_mode():
        outputs = model.generate(encodings.input_ids, do_sample=False, max_new_tokens=1024)
    output_texts = tokenizer.batch_decode(outputs.detach(), skip_special_tokens=True)
    input_prompts = [
        tokenizer.decode(tokenizer.encode(input_prompt), skip_special_tokens=True) for input_prompt in input_prompts
    ]
    # Strip the echoed prompt so only the generated answer remains
    output_texts = [output_text[len(input_prompt):] for input_prompt, output_text in zip(input_prompts, output_texts)]
    return output_texts
Now the interesting part: the prompt to generate the answer. Here, we create a prompt that instructs the language model to generate answers based on specific guidelines. The instructions are simple: first, the model reads and understands the question, then reviews the context provided. It uses this information to craft a clear, concise, and accurate response. If you look at it carefully, this is a Hindi version of the typical RAG prompt.
The instructions are in Hindi because the Airavata model has been fine-tuned to follow instructions given in Hindi. You can read more about its training here.
prompt ='''आप एक बड़े भाषा मॉडल हैं जो दिए गए संदर्भ के आधार पर सवालों का उत्तर देते हैं। नीचे दिए गए निर्देशों का पालन करें:
1. **प्रश्न पढ़ें**:
- दिए गए सवाल को ध्यान से पढ़ें और समझें।
2. **संदर्भ पढ़ें**:
- नीचे दिए गए संदर्भ को ध्यानपूर्वक पढ़ें और समझें।
3. **सूचना उत्पन्न करना**:
- संदर्भ का उपयोग करते हुए, प्रश्न का विस्तृत और स्पष्ट उत्तर तैयार करें।
- यह सुनिश्चित करें कि उत्तर सीधा, समझने में आसान और तथ्यों पर आधारित हो।
### उदाहरण:
**संदर्भ**:
"नई दिल्ली भारत की राजधानी है और यह देश का प्रमुख राजनीतिक और प्रशासनिक केंद्र है। यह शहर ऐतिहासिक स्मारकों, संग्रहालयों और विविध संस्कृति के लिए जाना जाता है।"
**प्रश्न**:
"भारत की राजधानी क्या है और यह क्यों महत्वपूर्ण है?"
**प्रत्याशित उत्तर**:
"भारत की राजधानी नई दिल्ली है। यह देश का प्रमुख राजनीतिक और प्रशासनिक केंद्र है और ऐतिहासिक स्मारकों, संग्रहालयों और विविध संस्कृति के लिए जाना जाता है।"
### निर्देश:
अब, दिए गए संदर्भ और प्रश्न का उपयोग करके उत्तर दें:
**संदर्भ**:
{docs}
**प्रश्न**:
{query}
उत्तर:'''
Combining it all, the final function becomes:
def generate_answer(query):
    # Retrieve the top 3 matching sections from the vector store
    docs = collection.query(
        query_texts=[query],
        n_results=3
    )
    docs = [doc for doc in docs['documents'][0]]
    docs = "\n".join(docs)
    formatted_prompt = prompt.format(docs=docs, query=query)
    answers = inference([formatted_prompt], model, tokenizer)
    return answers[0]
Let’s try it out for some questions:
questions = [
    # Which medical expenses for a disabled dependent qualify for a tax exemption under Section 80DD?
    'सेक्शन 80डीडी के तहत विकलांग आश्रित के लिए कौन से मेडिकल खर्च पर टैक्स छूट मिल सकती है?',
    # Can the benefits of Section 80U and Section 80DD be claimed together?
    'क्या सेक्शन 80यू और सेक्शन 80डीडी का लाभ एक साथ उठाया जा सकता है?',
    # What is the limit of Section 80C?
    'सेक्शन 80 C की लिमिट क्या होती है?'
]

for question in questions:
    answer = generate_answer(question)
    print(f"Question: {question}\nAnswer: {answer}\n")
#OUTPUT
Question: सेक्शन 80डीडी के तहत विकलांग आश्रित के लिए कौन से मेडिकल खर्च पर टैक्स छूट मिल सकती है?
Answer: आश्रित के लिए टैक्स छूट उन खर्चों पर उपलब्ध है जो 40 फीसदी से अधिक विकलांगता वाले व्यक्ति के लिए आवश्यक हैं। इन खर्चों में अस्पताल में भर्ती होना, सर्जरी, दवाएं और चिकित्सा उपकरण शामिल हैं।

Question: क्या सेक्शन 80यू और सेक्शन 80डीडी का लाभ एक साथ उठाया जा सकता है?
Answer: नहीं।

Question: सेक्शन 80 C की लिमिट क्या होती है?
Answer: सेक्शन 80सी की सीमा 1.5 लाख रुपये है।

In English: the model answers that under Section 80DD, tax relief is available on expenses for a dependent with more than 40 percent disability, including hospitalization, surgery, medicines, and medical devices; that the benefits of Sections 80U and 80DD cannot be claimed together; and that the Section 80C limit is Rs 1.5 lakh.
Nice answers! You can also experiment with the prompt to return more detailed or shorter answers, or to change the model’s tone. I would love to see your experiments. 😊
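As one illustrative tweak (not from the original post), you could append an extra instruction to the prompt to keep answers short. The added Hindi line means “Keep the answer to one or two sentences”:

# Illustrative tweak: ask the model to keep answers brief
short_prompt = prompt.replace(
    "उत्तर:",
    "उत्तर को एक या दो वाक्यों में संक्षिप्त रखें।\n\nउत्तर:"
)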
That’s the end of the blog! I hope you enjoyed it. In this post, we took income tax-related information from websites, ingested it into ChromaDB using a multilingual open-source embedding model, and generated answers with an open-source Indic LLM.
I was a bit unsure about what details to include, but I’ve tried to keep it concise. If you’d like more information, feel free to check out my GitHub repo. I’d love to hear your feedback — whether you think something else should have been included or if this was good as is. See you soon, or as we say in Hindi, फिर मिलेंगे!
Developing a RAG pipeline tailored for Indian languages demonstrates the growing capabilities of Indic LLMs in addressing complex, multilingual needs. Indic LLMs empower organizations to process Hindi and other regional documents more accurately, ensuring information accessibility across diverse linguistic backgrounds. As we refine these models, the impact of Indic LLMs on local language applications will only increase, providing new avenues for improved comprehension, retrieval, and response generation in native languages. This innovation marks an exciting step forward for natural language processing in India and beyond.
Q. What environment should I use to run this pipeline?
A. Use a T4 GPU environment in Google Colab for optimal performance with the LLM and the vector store. This setup handles quantized models and heavy processing requirements efficiently.

Q. Can this pipeline work for languages other than Hindi?
A. Yes. While this example uses Hindi, you can adjust it for other languages supported by multilingual embedding models and appropriately tuned LLMs.

Q. Do I have to use ChromaDB?
A. ChromaDB is recommended for in-memory operations in Colab, but other vector databases like Pinecone or FAISS are also compatible, especially in production.

Q. Which models were used, and why?
A. We used multilingual E5 for embeddings and Airavata for text generation. E5 supports multiple languages, and Airavata is fine-tuned for Hindi, making them suitable for our Hindi-based application.