Effective retrieval methods are paramount in an era where data is the new gold. This article introduces an innovative data extraction and processing approach. Dive into the world of txtai and Retrieval Augmented Generation (RAG), where complex data becomes easily navigable and insightful. By the end of this article, you will know how the fusion of txtai with RAG pipelines revolutionizes our interaction with large data sets, making data retrieval faster and smarter.
This article was published as a part of the Data Science Blogathon.
txtai is an open-source Python package that uses Natural Language Processing (NLP) and Machine Learning to search, summarize, and analyze text data. It lets users quickly and effortlessly create powerful text-based applications without requiring extensive machine learning or data science knowledge.
With txtai, users can perform tasks such as document retrieval, keyword extraction, and text classification, making it a versatile tool for various text analysis needs.
GitHub: https://github.com/neuml/txtai
Official Documentation of txtai: https://neuml.github.io/txtai/
txtai is an open-source library on GitHub with roughly 6K stars. Those who would like to contribute can refer to the contributing guide in the repository.
One of the most exciting features of the open-source txtai library is its support for Retrieval Augmented Generation (RAG).
Retrieval Augmented Generation (RAG) combines the strengths of large language models with information retrieval systems to enhance the accuracy and contextuality of generated responses.
RAG pipelines in txtai enable dynamic fetching of relevant data during the response generation process, ensuring that outputs are based on pre-trained knowledge and the most current and relevant information available.
txtai’s architecture integrates seamlessly with various data sources and models, making it a powerful tool for providing contextually rich responses and a natural fit for implementing RAG.
LLMs are popular in AI and machine learning, but they are prone to hallucinations, which occur when the LLM generates factually incorrect output that seems plausible. RAG reduces this risk by limiting the context with a vector search query.
It’s a practical, production-ready use case for Generative AI, and some companies are building their businesses around it. txtai provides question-answering pipelines that retrieve relevant context and pass it to an LLM for analysis. RAG pipelines are a primary feature of txtai, which also doubles as a vector database, a combination it calls the “all-in-one embeddings database.” Open a Jupyter Notebook and follow the steps below; Google Colab, which is free to use, works well. The notebook shows how to build RAG pipelines with txtai.
Install txtai and its dependencies. Since this tutorial uses optional pipelines, we also need to install the extra pipeline packages.
%%capture
!pip install git+https://github.com/neuml/txtai#egg=txtai[pipeline] autoawq==0.1.5
# Download data sample for this tutorial
!wget -N https://github.com/neuml/txtai/releases/download/v6.2.0/tests.tar.gz
!tar -xvzf tests.tar.gz
# Import NLTK and download the punkt tokenizer data
import nltk
nltk.download('punkt')
The LLM pipeline can load local LLM models from the Hugging Face Hub. If you’re using LLM API services like OpenAI or Cohere, you can replace this call with an API call.
# Import the LLM pipeline from txtai
from txtai.pipeline import LLM
# Create LLM pipeline
llm = LLM("TheBloke/Mistral-7B-OpenOrca-AWQ")
We’ll now load a document to query. Textractor can extract text from commonly used document formats like doc, pdf, and xlsx.
# Import the Textractor pipeline from txtai
from txtai.pipeline import Textractor

# Create a Textractor pipeline that extracts text from documents
textractor = Textractor()

# Change the file name below to match your own document
texttr = textractor("txtai/document.docx")
print(texttr)
We’ll create a basic LLM pipeline by inputting a question and context (the entire file), generating a prompt, and running it through the LLM.
def execute(question, texttr):
    prompt = f"""<|im_start|>system
You are a friendly assistant. You answer questions from users.<|im_end|>
<|im_start|>user
Answer the following question using only the context below. Only include information specifically discussed.

question: {question}
context: {texttr} <|im_end|>
<|im_start|>assistant
"""

    return llm(prompt, maxlength=4096, pad_token_id=32000)
execute("Tell me about txtai in one sentence", texttr)
execute("What model does txtai recommend for transcription?", text)
execute("I don't know anything about txtai, what would be the best thing /
to read?", text)
Generative AI is impressive. Even for those familiar with it, the language model’s understanding and quality of answers are astounding. Let’s explore scaling it to a larger set of documents.
When dealing with many documents, such as hundreds or thousands, putting them all into a single prompt can quickly exhaust GPU memory.
Retrieval augmented generation helps by using a query step to find the best candidates to add to the prompt.
This query usually employs vector search, but any search method that returns results can be used. Many production systems have tailored retrieval pipelines that supply context to LLM prompts.
This involves setting up a vector database of file content, where each paragraph is stored as a separate row.
import os

# Import the Embeddings package
from txtai import Embeddings

# Generator that streams extracted paragraphs from each document in a directory
def stream(path):
    for f in sorted(os.listdir(path)):
        fpath = os.path.join(path, f)

        # Only index supported document types
        if f.endswith(("docx", "xlsx", "pdf")):
            print(f"Indexing {fpath}")
            for paragraph in textractor(fpath):
                yield paragraph

# Document text extraction, split into paragraphs
textractor = Textractor(paragraphs=True)

# Vector database to index articles
embeddings = Embeddings(content=True)
embeddings.index(stream("txtai"))
This pipeline takes the input question, runs a vector search, and builds a context from the search results. The context is then inserted into a prompt template and run through the LLM.
# Build a context string from the vector search results for a question
def context(question):
    return "\n".join(x["text"] for x in embeddings.search(question))

# RAG pipeline: retrieve context, then run the LLM prompt
def rag(question):
    return execute(question, context(question))

rag("What model does txtai recommend for image captioning?")

output = rag("When was the BLIP model added for image captioning?")
print(output)
With vector search, we used a relevant portion of the documents to generate the answer, resulting in a similar output to the previous method.
When working with large volumes of data, it’s important to include only the most relevant context in the LLM prompt. Otherwise, the LLM may struggle to generate high-quality answers.
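One simple way to do this, sketched below, is to cap the number of search results that go into the context. The embeddings.search call accepts a limit argument; the value of 3 and the helper names here are assumed for illustration.

# Sketch: restrict the context to the top-k most relevant paragraphs
def context_topk(question, k=3):
    return "\n".join(x["text"] for x in embeddings.search(question, limit=k))

def rag_topk(question, k=3):
    return execute(question, context_topk(question, k))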
Implementing RAG with txtai can sometimes present challenges, such as integration complexities or performance issues. Common issues include difficulties in configuring the RAG pipeline with specific data sources and optimizing query response times.
To address these challenges, fine-tune the model parameters, keep the data sources up-to-date, and experiment with different configurations until you find the optimal balance for your specific use case.
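As one example of such experimentation, the embeddings model behind the vector index can be changed when the index is built. The sentence-transformers model named below is an assumption for illustration, not a recommendation from this article.

# Sketch: rebuilding the index with an explicitly chosen embeddings model
# The model path below is an assumed example
embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)
embeddings.index(stream("txtai"))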
To troubleshoot issues with txtai effectively, it is important to have a solid understanding of its documentation, which provides valuable insights and examples that can help you optimize performance and accuracy.
Alternatively, you can search the Issues list on the txtai GitHub repository for errors similar to yours.
The future of RAG and txtai looks promising, with continuous improvements enhancing their capabilities. We can expect more advanced AI models to be integrated and txtai’s functionality to expand, opening new frontiers in semantic search and data processing. txtai is an open-source library that welcomes contributions and offers a great learning opportunity.
To summarize, in this article you learned what txtai is, how Retrieval Augmented Generation reduces LLM hallucinations by grounding answers in retrieved context, and how to build a RAG pipeline with txtai using the Textractor, Embeddings, and LLM components.
Q1. How do txtai’s RAG pipelines improve search accuracy?
A. Txtai’s RAG pipelines use advanced language models to improve search accuracy by understanding query context and nuances, resulting in more relevant results.
Q2. Can txtai’s RAG pipelines handle large-scale data?
A. Txtai’s RAG pipelines effectively handle large-scale data due to the use of vector search and optimized database indexing. However, scalability may depend on data complexity, computational resources, and pipeline configuration.
Q3. How difficult is it to integrate txtai’s RAG pipelines with existing data management systems?
A. Integrating txtai’s RAG pipelines with existing data management systems is complex and requires custom development work for compatibility. Careful planning and understanding of the existing infrastructure are necessary. However, txtai is flexible and adaptable to various environments.
Do you have any questions?
You can ask questions in the comments below or connect with me. My social media accounts are below, and I promise I’ll do my best to answer them.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.