OpenAI’s API provides access to some of the most advanced language models available today. By combining it with LangChain and LlamaIndex, developers can integrate the power of these models into their own applications, products, or services. With just a few lines of code, you can tap into the vast knowledge and capabilities of OpenAI’s language models, opening up a world of exciting possibilities.
At the core of OpenAI’s offering is the large language model, or LLM for short. LLMs can generate human-like text and understand the context of complex language structures. By training on massive amounts of diverse data, LLMs have acquired a remarkable ability to understand and generate contextually relevant text across various topics.
In this article, we will explore how to combine OpenAI’s API with LangChain and LlamaIndex to extract information from PDF documents.
Use these two open-source libraries to build applications that leverage the power of large language models (LLMs). LlamaIndex provides a simple interface between LLMs and external data sources, while LangChain provides a framework for building and managing LLM-powered applications. Although both LlamaIndex and LangChain are still under active development, they already have the potential to revolutionize the way we build applications.
First, let’s install the necessary libraries and import them.
!pip install llama-index==0.5.6
!pip install langchain==0.0.148
!pip install PyPDF2
from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex, LLMPredictor, ServiceContext
from langchain import OpenAI
import PyPDF2
import os
To begin using OpenAI’s API service, the first step is to sign up for an account. Once you have successfully signed up, you can create an API key specific to your account.
I recommend setting the API key as an environment variable to ensure seamless integration with your code and applications. Doing so lets you securely store and retrieve the API key within your environment without explicitly exposing it in your code. This practice helps maintain the confidentiality of your API key while ensuring easy accessibility when needed.
os.environ["OPENAI_API_KEY"] = "API KEY"  # replace "API KEY" with your actual key
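Hardcoding the key in the script, as above, is acceptable for quick experiments, but following the advice about environment variables, a safer pattern is to export the key in your shell and only read it at runtime. A minimal sketch:

# Export the key in your shell beforehand, e.g. export OPENAI_API_KEY="sk-..."
# and read it at runtime instead of embedding it in the code.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set in the environment")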
Let’s get the current working directory where the documents are residing and save it in a variable.
current_directory = os.getcwd()
Now we will create an object of the LLMPredictor class. LLMPredictor accepts a parameter, llm. Here we use the "text-davinci-003" model from OpenAI’s API.
llm_predictor = LLMPredictor(llm=OpenAI(model_name="text-davinci-003"))
We can also provide several other optional parameters, such as temperature, which controls the randomness of the output, and max_tokens, which caps the length of the response.
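For instance, here is a sketch with two of these optional parameters (the values are illustrative, not recommendations):

# temperature=0 makes the completions more deterministic;
# max_tokens=256 caps the length of each completion.
llm_predictor = LLMPredictor(
    llm=OpenAI(model_name="text-davinci-003", temperature=0, max_tokens=256)
)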
Next, we will create an object for the ServiceContext class. We initialize the ServiceContext class by using the from_defaults method, which initializes several commonly used keyword arguments, so we don’t need to define them separately.
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
In this case, we call the from_defaults method with the llm_predictor parameter set to the previously created llm_predictor object, which makes it the llm_predictor attribute of the resulting ServiceContext instance.
The next step is to iterate through each document present in the directory.
for filename in os.listdir(current_directory):
    if os.path.isfile(os.path.join(current_directory, filename)):
The first line iterates through every entry in current_directory, and the second line ensures that we only process regular files, not directories.
documents = SimpleDirectoryReader(input_files=[filename]).load_data()
The SimpleDirectoryReader class reads data from a directory. Here we pass a list containing the current filename to its input_files parameter.
The load_data method is called on the SimpleDirectoryReader instance. This method is responsible for loading the data from the specified input files and returning the loaded documents.
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
The GPTSimpleVectorIndex class is designed to create an index for efficient search and retrieval of documents. We call its from_documents method with two parameters: documents, the list of documents we just loaded, and service_context, the service context created earlier.
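If you plan to query the same document repeatedly, you can persist the index instead of re-embedding the document on every run. In the 0.5.x releases of llama-index, this can be done with save_to_disk and load_from_disk, as sketched below (the filename index.json is arbitrary):

index.save_to_disk("index.json")  # persist the computed embeddings
index = GPTSimpleVectorIndex.load_from_disk("index.json")  # reload later without re-embedding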
Now we will construct our prompt. I am trying to extract the total number of cases registered under “Cyber Crimes.” Hence my prompt is something like this,
prompt = """
What is the total number of cases registered under Cyber Crimes?
"""
response = index.query(prompt)
print(response)
Now we query the previously created index with our prompt using the line of code above, and the model returns a natural-language answer containing the requested figure.
To get back only the count, we can rewrite the prompt to something like this:
“What is the total number of cases registered under Cyber Crimes? return the integer result only”
This time, the model responds with just the integer.
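In code, the refined query looks like this (the actual count depends on your documents, so no value is shown here):

prompt = "What is the total number of cases registered under Cyber Crimes? return the integer result only"
response = index.query(prompt)
print(response)  # prints just the integer count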
We can also save the response to any data structure, such as a dictionary. To do this, first create an empty dictionary, then assign each response to a suitable key; in our case, that could be the associated file name, the year of the crime, and so on. Putting everything together:
current_directory = os.getcwd()

def extract_data():
    # Collect one answer per source document, keyed by filename.
    cyber_crimes = {}
    llm_predictor = LLMPredictor(llm=OpenAI(model_name="text-davinci-003"))
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
    for filename in os.listdir(current_directory):
        if os.path.isfile(os.path.join(current_directory, filename)):
            documents = SimpleDirectoryReader(input_files=[filename]).load_data()
            index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
            prompt = """
            What is the total number of cases registered under Cyber Crimes?
            Return the integer result only.
            """
            response = index.query(prompt)
            cyber_crimes[filename] = response.response
            print(response)
    return cyber_crimes
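With the function defined, here is a quick usage sketch (the filenames and counts depend entirely on the PDFs in your working directory):

# Run the extraction and inspect the collected answers.
results = extract_data()
for filename, count in results.items():
    print(f"{filename}: {count}")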
In this article, we explored the exciting possibilities of using OpenAI’s API combined with LangChain and LlamaIndex to extract valuable information from PDF documents effortlessly.
The possibilities for leveraging the combined power of OpenAI’s API, LangChain, and LlamaIndex are virtually limitless; here, we only scratched the surface of what these tools can offer.
We can also go a step further and instruct the model itself on how to format the response. For instance, if we prefer the output as a JSON object, we can specify this directly in the prompt.
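For example, here is a sketch of such a prompt; the JSON keys "crime" and "total_cases" are invented for illustration, and since the model is not guaranteed to return valid JSON, json.loads may fail:

import json

prompt = """
What is the total number of cases registered under Cyber Crimes?
Return the answer as a JSON object of the form:
{"crime": "Cyber Crimes", "total_cases": <integer>}
"""
response = index.query(prompt)
data = json.loads(response.response)  # may raise JSONDecodeError if the model deviates from the format
print(data["total_cases"])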
Large language models (LLMs), such as GPT-3.5, are powerful tools for a wide range of purposes. They can generate human-like text, assist in natural language understanding, and handle language-related tasks such as translation, summarization, and chatbot interactions.
Their ability to generate human-like text supports language translation and understanding, and offers valuable insights and information across various domains. LLMs can enhance productivity, facilitate natural language processing tasks, and assist in content creation and communication.
That said, LLMs have notable limitations:
1. Lack of Common Sense: Language models like LLMs often struggle with common-sense reasoning and understanding context, leading to inaccurate or nonsensical responses.
2. Ethical Concerns: LLMs can potentially generate biased, offensive, or harmful content if not carefully monitored and regulated, raising ethical concerns regarding the responsible use of such models.
LLMs are a specific type of generative AI model that focuses on generating human-like text. Generative AI is a broader term encompassing AI models that create new content or generate output based on input or learned patterns; LLMs are thus a subset of generative AI models.
When selecting an LLM for your use case:
1. Determine your requirements, such as text generation, language understanding, or tasks like translation or summarization.
2. Consider the size and capabilities of available models like GPT-3.5, GPT-3, or other variants to match your needs.
3. Evaluate the model’s performance metrics, including accuracy, fluency, and coherence, by reviewing documentation or running sample tests.