Step into the forefront of language processing! In a world where language is the essential link between humanity and technology, advances in Natural Language Processing have reached extraordinary heights. At the centre of this progress sits the Large Language Model, a transformative force reshaping how we interact with text-based information. In this learning journey, you will delve into the intricacies of LangChain, a cutting-edge framework for building on top of these models. Have you ever wondered what the "chain" in LangChain actually is?
LangChain serves as a gateway into the dynamic field of Large Language Models, offering a deeper understanding of how these models transform raw inputs into refined, human-like responses. Along the way, you will unpack the essential building blocks of LangChain, from LLMChains and Sequential Chains to the more intricate Router Chains, and learn how LangChain works with LLM models.
A Large Language Model (LLM) refers to a type of artificial intelligence designed to understand and generate human-like text. These models, such as OpenAI’s GPT-3.5, are trained on extensive text data to learn the patterns and structures of human language. They can perform various language-related tasks, including translation, content creation, answering questions, and more.
LLMs are valuable tools in natural language processing and have applications in areas like chatbots, content generation, and language translation services.
Before we unravel the intricacies of LangChain Chains, let’s grasp the essence of LangChain itself. LangChain is a robust library designed to simplify interactions with various large language model (LLM) providers, including OpenAI, Cohere, Bloom, Huggingface, and others. What sets LangChain apart is its unique feature: the ability to create Chains, logical connections that bridge one or more LLMs.
LangChain provides limitless opportunities, limited only by your imagination.
With LangChain, you have the power to create extraordinary experiences. The potential to transform these ideas into reality is right at your fingertips.
At the heart of LangChain is a vital component known as LangChain Chains, which forms the core connection among one or more large language models (LLMs).
In certain sophisticated applications, it becomes necessary to chain LLMs together, either with each other or with other elements. These Chains empower us to integrate numerous components, weaving them into a cohesive application. Let’s delve deeper into the distinct types of Chains.
Moreover, the structured approach offered by Chains in LLM ensures flawless and effective processing, paving the way for the development of advanced applications tailored to a wide array of user requirements. This represents a significant advancement in the realm of natural language processing, as these intricate connections serve as the fundamental framework of LangChain, facilitating seamless interactions among multiple Large Language Models (LLMs).
Chains are invaluable because of their capacity to blend diverse components into a single, coherent application. Imagine this scenario: a chain is crafted to take in user input, polish it with a PromptTemplate, and then pass the refined prompt on to a large language model (LLM). This streamlined process not only simplifies but also enriches the overall functionality of the system. In essence, chains serve as the linchpin that connects the different parts of an application and enhances its capabilities.
Furthermore, chains provide us with the ability to build complex applications by linking multiple chains together or by incorporating chains with other vital elements. This approach enables a sophisticated and nuanced method for developing applications, allowing for intricate and advanced functionalities.
There are many different Chains in LangChain that we can use. Here, we will walk through three of the fundamental ones: the LLM Chain, the Sequential Chain, and the Router Chain.
The most basic form of chain within this system is the LLMChain, widely recognized and fundamental. Its operation involves a structured arrangement: a PromptTemplate, an OpenAI model (either a Large Language Model or a ChatModel), and an optional output parser. Within this setup, the LLMChain accepts various input parameters, employs the PromptTemplate to transform them into a coherent prompt, and feeds the polished prompt into the model. After receiving the output, it uses the OutputParser, if provided, to refine and format the result into its final usable form. A concrete example illustrating the functionality of LLM chains is detailed below.
Creating Chains, especially LLM Chains, is a meticulous endeavor, requiring the harnessing of Large Language Models in LangChain. These chains serve as intricate channels, facilitating the smooth exchange of information and engagement. Through careful structuring, developers can design vibrant applications capable of understanding user inputs, utilizing LLMs to generate intelligent responses, and customizing the output to meet specific needs effectively.
Now let’s look deeper into how we can use LLM Chains in LangChain.
import langchain
import openai
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from getpass import getpass

# Prompt for the OpenAI API key so it never has to be hard-coded in the script.
OPENAI_API_KEY = getpass()
We initialize the OpenAI Large Language Model with specific parameters, including a temperature of 0.9, which affects the diversity of generated responses. Furthermore, users must define a ‘PromptTemplate’ to input a variable (in this case, “product”) and create a standardized prompt structure. At runtime, the placeholder ‘{product}’ can be dynamically populated with different product names.
llm = OpenAI(temperature=0.9,
             openai_api_key=OPENAI_API_KEY
             )

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
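Before wiring the template into a chain, it helps to see what it renders. Here is a quick, hedged check (the product name below is just an illustration):

print(prompt.format(product="eco-friendly water bottles"))
# -> What is a good name for a company that makes eco-friendly water bottles?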
We create an instance of the ‘LLMChain’ class, using a predefined OpenAI Large Language Model and a specified prompt template. Now, we have the capability to apply the chain to a product such as a “gaming laptop” using the chain.run command. This means the chain can dynamically process and generate responses tailored to this specific product input.
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
print(chain.run("gaming laptop"))
Output:
Based on this, the chain suggests a company name such as “GamerTech Laptops”.
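The optional output parser mentioned earlier can also be plugged in. The following is a minimal, hedged sketch (not part of the original example) that uses LangChain’s CommaSeparatedListOutputParser to turn the model’s reply into a Python list; the prompt wording and product are illustrative:

from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.chains import LLMChain

# Parser that expects a comma-separated reply and returns a list of strings.
parser = CommaSeparatedListOutputParser()

list_prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest five names for a company that makes {product}.\n"
             "{format_instructions}",
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

list_chain = LLMChain(llm=llm, prompt=list_prompt)
raw_output = list_chain.run("gaming laptop")

# Convert the raw comma-separated text into a Python list of name ideas.
print(parser.parse(raw_output))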
A sequential chain is a chain that combines various individual chains, where the output of one chain serves as the input for the next in a continuous sequence. It operates by running a series of chains consecutively.
There are two types of sequential chains:
Simple Sequential Chain, which handles a single input and output, and
Sequential Chain, which manages multiple inputs and outputs simultaneously.
Sequential chains, in their simplest form, consist of steps where each step takes one input and produces one output. The output from one step becomes the input for the next.
This straightforward approach is effective when dealing with sub-chains designed for singular inputs and outputs. It ensures a smooth and continuous flow of information, with each step seamlessly passing its output to the subsequent step.
Simple Sequential Chains allow a single input to undergo a series of coherent transformations, resulting in a refined output. This sequential approach ensures systematic and efficient handling of data, making it ideal for scenarios where a linear flow of information processing is essential.
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.prompts import ChatPromptTemplate
from langchain.chains import SimpleSequentialChain
We initialize an OpenAI Large Language Model with a temperature setting of 0.7 and an API key. Then, we create a specific chat prompt template with a placeholder for a product name. Subsequently, we form an LLMChain, which allows the generation of responses based on the provided prompt. We repeat this process for two different chains.
# First LLMChain: suggest a company name for the product.
llm = OpenAI(temperature=0.7, openai_api_key=OPENAI_API_KEY)
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe a company that makes {product}?"
)
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# Second LLMChain: write a short description for that company.
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following company:{company_name}"
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)
We create an overall Simple Sequential Chain comprising the two individual chains, chain_one and chain_two. When we execute it with the input “gaming laptop,” it processes the input through the chains in order and returns an output, demonstrating the step-by-step sequential execution.
overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True
                                             )
overall_simple_chain.run("gaming laptop")
Output:
Not all sequential chains operate with a single string input and a single string output. In more intricate setups, these chains handle multiple inputs and generate multiple final outputs, and careful naming of the input and output variables becomes especially important.
A more general form of sequential chains allows for multiple inputs/outputs. Any step in the chain can take in multiple inputs.
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.prompts import ChatPromptTemplate
from langchain.chains import SequentialChain
llm = OpenAI(temperature=0.7, openai_api_key=OPENAI_API_KEY)
We define a prompt template, instructing the system to perform a specific task. We then create a corresponding LLMChain, using the designated Large Language Model (LLM) and the defined prompt template. The chain is set up to take the input, pass it through the LLM, and generate the output. We repeat this process to establish four distinct chains.
Review = "Les ordinateurs portables GamersTech impressionne par ses
performances exceptionnelles et son design élégant. De sa configuration
matérielle robuste à un clavier RVB personnalisable et un système de
refroidissement efficace, il établit un équilibre parfait entre prouesses
de jeu et portabilité."
# prompt template 1: translate the review to English
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}"
)
# chain 1: input= Review and output= English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_key="English_Review"
                     )

# prompt template 2: summarize the English review
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}"
)
# chain 2: input= English_Review and output= summary
chain_two = LLMChain(llm=llm, prompt=second_prompt,
                     output_key="summary"
                     )

# prompt template 3: detect the language of the original review
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# chain 3: input= Review and output= language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language"
                       )

# prompt template 4: follow-up message in the detected language
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# chain 4: input= summary, language and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message"
                      )
An overall Sequential Chain named ‘overall_chain’ is created, incorporating four individual chains ‘chain_one’, ‘chain_two’, ‘chain_three’, and ‘chain_four’. The input variable “Review” is processed through these chains, generating three distinct output variables: “English_Review,” “summary,” and “followup_message.” The ‘overall_chain’ executes the input review through the specified chains and produces these outputs, facilitating a structured, sequential processing flow with detailed outputs.
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "followup_message"],
    verbose=True
)
overall_chain(Review)
Output:
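Because the SequentialChain declares several output_variables, calling it returns a dictionary keyed by those names. Here is a brief sketch of how the result can be inspected (assuming the call above succeeded):

result = overall_chain({"Review": Review})

print(result["English_Review"])    # the review translated into English
print(result["summary"])           # the one-sentence summary
print(result["followup_message"])  # the reply written in the detected language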
The Router Chain is used for complicated tasks. If we have multiple subchains, each of which is specialized for a particular type of input, we could have a router chain that decides which subchain to pass the input to.
It consists of the router chain itself, which decides where each input should go; a set of destination chains, each specialized for a particular type of input; and a default chain that handles anything the router cannot confidently place.
This involves directing an input toward a specific chain based on what exactly that input is. When there are several subchains, each tailored for distinct input types, a router chain comes into play. This router chain acts as a decision-maker, determining which specialized subchain to send the input to. Essentially, it enables the seamless routing of inputs to the appropriate subchains, ensuring efficient and precise processing based on the input’s specific characteristics.
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain,RouterOutputParser
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0.7, openai_api_key=OPENAI_API_KEY)
Let’s consider a scenario where we need to direct inputs to specialized chains based on subjects such as Maths, Physics, History, or Computer Science. To accomplish this, we create a distinct prompt for each subject. Below we define three of them (one for physics questions, another for math queries, and a third for history inquiries); a computer-science prompt could be added in exactly the same way. Each prompt is designed to cater to the unique needs of its subject area.
physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise\
and easy to understand manner. \
When you don't know the answer to a question you admit\
that you don't know.
Here is a question:
{input}"""
math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts,
answer the component parts, and then put them together\
to answer the broader question.
Here is a question:
{input}"""
history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people,\
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence\
and the ability to make use of it to support your explanations \
and judgements.
Here is a question:
{input}"""
Furthermore, detailed information, including names and descriptions, can be attached to these prompt templates. This additional context provides a comprehensive understanding of each template’s purpose. This information is then supplied to the router chain. The router chain then determines which subchain to route to based on the specific subject, ensuring that the appropriate prompt template is utilized for precise and effective responses.
# Defining the prompt templates
prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template
    },
    {
        "name": "History",
        "description": "Good for answering history questions",
        "prompt_template": history_template
    }
]
Next, our focus shifts to crafting destination chains. These chains are activated by the RouterChain, functioning as individual language model chains, specifically LLM chains. Additionally, a default chain is outlined to handle situations where the router encounters ambiguity and cannot determine the suitable subchain to utilize. This default chain acts as a fallback option, ensuring a response even in cases of indecision.
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
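For the prompt_infos defined above, destinations_str is simply a newline-separated listing of names and descriptions:

physics: Good for answering questions about physics
math: Good for answering math questions
History: Good for answering history questions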
We establish a template guiding the LLM in directing interactions between various chains. This template not only outlines the specific task instructions but also dictates the precise format that the output should adhere to, ensuring a standardized and consistent response mechanism.
MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not\
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
<< OUTPUT (remember to include the ```json)>>"""
A preset prompt template is established to accommodate all types of input text. An associated LLMChain, named ‘default_chain,’ is then crafted using the designated Large Language Model and the predefined prompt. This setup enables the Large Language Model to generate responses based on any provided input text.
default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)
Moving forward, a router template is built that lists the available destination prompts (here physics, math, and History). From this template, a prompt template tailored for the router is created, and a router chain is then established using the Large Language Model and that router prompt.
To improve decision-making capabilities, a router output parser is introduced. This parser assists the router chain in efficiently navigating between subchains. This comprehensive arrangement ensures that inputs are directed precisely to specific subchains, leading to accurate and targeted responses across various destination categories.
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
A MultiPromptChain is created, incorporating a router chain to intelligently route inputs to specific destination chains. Additionally, a default chain is included to handle cases where the router chain may encounter ambiguity, ensuring a structured and effective processing flow with verbose logging enabled for detailed insights.
chain = MultiPromptChain(router_chain=router_chain,
                         destination_chains=destination_chains,
                         default_chain=default_chain, verbose=True
                         )
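With everything wired together, the router can be tried out on a question. Here is a hedged example (the question below is illustrative, and we would expect the physics subchain to be selected):

print(chain.run("What is black body radiation?"))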
Output
Delve into the real-world uses and achievements of solutions driven by Large Language Models (LLMs), demonstrating their varied influence across sectors. Within customer support, the collaboration between LangChain and LLMs has transformed services through the implementation of smart chatbots. These bots provide immediate, personalized support, efficiently managing a large influx of queries. By reducing wait times, they significantly elevate customer satisfaction levels.
LangChain utilizes the power of Large Language Models (LLMs) to enhance the shopping journey. Developers can create applications that understand product specifics, user likes, and purchasing patterns. By harnessing the capabilities of LLMs, these platforms offer tailored product recommendations, address customer inquiries, and create captivating product descriptions. This leads to increased sales and higher customer engagement levels.
LangChain is revolutionizing patient care and diagnosis through applications powered by Large Language Models (LLMs). With LangChain’s support, developers can build virtual assistants that understand medical inquiries, provide accurate information, assess patients based on their symptoms, and expedite access to healthcare knowledge. This advancement not only lightens the workload for medical professionals but also helps patients make well-informed decisions about their health.
LangChain empowers developers to create applications that produce imaginative and contextually relevant content, including blog articles and product descriptions. These applications support content creators by enhancing creativity, streamlining the writing process, and maintaining consistency in tone and style.
The practical implementations highlighted here demonstrate the versatility and impact of solutions driven by Large Language Models (LLMs) across various industries. LangChain’s potential enables developers to create innovative solutions, streamline operations, improve user engagement, and fuel business growth. Success stories abound, ranging from significant decreases in support ticket resolution times to higher customer satisfaction ratings for e-commerce chatbots, showcasing the tangible benefits of LLM-powered applications.
LangChain offers an expansive realm of opportunities for developing applications that consist of Large Language Model capabilities. Whether your focus is on tasks like text completion, language translation, sentiment analysis, text summarization, or named entity recognition, LangChain stands as a versatile solution.
LangChain offers a comprehensive framework for building powerful applications through intelligent chaining techniques. By understanding the intricacies of different chains and their configurations, developers can create tailored solutions for complex tasks. Routing inputs through Router Chains adds a layer of intelligent decision-making, ensuring we direct inputs to the most suitable processing paths. With this knowledge, developers can design innovative applications across industries, streamlining processes, enhancing user experiences, and ultimately revolutionizing the way we interact with language in the digital realm. We hope you enjoyed the article and now have a clearer understanding of LangChain and its LLM-powered chains.
Q1. What is LangChain?
A1: LangChain is a sophisticated framework that leverages Large Language Models (LLMs) to streamline language processing tasks. It integrates components such as LLMChains and Router Chains, allowing seamless task routing and efficient processing, leading to the development of intelligent applications.
Q2. What is an LLMChain?
A2: LLMChain is a fundamental element of LangChain. It operates by employing a PromptTemplate to format user inputs, passing them to an LLM for processing. The optional OutputParser refines the output, ensuring it aligns with the desired format, making LLMChain an essential tool for coherent language generation.
Q3. What are Sequential Chains?
A3: Sequential Chains combine various subchains, allowing the output of one to serve as the input for the next. Simple Sequential Chains handle single inputs and outputs, while more complex Sequential Chains manage multiple inputs and outputs simultaneously, streamlining the flow of information in LangChain applications.
Q4. What are Router Chains?
A4: Router Chains are pivotal for intricate tasks that involve multiple specialized subchains. The Router Chain decides which subchain an input should be routed to based on its specific characteristics and, together with the Destination Chains and a Default Chain, directs inputs to the most suitable subchain for precise processing.