Artificial intelligence (AI) is a rapidly developing field. Language models have advanced to the point where AI agents can perform complex tasks and make nuanced decisions. However, as these agents' capabilities have grown, the infrastructure that supports them has struggled to keep up. Enter LangGraph, a library that rethinks how AI agents are built and executed at runtime.
Before LangGraph, the AgentExecutor class in the LangChain framework was the main tool for building and executing AI agents. This class relied on a straightforward but powerful idea: run an agent in a loop, asking it to make a decision, carry it out, and log the observation. This technique had its uses, but its adaptability and customization options were inherently limited.
Although functional, the AgentExecutor class imposed a particular pattern of tool calling and error handling, limiting developers' ability to design more dynamic and flexible agent runtimes. As AI agents became more sophisticated, the need for a more adaptable architecture emerged.
In response to these constraints, LangGraph presents a novel paradigm for building agents and their runtimes. Large Language Models (LLMs) are the foundation for designing sophisticated AI agents, and LangGraph, built on top of LangChain, is intended to make the process of creating cyclic graphs around them easier.
At its foundation, LangGraph views agent workflows as cyclic graph topologies. This enables more varied and nuanced agent behavior, going beyond the linear execution model of its predecessors. By drawing on graph theory, LangGraph opens new avenues for developing intricate, interconnected agent systems.
State Management: As agents become more sophisticated, it becomes necessary to track and update state data while the agent executes. LangGraph's stateful graph methodology satisfies this need.
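As a minimal sketch of what this looks like in code: LangGraph threads a state dictionary through every node of the graph, and a reducer such as add_messages controls how updates are merged. The class name AgentState below is our own choice for illustration; the article's later examples use the equivalent built-in MessagesState.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

# the state schema that LangGraph passes to every node in the graph
class AgentState(TypedDict):
    # the add_messages reducer appends new messages instead of overwriting the list
    messages: Annotated[list, add_messages]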
The functionality of LangGraph is based on several essential elements: nodes, edges, and state.
The following diagram illustrates how these elements work together:
As shown in the image, nodes such as the LLM and the tools are represented by circles or rhombuses, and the flow of information between nodes is represented by arrows.
The library's interface is modeled on the popular NetworkX library, which makes it approachable for developers with prior experience in graph-based programming.
LangGraph’s approach to agent runtime differs significantly from that of its forerunners. Instead of a basic loop, it enables the construction of intricate, networked systems of nodes and edges. With this structure, developers can design more complex decision-making procedures and action sequences.
Now, let us build agents with LangGraph to understand these ideas better. First we will implement tool calling, then use a pre-built agent, and finally build an agent ourselves in LangGraph.
Create an OpenAI API key to access the LLMs and a Weather API key (from weatherapi.com) to access weather information. Store these keys in a '.env' file:
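For example, the '.env' file might look like this. The values are placeholders; a TAVILY_API_KEY is assumed here as well, since the Tavily search tool used later reads it from the environment.

OPENAI_API_KEY=your-openai-api-key
WEATHER_API_KEY=your-weatherapi-key
TAVILY_API_KEY=your-tavily-api-key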
Load and import the keys as follows:
import os
from dotenv import load_dotenv

# load OPENAI_API_KEY, WEATHER_API_KEY (and TAVILY_API_KEY for search) from the .env file
load_dotenv('.env')
WEATHER_API_KEY = os.environ['WEATHER_API_KEY']

# Import the required libraries and methods
import json
import requests
import rich
from typing import List, Literal
from IPython.display import Image, display
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
We will define two tools: one to get weather information when the query is specific to weather, and another to search the internet when the LLM doesn't know the answer to the given query:
@tool
def get_weather(query: str) -> dict | str:
    """Search weatherapi to get the current weather."""
    base_url = "http://api.weatherapi.com/v1/current.json"
    complete_url = f"{base_url}?key={WEATHER_API_KEY}&q={query}"
    response = requests.get(complete_url)
    data = response.json()
    # weatherapi returns a "location" key only on successful lookups
    if data.get("location"):
        return data
    else:
        return "Weather Data Not Found"

@tool
def search_web(query: str) -> list:
    """Search the web for a query."""
    tavily_search = TavilySearchResults(max_results=2, search_depth='advanced', max_tokens=1000)
    results = tavily_search.invoke(query)
    return results
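Before binding the tools to the LLM, you can sanity-check them by invoking each one directly; the sample inputs here are just illustrations:

# invoke the tools directly to verify the API keys and output format
print(get_weather.invoke("London"))
print(search_web.invoke("capital of Greenland"))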
To make these tools available to the LLM, we can bind them to the LLM as follows:
# initialize the chat model and bind the tools so the LLM can emit tool calls
gpt = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [search_web, get_weather]
gpt_with_tools = gpt.bind_tools(tools)
Now, let's invoke the LLM with a prompt to see the results:
prompt = """
Given only the tools at your disposal, mention tool calls for the following tasks:
Do not change the query given for any search tasks
1. What is the current weather in Greenland today
2. Can you tell me about Greenland and its capital
3. Why is the sky blue?
"""
results = gpt_with_tools.invoke(prompt)
results.tool_calls
The results will be the following:
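The exact IDs and arguments will vary from run to run, but results.tool_calls is a list of dictionaries roughly along these lines (an illustrative sketch, not captured output):

[{'name': 'get_weather', 'args': {'query': 'Greenland'}, 'id': 'call_...', 'type': 'tool_call'},
 {'name': 'search_web', 'args': {'query': 'Can you tell me about Greenland and its capital'}, 'id': 'call_...', 'type': 'tool_call'}]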
As we can see, when we ask about the weather, the get_weather tool is called.
The GPT model doesn't know who won the ICC World Cup in 2024, as its training data only extends up to October 2023. So when we ask about this, it calls the search_web tool.
LangGraph has a pre-built ReAct (reason and act) agent. Let's see how it works:
from langgraph.prebuilt import create_react_agent

# the system prompt tells the model which tools are available and when to use each
system_prompt = """Act as a helpful assistant.
Use the tools at your disposal to perform tasks as needed.
- get_weather: whenever the user asks for the weather of a place.
- search_web: whenever the user asks about current events or you don't know the answer.
Use the tools only if you don't know the answer.
"""

# initialize the agent with the gpt model, tools, and system prompt
agent = create_react_agent(model=gpt, tools=tools, state_modifier=system_prompt)

# We will discuss how this works in the next section. Let's query the agent to see the result.
def print_stream(stream):
    # print the latest message from each streamed state update
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()
inputs = {"messages": [("user", "who won the ICC worldcup in 2024?")]}
print_stream(agent.stream(inputs, stream_mode="values"))
As we can see from the output, the LLM called the search_web tool for the given query. The tool found a URL and returned its content, which contained the answer, back to the LLM. The LLM then returned the answer.
Now let's build an agent ourselves using LangGraph:
# import the required methods
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, MessagesState, START, END

# define a tool_node with the available tools
tools = [search_web, get_weather]
tool_node = ToolNode(tools)

# define functions to call the LLM or the tools
def call_model(state: MessagesState):
    messages = state["messages"]
    response = gpt_with_tools.invoke(messages)
    return {"messages": [response]}

def call_tools(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    # route to the tools node if the LLM requested any tool calls; otherwise finish
    if last_message.tool_calls:
        return "tools"
    return END
The call_model function takes the "messages" from state as input. The "messages" can include the query, the prompt, or content from the tools. It invokes the LLM and returns its response.
The call_tools function also takes the state messages as input. If the last message contains tool calls, as we saw in the tool-calling output, it routes to the "tools" node; otherwise, it ends the run.
Now let’s build nodes and edges:
# initialize the workflow from StateGraph
workflow = StateGraph(MessagesState)

# add a node named "LLM" with the call_model function; this node uses the LLM to make decisions based on the input
workflow.add_node("LLM", call_model)

# our workflow starts at the "LLM" node
workflow.add_edge(START, "LLM")

# add a "tools" node
workflow.add_node("tools", tool_node)

# depending on the LLM's output, the workflow can go to the "tools" node or end, so we add a conditional edge from "LLM" routed by call_tools
workflow.add_conditional_edges("LLM", call_tools)

# the "tools" node sends its results back to the LLM
workflow.add_edge("tools", "LLM")
Now let’s compile the workflow and display it.
agent = workflow.compile()
display(Image(agent.get_graph().draw_mermaid_png()))
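If you are not running in a notebook, a plain-text rendering of the same graph works too; this assumes the optional grandalf package is installed:

# print an ASCII rendering of the compiled graph
agent.get_graph().print_ascii()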
As shown in the image, we start with the LLM. The LLM either calls the tools or ends, based on the information available to it. If it calls a tool, the tool executes and sends the result back to the LLM. The LLM then again decides whether to call a tool or end.
Now let’s query the agent and see the result:
# stream the agent's intermediate states and print the latest message from each
for chunk in agent.stream(
    {"messages": [("user", "Will it rain in Bengaluru today?")]},
    stream_mode="values",
):
    chunk["messages"][-1].pretty_print()
Output:
Since we asked about the weather, the get_weather tool is called, and it returns various weather-related values. Based on those values, the LLM responds that rain is unlikely.
In this way, we can give the LLM different kinds of tools so that our queries get answered even when the LLM alone can't answer them, making LLM agents far more useful in many scenarios.
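For example, here is a minimal sketch of extending the toolset with one more tool; the convert_temperature function below is hypothetical, purely for illustration:

# a hypothetical extra tool; any @tool-decorated function can join the same list
@tool
def convert_temperature(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

# rebuild the bindings and tool node with the expanded toolset
tools = [search_web, get_weather, convert_temperature]
gpt_with_tools = gpt.bind_tools(tools)
tool_node = ToolNode(tools)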
LangGraph offers a powerful toolset for building complex AI systems. It provides a framework for creating agentic systems that can reason, make decisions, and interact with multiple data sources. Key features include cyclic graph execution, built-in state management, and conditional routing between nodes.
LangGraph has many real-world applications. In single-agent contexts, it makes more complex decision-making possible by letting agents review and refine their outputs before acting. This is especially helpful in difficult problem-solving situations where linear execution is not sufficient.
LangGraph also excels in multi-agent systems. It permits the development of complex agent ecosystems in which many specialized agents work together to accomplish intricate tasks. Each agent can be developed with specific capabilities, while LangGraph's graph structure controls their interactions and information sharing.
For instance, in a customer service setting, a system might have distinct agents for comprehending the initial query, retrieving knowledge, generating responses, and ensuring quality. LangGraph would manage the flow of information among these agents, enabling smooth and efficient customer engagement.
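As a rough sketch of that idea, here is one way such a pipeline could be wired up. All node functions below are placeholders we made up; a real system would back each one with its own specialized agent:

from langgraph.graph import StateGraph, MessagesState, START, END

def comprehend_query(state: MessagesState):
    # placeholder: a real agent would classify the customer's intent here
    return {"messages": [("assistant", "intent: order_status")]}

def retrieve_knowledge(state: MessagesState):
    # placeholder: a real agent would query a knowledge base here
    return {"messages": [("assistant", "order #1234 shipped yesterday")]}

def generate_response(state: MessagesState):
    # placeholder: a real agent would draft the customer-facing reply here
    return {"messages": [("assistant", "Good news: your order shipped yesterday!")]}

def quality_check(state: MessagesState):
    # placeholder: a real agent would verify tone and accuracy, looping back on failure
    return {"messages": [("assistant", "qa: approved")]}

# wire the specialized agents into a linear pipeline managed by LangGraph
support = StateGraph(MessagesState)
support.add_node("comprehend", comprehend_query)
support.add_node("retrieve", retrieve_knowledge)
support.add_node("respond", generate_response)
support.add_node("qa", quality_check)
support.add_edge(START, "comprehend")
support.add_edge("comprehend", "retrieve")
support.add_edge("retrieve", "respond")
support.add_edge("respond", "qa")
support.add_edge("qa", END)
customer_service = support.compile()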
Frameworks such as LangGraph are becoming increasingly important as AI develops. LangGraph is making the next generation of AI applications possible by offering a versatile and strong framework for developing and overseeing AI agents.
The capacity to design increasingly intricate, flexible, and networked agent systems makes new applications possible, from personal assistants to scientific research tools. As developers become more comfortable with LangGraph’s features, we may anticipate seeing more advanced AI agents that can do ever more complex jobs.
To sum up, LangGraph is a major advancement in the development of AI agents. It enables developers to push the limits of what's possible by addressing the shortcomings of earlier systems and offering a flexible, graph-based framework for agent construction and execution. LangGraph is positioned to significantly influence the direction of artificial intelligence in the future.
Q1. What limitations of earlier frameworks does LangGraph address?
Ans. LangGraph addresses the limitations of previous AI agent development frameworks by providing more flexibility, better state management, and support for cyclic execution and multi-agent systems.
Q2. How does LangGraph differ from the earlier agent executor?
Ans. Unlike the previous agent executor's linear execution model, LangGraph allows for the creation of complex, networked agent systems with more dynamic and flexible agent runtimes.
Q3. Can LangGraph be used for multi-agent systems?
Ans. Yes, LangGraph excels in multi-agent systems, allowing developers to create complex agent ecosystems where multiple specialized agents can collaborate on complex tasks.
Q4. What are some real-world use cases for LangGraph?
Ans. LangGraph can be used in various scenarios, from enhancing single-agent decision-making processes to creating complex multi-agent systems for tasks like customer service, where different agents handle different aspects of the interaction.
Q5. Do I need to know graph theory to use LangGraph?
Ans. While LangGraph utilizes graph concepts, its interface is modeled after the popular NetworkX library, making it user-friendly for developers with prior experience in graph-based programming. However, some understanding of graph concepts would be beneficial.