Large Language Model (LLM) agents are powerful tools for automating tasks like search, content generation, and quality review. However, a single agent often can’t do everything efficiently, especially when you need to integrate external resources (like web searches) and multiple specialized steps (e.g., drafting vs. reviewing). Multi-agent workflows let you split these tasks among different agents, each with its own tools, constraints, and responsibilities. In this article, we’ll look at how to build a three-agent system (ResearchAgent, WriteAgent, and ReviewAgent) in which each agent handles a specific part of producing a concise report on the history of the internet. We’ll also make sure the system won’t get stuck in a search loop, which can waste time and API credits.
We’ll use OpenAI(model="gpt-4o") from llama-index. You can swap this out for another LLM if you prefer, but GPT-4o is usually a strong choice for multi-step reasoning tasks.
###############################################################################
# 1. INSTALLATION
###############################################################################
# Make sure you have the following installed:
%pip install llama-index langchain duckduckgo-search

###############################################################################
# 2. IMPORTS
###############################################################################
from llama_index.llms.openai import OpenAI
# For DuckDuckGo search via LangChain
# (in newer LangChain versions, import this from langchain_community.utilities instead)
from langchain.utilities import DuckDuckGoSearchAPIWrapper
# llama-index workflow classes
from llama_index.core.workflow import Context
from llama_index.core.agent.workflow import (
FunctionAgent,
AgentWorkflow,
AgentInput,
AgentOutput,
ToolCall,
ToolCallResult,
AgentStream
)
import asyncio
###############################################################################
# 3. CREATE LLM
###############################################################################
# Replace "sk-..." with your actual OpenAI API key
llm = OpenAI(model="gpt-4", api_key="OPENAI_API_KEY")
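Optionally, you can run a quick one-off completion to confirm the key and model are working before wiring up the agents (this check is not part of the workflow itself):

# Quick sanity check (optional): a single completion call
response = llm.complete("In one sentence, what was ARPANET?")
print(response.text)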
Tools are functions that agents can call to perform actions outside of their own language modeling, such as running web searches, reading and writing shared state, or calling external APIs.
In our example, the key tool is DuckDuckGoSearch, which uses LangChain’s DuckDuckGoSearchAPIWrapper under the hood. We also have helper tools to record notes, write a report, and review it.
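The tool functions in this article are passed to the agents as plain Python callables, which llama-index converts into tools automatically. If you want explicit control over a tool’s name and description, you can also wrap a function in FunctionTool yourself. The word_count function below is purely a hypothetical illustration of that pattern:

from llama_index.core.tools import FunctionTool

# Hypothetical example: wrap an ordinary function so the agent sees an
# explicit name and description instead of inferring them from the docstring.
def word_count(text: str) -> str:
    """Count the number of words in a piece of text."""
    return str(len(text.split()))

word_count_tool = FunctionTool.from_defaults(
    fn=word_count,
    name="word_count",
    description="Counts the words in the given text.",
)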
###############################################################################
# 4. DEFINE DUCKDUCKGO SEARCH TOOL WITH SAFEGUARDS
###############################################################################
# We wrap LangChain's DuckDuckGoSearchAPIWrapper with our own logic
# to prevent repeated or excessive searches.
duckduckgo = DuckDuckGoSearchAPIWrapper()
MAX_SEARCH_CALLS = 2
search_call_count = 0
past_queries = set()
async def safe_duckduckgo_search(query: str) -> str:
    """
    A DuckDuckGo-based search function that:
    1) Prevents more than MAX_SEARCH_CALLS total searches.
    2) Skips duplicate queries.
    """
    global search_call_count, past_queries

    # Check for duplicate queries
    if query in past_queries:
        return f"Already searched for '{query}'. Avoiding duplicate search."

    # Check if we've reached the max search calls
    if search_call_count >= MAX_SEARCH_CALLS:
        return "Search limit reached, no more searches allowed."

    # Otherwise, perform the search
    search_call_count += 1
    past_queries.add(query)

    # DuckDuckGoSearchAPIWrapper.run(...) is synchronous; calling it directly inside
    # this async function is fine for a couple of searches, though you could offload
    # it with asyncio.to_thread for heavier use.
    result = duckduckgo.run(query)
    return str(result)
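If you want to verify the safeguards before wiring the tool into an agent, you can call the wrapper directly. The queries below are only placeholders, and running this consumes the global search budget, so reset the counters before starting the real workflow. In a notebook, use await demo_search_guard() instead of asyncio.run:

async def demo_search_guard():
    print(await safe_duckduckgo_search("history of the internet"))          # real search (1 of 2)
    print(await safe_duckduckgo_search("history of the internet"))          # duplicate query -> canned message
    print(await safe_duckduckgo_search("invention of the World Wide Web"))  # real search (2 of 2)
    print(await safe_duckduckgo_search("ARPANET timeline"))                 # limit reached -> canned message

# asyncio.run(demo_search_guard())  # uncomment when running as a standalone script

# Reset the budget before running the actual workflow:
# search_call_count = 0
# past_queries.clear()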
###############################################################################
# 5. OTHER TOOL FUNCTIONS: record_notes, write_report, review_report
###############################################################################
async def record_notes(ctx: Context, notes: str, notes_title: str) -> str:
    """Store research notes under a given title in the shared context."""
    current_state = await ctx.get("state")
    if "research_notes" not in current_state:
        current_state["research_notes"] = {}
    current_state["research_notes"][notes_title] = notes
    await ctx.set("state", current_state)
    return "Notes recorded."

async def write_report(ctx: Context, report_content: str) -> str:
    """Write a report in markdown, storing it in the shared context."""
    current_state = await ctx.get("state")
    current_state["report_content"] = report_content
    await ctx.set("state", current_state)
    return "Report written."

async def review_report(ctx: Context, review: str) -> str:
    """Review the report and store feedback in the shared context."""
    current_state = await ctx.get("state")
    current_state["review"] = review
    await ctx.set("state", current_state)
    return "Report reviewed."
Each agent is an instance of FunctionAgent. Key fields include name, description, system_prompt, llm, tools, and can_handoff_to (the other agents this one is allowed to pass control to).
###############################################################################
# 6. DEFINE AGENTS
###############################################################################
# We have three agents with distinct responsibilities:
# 1. ResearchAgent - uses DuckDuckGo to gather info (max 2 searches).
# 2. WriteAgent - composes the final report.
# 3. ReviewAgent - reviews the final report.
research_agent = FunctionAgent(
name="ResearchAgent",
description=(
"A research agent that searches the web using DuckDuckGo. "
"It must not exceed 2 searches total, and must avoid repeating the same query. "
"Once sufficient information is collected, it should hand off to the WriteAgent."
),
system_prompt=(
"You are the ResearchAgent. Your goal is to gather sufficient information on the topic. "
"Only perform at most 2 distinct searches. If you have enough info or have reached 2 searches, "
"handoff to the next agent. Avoid infinite loops!"
),
llm=llm,
tools=[
safe_duckduckgo_search, # Our DuckDuckGo-based search function
record_notes
],
can_handoff_to=["WriteAgent"]
)
write_agent = FunctionAgent(
name="WriteAgent",
description=(
"Writes a markdown report based on the research notes. "
"Then hands off to the ReviewAgent for feedback."
),
system_prompt=(
"You are the WriteAgent. Draft a structured markdown report based on the notes. "
"After writing, hand off to the ReviewAgent."
),
llm=llm,
tools=[write_report],
can_handoff_to=["ReviewAgent", "ResearchAgent"]
)
review_agent = FunctionAgent(
name="ReviewAgent",
description=(
"Reviews the final report for correctness. Approves or requests changes."
),
system_prompt=(
"You are the ReviewAgent. Read the report, provide feedback, and either approve "
"or request revisions. If revisions are needed, handoff to WriteAgent."
),
llm=llm,
tools=[review_report],
can_handoff_to=["WriteAgent"]
)
An AgentWorkflow coordinates how messages and state move between agents. When the user initiates a request (e.g., “Write me a concise report on the history of the internet…”), the workflow starts with the root agent (here, the ResearchAgent), which searches and records notes, then hands off to the WriteAgent to draft the report, which in turn hands off to the ReviewAgent for feedback. The workflow ends once the content is approved and no further changes are requested.
In this step, we define the agent workflow, which includes the research, writing, and reviewing agents. The root_agent is set to the research_agent, meaning the process starts with gathering research. The initial state contains placeholders for research notes, report content, and review status.
agent_workflow = AgentWorkflow(
agents=[research_agent, write_agent, review_agent],
root_agent=research_agent.name, # Start with the ResearchAgent
initial_state={
"research_notes": {},
"report_content": "Not written yet.",
"review": "Review required.",
},
)
The workflow is executed using a user request, which specifies the topic and key points to cover in the report. The request in this example asks for a concise report on the history of the internet, including its origins, the development of the World Wide Web, and its modern evolution. The workflow processes this request by coordinating the agents.
# Example user request: "Write me a report on the history of the internet..."
handler = agent_workflow.run(
user_msg=(
"Write me a concise report on the history of the internet. "
"Include its origins, the development of the World Wide Web, and its 21st-century evolution."
)
)
To monitor the workflow’s execution, we stream events and print details about agent activities. This allows us to track which agent is currently working, view intermediate outputs, and inspect tool calls made by the agents. Debugging information such as tool usage and responses is displayed for better visibility.
current_agent = None

async for event in handler.stream_events():
    if hasattr(event, "current_agent_name") and event.current_agent_name != current_agent:
        current_agent = event.current_agent_name
        print(f"\n{'='*50}")
        print(f"🤖 Agent: {current_agent}")
        print(f"{'='*50}\n")

    # Print outputs or tool calls
    if isinstance(event, AgentOutput):
        if event.response.content:
            print("📤 Output:", event.response.content)
        if event.tool_calls:
            print("🛠️ Planning to use tools:", [call.tool_name for call in event.tool_calls])
    elif isinstance(event, ToolCall):
        print(f"🔨 Calling Tool: {event.tool_name}")
        print(f"  With arguments: {event.tool_kwargs}")
    elif isinstance(event, ToolCallResult):
        print(f"🔧 Tool Result ({event.tool_name}):")
        print(f"  Arguments: {event.tool_kwargs}")
        print(f"  Output: {event.tool_output}")
Once the workflow completes, we extract the final state, which contains the generated report. The report content is printed, followed by any review feedback from the review agent. This ensures the output is complete and can be further refined if necessary.
final_state = await handler.ctx.get("state")
print("\n\n=============================")
print("FINAL REPORT:\n")
print(final_state["report_content"])
print("=============================\n")
# Review feedback (if any)
if "review" in final_state:
print("Review Feedback:", final_state["review"])
When using a web search tool, it’s possible for the LLM to get “confused” and repeatedly call the search function, which can lead to unnecessary costs or wasted time. To prevent that, we use two mechanisms: a hard cap on the total number of searches (MAX_SEARCH_CALLS = 2) and a record of past queries (the past_queries set) that lets us skip duplicates.
If either condition is triggered (the maximum number of searches reached, or a duplicate query), our safe_duckduckgo_search function returns a canned message instead of performing a new search.
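The same guard pattern generalizes to any tool. As a rough sketch, you could wrap arbitrary async tools in a small decorator that enforces a call budget and deduplicates arguments; limit_calls below is illustrative, not a llama-index API:

import functools

def limit_calls(max_calls: int):
    """Decorator that caps how many times an async tool may run and skips repeated arguments."""
    def decorator(fn):
        state = {"count": 0, "seen": set()}

        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            if key in state["seen"]:
                return f"Already called {fn.__name__} with these arguments."
            if state["count"] >= max_calls:
                return f"Call limit reached for {fn.__name__}."
            state["count"] += 1
            state["seen"].add(key)
            return await fn(*args, **kwargs)

        return wrapper
    return decorator

# Usage (hypothetical): cap any search tool at 2 calls
# @limit_calls(2)
# async def some_search_tool(query: str) -> str:
#     ...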
The overall flow is ResearchAgent → WriteAgent → ReviewAgent, after which the workflow ends. The final report is stored in final_state["report_content"].
By splitting your workflow into distinct agents for search, writing, and review, you can create a powerful, modular system that gathers information within a fixed search budget, drafts a structured report from the recorded notes, and reviews the result before finishing.
The DuckDuckGo integration via LangChain offers a plug-and-play web search solution for multi-agent workflows without requiring specialized API keys or credentials. Combined with the built-in safeguards (search call limits, duplicate detection), this system is robust, efficient, and suitable for a wide range of research and content-generation tasks.
Q. Why split the workflow across multiple agents instead of using a single agent?
A. Splitting responsibilities across agents (research, writing, reviewing) ensures each step is clearly defined and easier to manage. It also reduces confusion in the model’s decision-making and fosters more accurate, structured outputs.
Q. How does the system limit the number of web searches?
A. In the code, we use a global counter (search_call_count) and a constant (MAX_SEARCH_CALLS = 2). Whenever the research agent calls safe_duckduckgo_search, it checks whether the counter has reached the limit. If so, it returns a message instead of performing another search.
Q. How are duplicate queries avoided?
A. We maintain a Python set called past_queries to detect repeated queries. If the query is already in that set, the tool skips the actual search and returns a short message, preventing duplicate queries from running.
Q. Can I customize each agent’s instructions?
A. Absolutely. You can edit each agent’s system_prompt to tailor instructions to your desired domain or writing style. For instance, you could instruct the WriteAgent to produce a bullet-point list, a narrative essay, or a technical summary, as sketched below.
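For example, a bullet-point variant of the WriteAgent only requires a different system_prompt; the wording below is illustrative and reuses the llm, write_report, and agent names defined earlier:

write_agent = FunctionAgent(
    name="WriteAgent",
    description="Writes a bullet-point markdown summary based on the research notes.",
    system_prompt=(
        "You are the WriteAgent. Summarize the research notes as a concise markdown "
        "bullet-point list (5-8 bullets). After writing, hand off to the ReviewAgent."
    ),
    llm=llm,
    tools=[write_report],
    can_handoff_to=["ReviewAgent", "ResearchAgent"],
)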
Q. Can I use a different LLM?
A. You can swap out OpenAI(model="gpt-4o") for another model supported by llama-index (e.g., GPT-3.5, or even a local model). The architecture remains the same, though some models may produce different-quality outputs.
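For instance, switching to a smaller OpenAI model is a one-line change, while using a local model requires the corresponding llama-index integration package. The Ollama lines below assume you have installed llama-index-llms-ollama and pulled the model locally:

# Smaller OpenAI model
from llama_index.llms.openai import OpenAI
llm = OpenAI(model="gpt-3.5-turbo", api_key="sk-...")

# Or a local model via Ollama (assumes: pip install llama-index-llms-ollama)
# from llama_index.llms.ollama import Ollama
# llm = Ollama(model="llama3", request_timeout=120.0)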