Hands-on Guide to Building Multi-Agent Chatbots with AutoGen

Santhosh Reddy Dandavolu | Last Updated: 12 Nov, 2024
6 min read

Chatbots have evolved from simple question-answer systems into sophisticated, intelligent agents capable of handling complex conversations. As interactions in various fields become more nuanced, the demand grows for chatbots that can seamlessly manage multiple participants and complex workflows. Thanks to frameworks like AutoGen, creating dynamic multi-agent environments is now more accessible. In our previous article, we discussed building a two-agent chatbot using AutoGen. However, there’s a growing need for capabilities beyond the standard two-agent chat. Using AutoGen, we can implement conversation patterns such as sequential and nested chat. These functionalities create fluid, multi-participant exchanges that can handle complex workflows and dynamic interactions. In this article, we’ll explore how AutoGen facilitates these advanced conversation patterns and discuss their practical applications.

What are Multi-Agent Chatbots?

Multi-agent chatbots are AI systems where several specialized agents work together to complete tasks or manage complex conversations. Each agent focuses on a specific role, such as answering questions, providing recommendations, or analyzing data. This division of expertise allows the chatbot system to respond more accurately and efficiently. By coordinating with multiple agents, the chatbot can deliver more versatile and in-depth responses compared to a single-agent system.

Multi-agent chatbots are ideal for complex environments like customer service, e-commerce, and education. Each agent can take on a different function, such as handling returns, making product suggestions, or assisting with learning materials. When done right, multi-agent chatbots provide a smoother, faster, and more tailored user experience.

What are Conversation Patterns in AutoGen?

To coordinate multi-agent conversations, AutoGen has the following conversation patterns that involve more than two agents.

  1. Sequential Chat: This involves a series of conversations between two agents, each linked to the next. A carryover mechanism brings a summary of the prior chat into the context of the following one.
  2. Group Chat: This is a single conversation that includes more than two agents. A key consideration is deciding which agent should respond next, and AutoGen offers multiple ways to organize agent interactions to fit various scenarios.
  3. Nested Chat: Nested chat involves packaging a workflow into a single agent, allowing it to be reused within a larger workflow.

In this blog, we’ll learn how to implement Sequential Chat.

What is Sequential Chat?

In a sequential conversation pattern, an agent starts a two-agent chat, and then the chat summary is carried forward to the next two-agent chat. In this way, the conversation follows a sequence of two-agent chats.

Multi-agent sequential chat in Autogen

As shown in the above image, the conversation starts with a chat between Agent A and Agent B with the given context and message. Then, a summary of this chat is provided to the other two-agent chats as the carryover.

In this image, Agent A is common to all the chats, but we can also use different agents in each two-agent chat.

Now, why do we need this instead of a simple two-agent chat? This type of conversation is useful when a task can be broken down into interdependent sub-tasks, each of which is better handled by a different agent.

Prerequisites

Before building AutoGen agents, ensure you have the necessary API keys for LLMs. We will also use Tavily to search the web.

Load the .env file with the API keys needed.
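Here is a minimal sketch of loading the keys, assuming they are stored in a local .env file and that the python-dotenv package is installed (the variable names OPENAI_API_KEY and TAVILY_API_KEY are illustrative):

import os
from dotenv import load_dotenv

# Load API keys (e.g. OPENAI_API_KEY, TAVILY_API_KEY) from the .env file
load_dotenv()

# AutoGen's OpenAI client reads OPENAI_API_KEY from the environment by default,
# but you can also pass the key explicitly inside the config_list.
openai_api_key = os.getenv("OPENAI_API_KEY")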

Then, define the LLM configuration for the agents as a config_list:

# The OpenAI API key is picked up from the OPENAI_API_KEY environment variable
config_list = {
    "config_list": [{"model": "gpt-4o-mini", "temperature": 0.2}]
}

Key Libraries Required

autogen-agentchat – 0.2.37

Implementation

Let’s see how a sequential chat using multiple agents can be built with AutoGen. In this example, we will create a stock analysis agentic system. The system will be able to fetch stock prices, gather recent news about the stocks, and write an article on them. We will use Nvidia and Apple as examples, but you can adapt it to other stocks as well.

Define the Tasks

financial_tasks = [
    """What are the current stock prices of NVDA and AAPL, and how is the performance over the past month in terms of percentage change?""",
    """Investigate possible reasons for the stock performance leveraging market news.""",
]

writing_tasks = ["""Develop an engaging blog post using any information provided."""]

Define the Agents

We will define an assistant for each of the two financial tasks and another assistant for writing the article.

import autogen

financial_assistant = autogen.AssistantAgent(
    name="Financial_assistant",
    llm_config=config_list,
)
research_assistant = autogen.AssistantAgent(
    name="Researcher",
    llm_config=config_list,
)
writer = autogen.AssistantAgent(
    name="writer",
    llm_config=config_list,
    system_message="""
    You are a professional writer, known for
    your insightful and engaging articles.
    You transform complex concepts into compelling narratives.
    Reply "TERMINATE" in the end when everything is done.
    """,
)

Since getting the stock data and news requires accessing the web, we will define a user proxy agent capable of executing the generated code.

user_proxy_auto = autogen.UserProxyAgent(
    name="User_Proxy_Auto",
    human_input_mode="ALWAYS",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "tasks",
        "use_docker": False,
    },
)

We set human_input_mode to “ALWAYS” so that we can review the generated code and ask the agent to make changes if necessary.

The generated code is saved in the ‘tasks’ folder.

For safer execution, we can also run the generated code inside Docker, as sketched below.
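Here is a minimal sketch of the same executor configured to run code in a Docker container instead of the local shell (assuming Docker is installed and running; the agent name user_proxy_docker is illustrative):

user_proxy_docker = autogen.UserProxyAgent(
    name="User_Proxy_Docker",
    human_input_mode="ALWAYS",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "tasks",
        "use_docker": True,  # execute generated code inside a Docker container
    },
)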

financial_assistant and research_assistant will generate the necessary code and send it to user_proxy_auto for execution.

Since ‘writer’ doesn’t need to generate any code, we will define another user agent to chat with ‘writer’.

user_proxy = autogen.UserProxyAgent(
    name="User_Proxy",
    human_input_mode="ALWAYS",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config=False,
)

Here too, we set human_input_mode to ‘ALWAYS’ so that we can provide feedback to the agent.

Sample Conversation

Now, we can start the conversation.

chat_results = autogen.initiate_chats(
    [
        {
            "sender": user_proxy_auto,
            "recipient": financial_assistant,
            "message": financial_tasks[0],
            "clear_history": True,
            "silent": False,
            "summary_method": "last_msg",
        },
        {
            "sender": user_proxy_auto,
            "recipient": research_assistant,
            "message": financial_tasks[1],
            "summary_method": "reflection_with_llm",
        },
        {
            "sender": user_proxy,
            "recipient": writer,
            "message": writing_tasks[0],
        },
    ]
)

As defined above, the first two-agent chat is between user_proxy_auto and financial_assistant, the second is between user_proxy_auto and research_assistant, and the third is between user_proxy and writer. The summary_method setting controls how the carryover for each chat is produced: “last_msg” simply reuses the chat’s last message, while “reflection_with_llm” asks the LLM to summarize the conversation.

The initial output will be as shown in this image

initial input

If you are satisfied with the results from each agent, type “exit” at the human input prompt; otherwise, give the agent useful feedback.

Chat Results

Now let’s look at chat_results, which holds the result of each chat in the sequence.

len(chat_results)
# Output: 3 (one result per chat)

We see that we have three results, one for each chat in the sequence. To get the output of a particular chat, we can index into the list, as shown in the sketch below. The image that follows shows the response we got from the last chat, which is with the writer agent.
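For example, here is a minimal sketch of inspecting an individual result (the attribute names follow the ChatResult object returned by AutoGen 0.2):

# The last element corresponds to the chat with the writer agent
writer_result = chat_results[-1]
print(writer_result.summary)       # summary of the chat (the final blog post here)
print(writer_result.chat_history)  # full list of messages exchanged in this chat
print(writer_result.cost)          # token usage and cost information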

Multi-agent sequential conversation in Autogen

As you can see above, our writer agent has built on the carried-over findings from the Financial_assistant and Researcher agents to give us a comprehensive analysis of the performance of NVIDIA and Apple stocks.

Conclusion

AutoGen’s conversation patterns, such as sequential chat, allow us to build complex, multi-agent interactions beyond standard two-agent chats. These patterns enable seamless task coordination, breaking down complex workflows into manageable steps handled by specialized agents. With AutoGen, applications across finance, content generation, and customer support can benefit from enhanced collaboration among agents. This enables us to create adaptive, efficient conversational solutions tailored to specific needs.

If you want to learn more about AI Agents, check out our exclusive Agentic AI Pioneer Program!

Frequently Asked Questions

Q1. What are multi-agent chatbots, and how do they work?

A. Multi-agent chatbots use multiple specialized agents, each focused on a specific task like answering questions or giving recommendations. This structure allows the chatbot to handle complex conversations by dividing tasks.

Q2. What conversation patterns are supported by AutoGen, and why are they important?

A. AutoGen supports patterns like sequential, group, and nested chat. These allow chatbots to coordinate tasks among multiple agents, which is essential for complex interactions in customer service, content creation, etc.

Q3. How does the Sequential Chat pattern work in AutoGen?

A. Sequential Chat links a series of two-agent conversations by carrying over a summary to the next. It’s ideal for tasks that can be broken into dependent steps managed by different agents.

Q4. What are some practical applications of multi-agent conversation patterns in AutoGen?

A. Multi-agent patterns in AutoGen are useful for industries like customer support, finance, and e-commerce, where chatbots manage complex, adaptive tasks across specialized agents.

I am working as an Associate Data Scientist at Analytics Vidhya, a platform dedicated to building the Data Science ecosystem. My interests lie in the fields of Natural Language Processing (NLP), Deep Learning, and AI Agents.
