Strategic Team Building with AutoGen AI

Divya K | Last Updated: 29 Feb, 2024

Introduction

In a world where the digital frontier knows no bounds, AutoGen emerges as the architect of a transformative paradigm. Imagine having a personalized AI workforce, each member skilled in a different domain, collaborating seamlessly, communicating effortlessly, and working tirelessly to tackle complex tasks. This is the essence of AutoGen, a pioneering multi-agent conversation framework that empowers you to build your own strategic AI team. In this article, we unveil the magic of AutoGen, exploring how it empowers you to assemble your own digital dream team and achieve the extraordinary. Welcome to a future where the boundaries between humans and machines fade, and collaboration becomes limitless.


Learning Objectives

Before we dive into the details, let’s outline the key learning objectives of this article:

  • Gain a comprehensive understanding of AutoGen as a multi-agent conversation framework.
  • Learn how agents communicate and collaborate autonomously in the multi-agent conversation framework.
  • Learn the critical role of config_list in AutoGen’s operation. Understand best practices for securing API keys and managing configurations for efficient agent performance.
  • Explore various conversation styles, from fully autonomous to human-involved interactions. Learn about static and dynamic conversation patterns supported by AutoGen.
  • Discover how to use AutoGen to tune LLM inference based on validation data, evaluation functions, and optimization metrics.
  • Explore examples such as building a collaborative content creation team and language translation with cultural context to understand how AutoGen can be applied in different scenarios.

This article was published as a part of the Data Science Blogathon.

What is AutoGen?

AutoGen is a unified multi-agent conversation framework that acts as a high-level abstraction for using foundation models. It brings together capable, customizable, and conversable agents that integrate LLMs, tools, and human participants via automated chat. Essentially, it enables agents to communicate and work together autonomously, effectively streamlining complex tasks and automating workflows.

Why is AutoGen Important?

AutoGen addresses the need for efficient and flexible multi-agent communication in strategic AI team building. Its importance lies in its ability to:

  • Simplify orchestration, automation, and optimization of complex LLM workflows.
  • Maximize the performance of LLM models while overcoming their limitations.
  • Enable the development of next-generation LLM applications based on multi-agent conversations with minimal effort.

Setting Up Your Development Environment

Create a Virtual Environment

Using a virtual environment is good practice to isolate project-specific dependencies and avoid conflicts with system-wide packages. Here's how to set one up:

Option 1: venv

python -m venv env_name

Activate the virtual environment:

  • On Windows:
env_name\Scripts\activate
  • On macOS and Linux:
source env_name/bin/activate

The following command will deactivate the current venv environment:

deactivate

Option 2: Conda

conda create -n pyautogen python=3.10
conda activate pyautogen

The following command will deactivate the current conda environment:

conda deactivate

Python: AutoGen requires Python version ≥ 3.8.

Install AutoGen:

pip install pyautogen

Set Your API Keys

Efficiently managing API configurations is critical when working with multiple models and API versions. AutoGen provides utility functions to assist with this process. It's imperative to safeguard your API keys and sensitive data, storing them securely in .txt or .env files or as environment variables for local development, avoiding any inadvertent exposure.

Steps

1. Obtain API keys from OpenAI, and optionally from Azure OpenAI or other providers.
2. Securely store these keys using either:

  • Environment Variables: Use export OPENAI_API_KEY='your-key' in your shell.
  • Text File: Save the key in a key_openai.txt file.
  • Env File: Store the key in a .env file, e.g., OPENAI_API_KEY=sk-... (a loading sketch follows this list).
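Here is a minimal sketch of loading the key from a .env file. It assumes the python-dotenv package is installed (pip install python-dotenv), which is not part of AutoGen itself:

import os
from dotenv import load_dotenv

# Read the .env file in the current directory into the process environment
load_dotenv()
openai_api_key = os.environ["OPENAI_API_KEY"]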

What is a Config_list?

The config_list plays a pivotal role in AutoGen’s operation, enabling intelligent assistants to dynamically select the appropriate model configuration. It handles essential details such as API keys, endpoints, and versions, ensuring the smooth and reliable functioning of assistants across various tasks.

Steps:

1. Store configurations in an environment variable named OAI_CONFIG_LIST as a valid JSON string.

2. Alternatively, save configurations in a local JSON file named OAI_CONFIG_LIST.json.

3. Add OAI_CONFIG_LIST to your .gitignore file in your local repository.
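Once stored, the configurations can be loaded and passed to an agent. A minimal sketch, assuming the OAI_CONFIG_LIST environment variable or file described above exists:

import autogen

# Load configurations from the OAI_CONFIG_LIST env var or JSON file
config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")

The resulting config_list is then passed into an agent's llm_config, as in the AssistantAgent below.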

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "timeout": 400,              # request timeout in seconds
        "seed": 42,                  # seed for caching and reproducibility
        "config_list": config_list,  # model/endpoint configurations
        "temperature": 0,
    },
)

Ways to Generate Config_list

You can generate a config_list using various methods, depending on your use case:

  • get_config_list: Generates configurations for API calls primarily from provided API keys.
  • config_list_openai_aoai: Creates a list of configurations using both Azure OpenAI and OpenAI endpoints, sourcing API keys from environment variables or local files.
  • config_list_from_json: Loads configurations from a JSON structure, allowing you to filter configurations based on specific criteria.
  • config_list_from_models: Creates configurations based on a provided list of models, useful for targeting specific models without manual configuration.
  • config_list_from_dotenv: Constructs a configuration list from a .env file, simplifying the management of multiple API configurations and keys from a single file.

Now, let’s look at two essential methods for generating a config_list:

Get_config_list

This method generates configurations for API calls from a list of API keys.

import autogen

api_keys = ["YOUR_OPENAI_API_KEY"]
base_urls = None      # optional: custom base URLs, one per key
api_type = None       # optional: e.g., "azure"
api_version = None    # optional: API version string

config_list = autogen.get_config_list(
    api_keys,
    base_urls=base_urls,
    api_type=api_type,
    api_version=api_version
)

print(config_list)

Config_list_from_json

This method loads configurations from an environment variable or a JSON file. It provides flexibility by allowing users to filter configurations based on certain criteria.

Your JSON structure should look something like this:

# OAI_CONFIG_LIST file example
[
    {
        "model": "gpt-4",
        "api_key": "YOUR_OPENAI_API_KEY"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "YOUR_OPENAI_API_KEY",
        "api_version": "2023-03-01-preview"
    }
]
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",  # or "OAI_CONFIG_LIST.json" if the file extension is added
    filter_dict={
        "model": {"gpt-4", "gpt-3.5-turbo"},
    },
)

Key Features

  • AutoGen simplifies the development of advanced LLM applications that involve multi-agent conversations, minimizing the need for extensive manual effort. It streamlines the orchestration, automation, and optimization of complex LLM workflows, enhancing overall performance and addressing inherent limitations.
  • It facilitates diverse conversation patterns for intricate workflows, empowering developers to create customizable and interactive agents. With AutoGen, a wide spectrum of conversation patterns can be built, considering factors like conversation autonomy, agent count, and conversation topology.
  • The platform offers a range of operational systems with varying complexities, demonstrating its versatility across multiple applications from diverse domains. AutoGen’s capability to support a wide array of conversation patterns is exemplified through these diverse implementations.
  • AutoGen provides enhanced LLM inference. It offers utilities like API unification and caching, along with advanced usage patterns like error handling, multi-config inference, and context programming, thereby improving overall inference capabilities.

Multi-Agent Conversation Framework

AutoGen offers a unified multi-agent conversation framework as a high-level abstraction for using foundation models. Imagine you have a group of virtual assistants who can talk to each other and work together to complete complex tasks, like organizing a big event or managing a complicated project. AutoGen helps them do this efficiently and effectively.

Agents

AutoGen agents are the core building blocks of the AutoGen framework. These agents are designed to solve tasks through inter-agent conversations. Here are some notable features of AutoGen agents:

  • Conversable: Agents in AutoGen are conversable: just as people talk to each other, these digital helpers can send and receive messages to hold discussions, which helps them work together.
  • Customizable: Agents in AutoGen can be customized to integrate LLMs, humans, tools, or a combination of them.

Built-in Agents in AutoGen

AutoGen provides a base class called ConversableAgent, which enables agents to talk to each other to work on tasks together. These agents can send messages and perform different actions based on the messages they receive.

There are two main types of agents:

  • Assistant Agent: This agent is like a helpful AI assistant. It can write Python code for you to run when you give it a task, using an LLM (such as GPT-4) to write the code. It can also check the results and suggest fixes. You can change how it behaves by giving it new instructions, and you can tweak how the LLM works with it using llm_config.
  • User-Proxy Agent: This agent acts as a go-between for people. It can ask humans for help or execute code when needed, and it can even use an LLM to generate responses when it's not executing code. You can control code execution and LLM usage with settings like code_execution_config and llm_config.

These agents can talk to each other without human help, but humans can step in if needed. You can also add more features to them using the register_reply() method, as sketched below.
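As a minimal sketch of register_reply(), the custom reply function below logs each incoming message and then defers to the agent's default handlers. The function name and its logging behavior are illustrative assumptions, not part of AutoGen's API:

import autogen

def log_and_defer(recipient, messages=None, sender=None, config=None):
    # Log the latest incoming message, then return (False, None) so AutoGen
    # falls through to the agent's default reply handlers.
    print(f"{recipient.name} received: {messages[-1]['content']}")
    return False, None

agent = autogen.ConversableAgent(name="logging_agent", llm_config=False)
# Trigger on messages from any ConversableAgent; position=0 runs this first.
agent.register_reply(autogen.ConversableAgent, log_and_defer, position=0)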

Use Case: AutoGen's Multi-Agent Framework for Answering User Queries

In the code snippet below, we define an AssistantAgent called "Agent 1" to help with general questions, another called "Agent 2" to help with technical questions, and a UserProxyAgent named "user_proxy" to act as a mediator for the human user. We will use these agents to accomplish a specific task.

import autogen

# Load the OpenAI API configurations
config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# Create two agents
agent1 = autogen.AssistantAgent(
    name="Agent 1",
    llm_config={
        "seed": 42,
        "config_list": config_list,
        "temperature": 0.7,
        "request_timeout": 1200,
    },
    system_message="Agent 1. I can help with general questions.",
)

agent2 = autogen.AssistantAgent(
    name="Agent 2",
    llm_config={
        "seed": 42,
        "config_list": config_list,
        "temperature": 0.7,
        "request_timeout": 1200,
    },
    system_message="Agent 2. I'm here to assist with technical questions.",
)

# Create a User Proxy agent
user_proxy = autogen.UserProxyAgent(
    name="User Proxy",
    human_input_mode="ALWAYS",
    code_execution_config=False,
)

# Create a chat group for the conversation
chat_group = autogen.GroupChat(
    agents=[agent1, agent2, user_proxy],
    messages=[],
    max_round=10,
)

# Create a group chat manager (reusing the same LLM configuration)
chat_manager = autogen.GroupChatManager(
    groupchat=chat_group,
    llm_config={"config_list": config_list, "seed": 42},
)

# Initiate the conversation with a user question
user_proxy.initiate_chat(
    chat_manager,
    message="Can you explain the concept of machine learning?"
)

In this simple example, two agents, “Agent 1” and “Agent 2,” work together to provide answers to a user’s questions. The “User Proxy” agent facilitates communication between the user and the other agents. This demonstrates a basic use case of AutoGen’s multi-agent conversation framework for answering user queries.

Supporting Diverse Conversation Patterns

AutoGen supports a variety of conversation styles, accommodating both fully automated and human-involved interactions.

Diverse Conversation Styles

  1. Autonomous Conversations: After an initial setup, you can have fully automated conversations where the agents work independently.
  2. Human-In-The-Loop: AutoGen can be configured to involve humans in the conversation process. For example, you can set the human_input_mode to “ALWAYS” to ensure human input is included when needed, which is valuable in many applications.

Static vs Dynamic Conversations

AutoGen allows for both static and dynamic conversation patterns.

  1. Static Conversations: These follow predefined conversation structures and are consistent regardless of the input.
  2. Dynamic Conversations: Dynamic conversations adapt to the actual flow of the conversation, making them suitable for complex applications where interaction patterns cannot be predetermined.

Approaches for Dynamic Conversations

AutoGen offers two methods for achieving dynamic conversations:

Registered Auto-Reply

You can set up auto-reply functions, allowing agents to decide who should speak next based on the current message and context. This approach is demonstrated in a group chat example, where the LLM determines the next speaker in the chat.

Let's explore a use case for "Registered Auto-Reply" in the context of a dynamic group chat, where an LLM decides who the next speaker should be based on the content and context of the conversation.

Use Case: Collaborative Content Creation


In this use case, we have a dynamic group chat involving three agents: a UserProxyAgent representing a user, a Writer Agent, and an Editor Agent. The goal is to collaboratively create written content. The Registered Auto-Reply function allows the LLM to decide when to switch roles between the writer and the editor based on the content's quality and completion.

import autogen

# Load the OpenAI API configurations
config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# Create agents with LLM configurations
llm_config = {"config_list": config_list, "seed": 42}
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A content creator.",
    code_execution_config={"last_n_messages": 2, "work_dir": "content_creation"},
    human_input_mode="TERMINATE"
)

Construct Agents

writer = autogen.AssistantAgent(
    name="Writer",
    llm_config=llm_config,
)

editor = autogen.AssistantAgent(
    name="Editor",
    system_message="An editor for written content.",
    llm_config=llm_config,
)

groupchat = autogen.GroupChat(agents=[user_proxy, writer, editor], messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

Start Chat

# Initiate the chat with the user as the content creator
user_proxy.initiate_chat(
    manager,
    message="Write a short article about artificial intelligence in healthcare."
)

# Type 'exit' to terminate the chat

In this scenario, the user, represented by the UserProxyAgent, initiates a conversation to create a written article. The WriterAgent initially takes the role of drafting the content. The EditorAgent, on the other hand, is available to provide edits and suggestions. The key here is the Registered Auto-Reply function, which allows the LLM to assess the quality of the written content. When it recognizes that the content is ready for editing, it can automatically switch to the EditorAgent, who will then refine and improve the article.

This dynamic conversation ensures that the writing process is collaborative and efficient, with the LLM making the decision on when to involve the editor based on the quality of the written content.

LLM-Based Function Call

An LLM (e.g., GPT-4) can decide whether to call specific functions based on the ongoing conversation. These functions can involve additional agents, enabling dynamic multi-agent conversations.

Use Case: Language Translation and Cultural Context

In this scenario, we have two agents: an Assistant Agent, which is well-versed in translating languages, and a User-Proxy Agent representing a user who needs help with a translation. The challenge is not just translating words, but also understanding the cultural context to ensure accurate and culturally sensitive translations.

import autogen
# Define agent configurations
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": ["gpt-4", "gpt4", "gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-v0314"],
    },
)

# Define a function for dynamic conversation
def translate_with_cultural_context(message):
    # Create an assistant agent for translation
    assistant_translator = autogen.AssistantAgent(
        name="assistant_translator",
        llm_config={
            "temperature": 0.7,
            "config_list": config_list,
        },
    )

    # Create a user proxy agent representing the user
    user = autogen.UserProxyAgent(
        name="user",
        human_input_mode="ALWAYS",
        code_execution_config={"work_dir": "user"},
    )

    # Initiate a chat session with the assistant for translation
    user.initiate_chat(assistant_translator, message=message)
    user.stop_reply_at_receive(assistant_translator)

    # Send a signal to the assistant to finalize the translation
    user.send("Please provide a culturally sensitive translation.", assistant_translator)

    # Return the last message received from the assistant
    return user.last_message()["content"]

# Create agents for the user and assistant
assistant_for_user = autogen.AssistantAgent(
    name="assistant_for_user",
    system_message="You are a language assistant. Reply TERMINATE when the translation is complete.",
    llm_config={
        "timeout": 600,
        "seed": 42,
        "config_list": config_list,
        "temperature": 0.7,
        "functions": [
            {
                "name": "translate_with_cultural_context",
                "description": "Translate and ensure cultural sensitivity.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "message": {
                            "type": "string",
                            "description": "Text to translate with cultural sensitivity consideration."
                        }
                    },
                    "required": ["message"],
                }
            }
        ],
    }
)

# Create a user proxy agent representing the user
user = autogen.UserProxyAgent(
    name="user",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "user"},
    function_map={"translate_with_cultural_context": translate_with_cultural_context},
)

# Translate a sentence with cultural sensitivity
user.initiate_chat(
    assistant_for_user,
    message="Translate the phrase 'Thank you' into a language that shows respect in the recipient's culture."
)

In this use case, the user initiates a conversation with a request for translation. The assistant attempts to provide the translation, but when cultural sensitivity is required, it calls the “translate_with_cultural_context” function to interact with the user, who might have cultural insights. This dynamic conversation ensures that translations are not just accurate linguistically but also culturally appropriate.

Versatility Across Multiple Applications

  • Code Generation, Execution, and Debugging
  • Multi-Agent Collaboration (>3 Agents)
  • Applications
  • Tool Use
  • Agent Teaching and Learning

Enhanced Inference

AutoGen provides enhanced language model (LLM) inference capabilities. It includes autogen.OpenAIWrapper for openai>=1 and autogen.Completion, which can be used as a replacement for openai.Completion and openai.ChatCompletion with added features for openai<1. Using AutoGen for inference offers various advantages, including performance tuning, API unification, caching, error handling, multi-config inference, result filtering, templating, and more.

Tune Inference Parameters (for openai<1)

When working with foundation models for text generation, the overall cost is often linked to the number of tokens used in both input and output. From the perspective of an application developer, the goal is to maximize the usefulness of the generated text while staying within a set budget for inference. Achieving this optimization involves adjusting specific hyperparameters that can significantly influence both the quality of the generated text and its cost.

  1. Model Selection: It is necessary to specify the model ID you wish to use, which greatly influences the quality and style of the text generated.
  2. Prompt or Messages: These are the initial inputs that set the context for text generation. They serve as the starting point for the model to generate text.
  3. Maximum Token Limit (Max_tokens): This parameter caps the number of tokens (words or word pieces) in the generated text, helping manage the length of the output.
  4. Temperature Control: Temperature, on a scale from 0 to 1, influences the level of randomness in the generated text. Higher values result in more diversity, while lower values make the text more predictable.
  5. Top Probability (Top_p): This value, also ranging from 0 to 1, affects the likelihood of choosing tokens. Lower values prioritize common tokens, while higher values encourage the model to explore a broader range.
  6. Number of Responses (N): N denotes how many responses the model generates for a given prompt. Having multiple responses can yield diverse outputs but comes with increased costs.
  7. Stop Conditions: Stop conditions are specific words or phrases that, when encountered in the generated text, halt the generation process. They are useful for controlling output length and content.

These hyperparameters are interconnected, and their combinations can have complex effects on the cost and quality of the generated text. The sketch below shows how they appear together in a single call.
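Here is a minimal sketch showing these hyperparameters in one call. It uses autogen.Completion (the openai<1 interface this section assumes); the prompt, stop sequence, and values are illustrative:

import autogen

response = autogen.Completion.create(
    config_list=config_list,   # model/endpoint configurations
    model="gpt-3.5-turbo",     # 1. model selection
    prompt="Summarize the benefits of multi-agent systems.",  # 2. prompt
    max_tokens=256,            # 3. cap on generated tokens
    temperature=0.7,           # 4. sampling randomness (0 to 1)
    top_p=0.9,                 # 5. nucleus sampling threshold
    n=2,                       # 6. number of responses
    stop=["\n\n"],             # 7. halt generation at a blank line
)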

Using AutoGen for Tuning

You can use AutoGen to tune your LLM inference based on:

  • Validation Data: Collect diverse instances to validate the effectiveness of your tuning process. These instances are typically stored as dictionaries, each containing problem descriptions and solutions.
  • Evaluation Function: Create an evaluation function to assess the quality of responses based on validation data. This function takes a list of responses and other inputs from the validation data and outputs metrics, such as success.
  • Metric to Optimize: Choose a metric to optimize, usually based on aggregated metrics across the validation data. For instance, you can optimize for "success" with different optimization modes.
  • Search Space: Define the search space for each hyperparameter. For example, specify the model, prompt/messages, max_tokens, and other parameters, either as constants or using predefined search ranges.
  • Budgets: Set budgets for inference and optimization. The inference budget pertains to the average cost per data instance, and the optimization budget determines the total budget allocated for the tuning process.

To perform tuning, use autogen.Completion.tune, which will return the optimized configuration and provide insights into all the tried configurations and results, as in the sketch below.
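A minimal sketch of a tuning call; the data, evaluation function, and budget values are illustrative placeholders you would supply yourself:

import autogen

config, analysis = autogen.Completion.tune(
    data=tune_data,            # list of dicts, e.g., {"problem": ..., "solution": ...}
    metric="success",          # metric reported by eval_func
    mode="max",                # optimization mode: maximize the metric
    eval_func=eval_func,       # your evaluation function
    inference_budget=0.05,     # average $ per data instance
    optimization_budget=3,     # total $ for the tuning process
    num_samples=-1,            # try as many configs as the budget allows
    prompt="{problem}",        # search space: here a constant prompt template
)
print(config)                  # the optimized configuration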

API Unification

Use autogen.OpenAIWrapper.create() to create completions for both chat and non-chat models, and for both the OpenAI API and the Azure OpenAI API. This unifies API usage across different models and endpoints, as sketched below.
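A minimal sketch of the unified interface, assuming config_list holds entries for both OpenAI and Azure OpenAI endpoints:

from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)
# The same create() call works for chat models on either endpoint type.
response = client.create(messages=[{"role": "user", "content": "2 + 2 ="}])
print(client.extract_text_or_completion_object(response))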

Caching

API call results are cached locally for reproducibility and cost savings. You can control caching behavior by specifying a seed, as in the sketch below.
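A minimal sketch of seed-controlled caching with OpenAIWrapper; the cache_seed keyword follows pyautogen 0.2's convention, so treat the exact name as an assumption for your installed version:

from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)
r1 = client.create(messages=[{"role": "user", "content": "Hi"}], cache_seed=42)
r2 = client.create(messages=[{"role": "user", "content": "Hi"}], cache_seed=42)  # served from cache
r3 = client.create(messages=[{"role": "user", "content": "Hi"}], cache_seed=7)   # different seed, fresh call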

Error Handling

AutoGen allows you to mitigate runtime errors by passing a list of configurations for different models/endpoints. It will try each configuration in turn until a valid result is returned, which is beneficial when rate limits are a concern. A sketch follows.
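A minimal sketch of multi-config fallback; the keys and model names are placeholders:

from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=[
    {"model": "gpt-4", "api_key": "KEY_1"},          # tried first
    {"model": "gpt-3.5-turbo", "api_key": "KEY_2"},  # fallback on error or rate limit
])
response = client.create(messages=[{"role": "user", "content": "Hello"}])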

Templating

Templates in prompts and messages can be automatically populated with context, making it more convenient to work with dynamic content, as in the sketch below.
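A minimal sketch of templating via the context argument; the template and context values are illustrative:

from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)
# "{problem}" in the message template is filled in from the context dict.
response = client.create(
    context={"problem": "What is 2 + 2?"},
    messages=[{"role": "user", "content": "Solve this problem: {problem}"}],
)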

Logging

AutoGen provides logging features for API calls, enabling you to track and analyze the history of API requests and responses for debugging and analysis. You can switch between compact and individual API call logging formats, as in the sketch below.
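A minimal sketch of logging with the openai<1 interface this section describes (autogen.ChatCompletion); compact=False switches to the individual-call format:

import autogen

autogen.ChatCompletion.start_logging(compact=False)  # individual API call format
# ... make API calls or run agent conversations here ...
history = autogen.ChatCompletion.logged_history      # logged requests and responses
autogen.ChatCompletion.stop_logging()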

These capabilities make AutoGen a valuable tool for fine-tuning and optimizing LLM inference to suit your specific requirements and constraints.

Conclusion

In this journey through AutoGen, we've unveiled the blueprint for a future where human-AI collaboration knows no bounds. This multi-agent conversation framework empowers us to assemble our personalized AI dream teams, erasing the lines between humans and machines. AutoGen propels us into a realm of limitless possibilities. It streamlines complex tasks, maximizes the potential of LLMs, and enables the development of the next generation of LLM applications. As we conclude, the question is not "if" but "how" you'll embark on your own AutoGen-powered journey and embrace a world where collaboration is truly boundless. Start building, start innovating, and unlock the potential of AutoGen today!

Key Takeaways

  • AutoGen introduces a new era where you can create your personalized AI dream team, composed of conversable agents skilled in various domains, working seamlessly together.
  • AutoGen streamlines complex tasks and automates workflows, making it a powerful tool for orchestrating and optimizing tasks involving Large Language Models (LLMs).
  • Managing API keys and sensitive data securely is paramount when working with AutoGen. It’s essential to follow best practices to protect your information.
  • The config_list is a crucial component, enabling agents to adapt and excel in various tasks by efficiently handling multiple configurations and interactions with the OpenAI API.

Frequently Asked Questions

Q1: Can I use AutoGen for dynamic conversations?

A: Yes, AutoGen is designed for dynamic conversation patterns. It supports features like registered auto-reply and LLM-based function calls, allowing for adaptable and responsive conversations.

Q2: What makes AutoGen a valuable tool for developers and AI enthusiasts?

A: AutoGen simplifies the development of advanced AI applications, making it accessible for developers to harness the power of multi-agent conversations. It empowers users to build their personalized AI teams, fostering collaboration between humans and machines.

Q3: Is there a secure way to manage API keys with AutoGen?

A: Yes, it’s essential to manage API keys securely. AutoGen provides guidelines on obtaining and securely storing API keys, including using environment variables, text files, or .env files to protect sensitive data.

Q4: How can I get started with AutoGen and create my personalized AI team?

A: To get started with AutoGen, refer to the guidelines provided in this article, set up your development environment, and explore the diverse applications and conversation patterns it offers. The possibilities are boundless.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Hello! I'm Divya. I have a deep passion for constantly expanding my knowledge and skills in the ever-evolving world of technology, and I love to share my knowledge and insights with others, fostering a culture of continuous learning and growth.
