AI agents are designed to act autonomously, solving problems and executing tasks in dynamic environments. A key feature enabling this adaptability is AutoGen’s code executors. Combined with LLMs, code executors let AI agents generate, evaluate, and execute code in real time, bridging the gap between static AI models and actionable intelligence. By automating workflows, performing data analysis, and debugging complex systems, they transform agents from mere thinkers into effective doers. In this article, we will learn more about code executors in AutoGen and how to implement them.
AutoGen offers three kinds of code executors, each suited to different purposes: the command line executor, the Jupyter code executor, and custom executors that you define yourself.
These code executors can run either on the host machine (locally) or inside Docker containers.
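For orientation, here is where each of these executor types lives in the library (a quick reference; a custom executor is simply a class you write yourself by subclassing `CodeExecutor`):

```python
# Command line executors: each code block is written to a file and run as a script,
# either on the host machine or inside a Docker container.
from autogen.coding import LocalCommandLineCodeExecutor, DockerCommandLineCodeExecutor

# Jupyter code executor: code blocks run in a shared, stateful IPython kernel.
from autogen.coding.jupyter import JupyterCodeExecutor

# Custom executors: subclass this and implement your own execution logic.
from autogen.coding import CodeExecutor
```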
Also Read: 4 Steps to Build Multi-Agent Nested Chats with AutoGen
Now let’s learn how you can use these different code executors in AutoGen:
Before building AI agents, ensure you have the necessary API keys for the required LLMs.
Load the .env file with the API keys needed.
from dotenv import load_dotenv
load_dotenv('./.env')
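If you want to confirm the key was actually picked up before building the agents, a minimal check (assuming your .env file defines `OPENAI_API_KEY`; adjust the name to whatever your LLM provider expects) looks like this:

```python
import os

# Hypothetical variable name - use whatever key name your LLM provider expects.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY not found - check your .env file")
```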
The key library versions used in this article are:
autogen-agentchat – 0.2.38
jupyter_kernel_gateway – 3.0.1
Let’s build an AI agent to find the offers and discounts available on an e-commerce website, using the command line executor. Here are the steps to follow.
1. Import the necessary libraries.
from autogen import ConversableAgent, AssistantAgent, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor, DockerCommandLineCodeExecutor
2. Define the agents.
user_proxy = UserProxyAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
    human_input_mode="TERMINATE",
    code_execution_config=False
)
code_writer_agent = ConversableAgent(
    name="CodeWriter",
    system_message="""You are a Python developer.
You use your coding skill to solve problems.
Once the task is done, return 'TERMINATE'.""",
    llm_config={"config_list": [{"model": "gpt-4o-mini"}]},
)
local_executor = LocalCommandLineCodeExecutor(
    timeout=15,
    work_dir='./code files'
)

local_executor_agent = ConversableAgent(
    "local_executor_agent",
    llm_config=False,
    code_execution_config={"executor": local_executor},
    human_input_mode="ALWAYS",
)
We are using the ‘local_executor’ in the code_execution_config of the local_executor_agent.
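Since DockerCommandLineCodeExecutor is also imported, here is a hedged sketch of how the same setup could run the generated code inside a container instead of on the host. It assumes Docker is installed and the daemon is running; the image name is only an example.

```python
# Alternative: execute the generated code inside a Docker container.
with DockerCommandLineCodeExecutor(
    image="python:3.11-slim",   # example image; any image with Python works
    timeout=60,
    work_dir="./code files",    # scripts are mounted into the container from here
) as docker_executor:
    docker_executor_agent = ConversableAgent(
        "docker_executor_agent",
        llm_config=False,
        code_execution_config={"executor": docker_executor},
        human_input_mode="ALWAYS",
    )
    # Use docker_executor_agent in place of local_executor_agent.
```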
3. Define the messages used to initiate the chat.
messages = ["""To check whether there are any offers or discounts available on a given e-commerce website -
https://www.flipkart.com/
Follow these steps,
1. download the html page of the given URL
2. we only need html content, so remove any CSS, JavaScript, and Image tags content
3. save the remaining html content.
""" ,
"read the text and list all the offers and discounts available"]
# Initialize the chat
chat_result = local_executor_agent.initiate_chat(
    code_writer_agent,
    message=messages[0],
)
It will ask for human input after each message from the CodeWriter agent. You just need to press the ‘Enter’ key to execute the code written by the agent. We can also give further instructions if there is any problem with the code.
Here are the questions we have asked and the output at the end.
As we can see, with the mentioned questions, we can get a list of offers and discounts from an e-commerce website.
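In the run above, the second question was typed in at the human-input prompt. If you would rather send it programmatically, one option (a sketch using the same conversation, not the article's exact flow) is:

```python
# Send the follow-up question in the same conversation.
local_executor_agent.send(
    message=messages[1],
    recipient=code_writer_agent,
    request_reply=True,
)
```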
Also Read: Hands-on Guide to Building Multi-Agent Chatbots with AutoGen
With the Jupyter code executor, we can access variables defined in one code block from another code block, unlike with the command line executor (we will verify this below, right after starting the server).
Now, let’s try to build an ML model using this.
1. Import the additional methods.
from autogen.coding.jupyter import LocalJupyterServer, DockerJupyterServer, JupyterCodeExecutor
from pathlib import Path
2. Initialize the Jupyter server and output directory.
server = LocalJupyterServer()
output_dir = Path("coding")
output_dir.mkdir(exist_ok=True)  # don't fail if the directory already exists
Note that LocalJupyterServer may not function on Windows due to a bug. In this case, you can use the DockerJupyterServer instead or use the EmbeddedIPythonCodeExecutor.
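Before wiring the executor into an agent, you can verify the statefulness claim directly. A minimal, optional sketch (assuming the server and output_dir defined above) executes two separate code blocks that share a variable through the same kernel:

```python
from autogen.coding import CodeBlock
from autogen.coding.jupyter import JupyterCodeExecutor

# Two separate blocks share state because they run in the same kernel.
executor = JupyterCodeExecutor(server, output_dir=output_dir)
executor.execute_code_blocks([CodeBlock(language="python", code="x = 3")])
result = executor.execute_code_blocks([CodeBlock(language="python", code="print(x)")])
print(result.output)  # expected to print 3
```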
3. Define the executor agent and writer agent with a custom system message.
jupyter_executor_agent = ConversableAgent(
    name="jupyter_executor_agent",
    llm_config=False,
    code_execution_config={
        "executor": JupyterCodeExecutor(server, output_dir=output_dir),
    },
    human_input_mode="ALWAYS",
)
code_writer_system_message = """
You have been given coding capability to solve tasks using Python code in a stateful IPython kernel.
You are responsible for writing the code, and the user is responsible for executing the code.
When you write Python code, put the code in a markdown code block with the language set to Python.
For example:
```python
x = 3
```
You can use the variable `x` in subsequent code blocks.
```python
print(x)
```
Always use print statements for the output of the code.
Write code incrementally and leverage the statefulness of the kernel to avoid repeating code.
Import libraries in a separate code block.
Define a function or a class in a separate code block.
Run code that produces output in a separate code block.
Run code that involves expensive operations like download, upload, and call external APIs in a separate code block.
When your code produces an output, the output will be returned to you.
Because you have limited conversation memory, if your code creates an image,
the output will be a path to the image instead of the image itself."""
code_writer_agent = ConversableAgent(
    "code_writer",
    system_message=code_writer_system_message,
    llm_config={"config_list": [{"model": "gpt-4o"}]},
    human_input_mode="TERMINATE",
)
4. Define the initial message and initiate the chat.
message = "read the datasets/user_behavior_dataset.csv and print what the data is about"
chat_result = jupyter_executor_agent.initiate_chat(
    code_writer_agent,
    message=message,
)
5. Once the chat is completed, we can stop the server.
server.stop()
We can print the messages as follows:
for chat in chat_result.chat_history:
    if chat['name'] == 'code_writer' and 'TERMINATE' not in chat['content']:
        print("--------agent-----------")
        print(chat['content'])
    if chat['name'] == 'jupyter_executor_agent' and 'exitcode' not in chat['content']:
        print("--------user------------")
        print(chat['content'])
Here’s a sample of the output.
As we can see, we can get the code generated by the agent and also the results after executing the code.
Also Read: Building Agentic Chatbots Using AutoGen
Now, let’s try to create a custom executor that runs the code in the same Jupyter notebook where we are creating the executor. This way, we can read a CSV file and then ask an agent to build an ML model on the already imported file.
Here’s how we’ll do it.
1. Import the necessary libraries.
import pandas as pd
from typing import List
from IPython import get_ipython
from autogen.coding import CodeBlock, CodeExecutor, CodeExtractor, CodeResult, MarkdownCodeExtractor
2. Define the executor that can extract and run the code from jupyter cells.
class NotebookExecutor(CodeExecutor):
    @property
    def code_extractor(self) -> CodeExtractor:
        # Extract code from markdown blocks.
        return MarkdownCodeExtractor()

    def __init__(self) -> None:
        # Get the current IPython instance running in this notebook.
        self._ipython = get_ipython()

    def execute_code_blocks(self, code_blocks: List[CodeBlock]) -> CodeResult:
        log = ""
        exitcode = 0
        for code_block in code_blocks:
            # Run each block in the notebook's own kernel, capturing stdout/stderr.
            result = self._ipython.run_cell("%%capture --no-display cap\n" + code_block.code)
            log += self._ipython.ev("cap.stdout")
            log += self._ipython.ev("cap.stderr")
            if result.result is not None:
                log += str(result.result)
            exitcode = 0 if result.success else 1
            if result.error_before_exec is not None:
                log += f"\n{result.error_before_exec}"
                exitcode = 1
            if result.error_in_exec is not None:
                log += f"\n{result.error_in_exec}"
                exitcode = 1
            if exitcode != 0:
                break
        return CodeResult(exit_code=exitcode, output=log)
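Since this executor runs code through the notebook’s own kernel, you can sanity-check it in isolation before handing it to an agent. A minimal sketch follows; it only works when you are already inside a Jupyter/IPython session, since `get_ipython()` returns None in a plain script.

```python
# Quick check: run a code block through the custom executor directly.
executor = NotebookExecutor()
result = executor.execute_code_blocks(
    [CodeBlock(language="python", code="print('hello from the notebook kernel')")]
)
print(result.exit_code)  # 0 on success
print(result.output)     # captured stdout from the block
```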
3. Define the agents.
code_writer_agent = ConversableAgent(
    name="CodeWriter",
    system_message="You are a helpful AI assistant.\n"
    "You use your coding skill to solve problems.\n"
    "You have access to an IPython kernel to execute Python code.\n"
    "You can suggest Python code in Markdown blocks, each block is a cell.\n"
    "The code blocks will be executed in the IPython kernel in the order you suggest them.\n"
    "All necessary libraries have already been installed.\n"
    "Add return or print statements to the code to get the output.\n"
    "Once the task is done, return 'TERMINATE'.",
    llm_config={"config_list": [{"model": "gpt-4o-mini"}]},
)
code_executor_agent = ConversableAgent(
    name="CodeExecutor",
    llm_config=False,
    code_execution_config={"executor": NotebookExecutor()},
    is_termination_msg=lambda msg: "TERMINATE" in msg.get("content", "").strip().upper(),
    human_input_mode="ALWAYS"
)
4. Read the file and initiate the chat with the file.
df = pd.read_csv('datasets/mountains_vs_beaches_preferences.csv')

chat_result = code_executor_agent.initiate_chat(
    code_writer_agent,
    message="What are the column names in the dataframe defined above as df?",
)
5. We can print the chat history as follows:
for chat in chat_result.chat_history:
    if chat['name'] == 'CodeWriter' and 'TERMINATE' not in chat['content']:
        print("--------agent-----------")
        print(chat['content'])
    if chat['name'] == 'CodeExecutor' and 'exitcode' not in chat['content']:
        print("--------user------------")
        print(chat['content'])
Once again, we can see both the code generated by the agent and the results of executing it.
AutoGen’s code executors provide flexibility and functionality for AI agents to perform real-world tasks. The command line executor enables script execution, while the Jupyter code executor supports iterative development. Custom executors, on the other hand, allow developers to create tailored workflows.
These tools empower AI agents to transition from problem solvers to solution implementers. Developers can use these features to build intelligent systems that deliver actionable insights and automate complex processes.
Q1. What are Code Executors in AutoGen?
A. Code Executors in AutoGen allow AI agents to generate, execute, and evaluate code in real time. This enables agents to automate tasks, perform data analysis, debug systems, and implement dynamic workflows.
Q2. How does the Command Line Executor differ from the Jupyter Code Executor?
A. The Command Line Executor saves and executes code as separate files, ideal for tasks like file management and script execution. The Jupyter Code Executor operates in a stateful environment, allowing reuse of variables and selective re-execution of code blocks, making it more suitable for iterative coding tasks like building ML models.
Q3. Can Code Executors run on Docker?
A. Yes, both the Command Line Executor and Jupyter Code Executor can be configured to run on Docker containers, providing a flexible environment for execution.
Q4. What are Custom Code Executors used for?
A. Custom Code Executors allow developers to define specialized execution logic, such as running code within the same Jupyter notebook. This is useful for tasks requiring a high level of integration or customization.
Q5. What do I need before using Code Executors?
A. Before using Code Executors, ensure you have the necessary API keys for your preferred LLMs. You should also have the required libraries, such as `autogen-agentchat` and `jupyter_kernel_gateway`, installed in your environment.