Llama-Agents: A Comprehensive Guide to Multi-Agent Systems as a Service

Abhishek Kumar | Last Updated: 02 Jul, 2024

Introduction

An “agent” is like a smart assistant who can think and make decisions independently. When you give it a task or ask a question, it doesn’t just follow a script—it understands what you need, figures out the best way to handle it, and then gives you the right answer or completes the task.

For instance, imagine you ask an agent to find the best route to a new restaurant. The agent will consider factors like traffic conditions, distance, and travel time, and then provide you with the optimal route. It’s like having a personal assistant who is always available, learns from experience, and gets better at helping you over time. This guide explores how such intelligent agents can be deployed as a service to enhance decision-making, improve efficiency, and drive innovation across domains. Whether you are a researcher, developer, or industry professional, it provides the knowledge and tools you need to understand Llama-Agents and implement scalable agents-as-a-service solutions.

The key agent components can include the following, but are not limited to them (a conceptual sketch follows the list):

  • Breaking down a complex question into smaller ones
  • Choosing an external Tool to use + coming up with parameters for calling the Tool
  • Planning out a set of tasks
  • Storing previously completed tasks in a memory module
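
To make these capabilities concrete, below is a minimal, purely illustrative sketch of an agent loop in Python. The helpers plan, choose_tool, and synthesize are hypothetical stand-ins for LLM calls; they are not part of llama-agents or any specific framework:

def run_agent(question: str, llm, tools: dict, memory: list) -> str:
    # Break the complex question into smaller sub-questions
    subtasks = llm.plan(question)
    for task in subtasks:
        # Choose an external tool and come up with parameters for calling it
        tool_name, params = llm.choose_tool(task, list(tools))
        result = tools[tool_name](**params)
        # Store the completed task in the memory module
        memory.append((task, result))
    # Compose the final answer from the question and accumulated results
    return llm.synthesize(question, memory)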

Llama-agents: Agents-as-a-service

Llama-agents is an async-first framework for building, iterating, and productionizing multi-agent systems, including multi-agent communication, distributed tool execution, human-in-the-loop, and more!

In llama-agents, each agent is treated as a service that endlessly processes incoming tasks, pulling messages from and publishing messages to a message queue.

The control plane is at the top of a llama-agents system. It tracks ongoing tasks and services in the network and decides which service should handle the next task step using an orchestrator.

The overall system layout of Llama-Agents is pictured below.

Components of a Llama-agents System

In Llama-agents, several key components make up the overall system:

  • Message queue: The message queue acts as a queue for all services and the control plane. It has methods for publishing messages to named queues and delegates messages to consumers.
  • Control plane: The control plane is the central gateway to the llama-agents system. It tracks current tasks and the services registered to the system, and it holds the orchestrator.
  • Orchestrator: This module handles incoming tasks and decides which service to send them to, as well as how to handle results from services. An orchestrator can be agentic (with an LLM making decisions), explicit (with a query pipeline defining a flow), a mix of both, or completely custom.
  • Services: Services are where the actual work happens. A service accepts an incoming task and its context, processes it, and publishes a result.
  • Tool service: A tool service is a special service used to offload the computation of agent tools. Agents can instead be equipped with a meta-tool that calls the tool service (see the sketch after this list).
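
The examples later in this article do not exercise the tool service directly, so here is a hedged sketch of offloading tool execution, loosely based on the llama-agents README; the exact MetaServiceTool.from_tool_service signature is an assumption and may differ between versions:

import asyncio
from llama_agents import SimpleMessageQueue, ToolService, MetaServiceTool
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

tool = FunctionTool.from_defaults(fn=get_the_secret_fact)
message_queue = SimpleMessageQueue()

# The tool service executes the tool outside the agent's own process
tool_service = ToolService(
    message_queue=message_queue,
    tools=[tool],
    running=True,
    step_interval=0.5,
)

# The meta-tool makes the remote tool look like a local one to the agent
# (assumed async constructor per the project README at the time of writing)
meta_tool = asyncio.run(
    MetaServiceTool.from_tool_service(
        tool.metadata.name,
        message_queue=message_queue,
        tool_service=tool_service,
    )
)
worker = FunctionCallingAgentWorker.from_tools([meta_tool], llm=OpenAI())
agent = worker.as_agent()  # wrap in an AgentService as in the examples below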

Key Features of `Llama-agents`

Here are the key features of Llama-agents:

  • Distributed Architecture: Each agent operates as an independent microservice.
  • Standardized Communication: Seamless interaction through a central control plane.
  • Flexible Orchestration: Define explicit flows or rely on the smart orchestrator.
  • Easy Deployment: Effortlessly launch, scale, and monitor agents.
  • Scalable Performance: Monitor system and agent performance with built-in observability tools.

Comparison of Llama Agents with other Multi-Agent Frameworks

CrewAI

CrewAI focuses on collaborative AI, enabling multiple agents to work together towards common goals. It is designed to facilitate complex multi-agent environments.

Key Features

  • Collaboration Focused: Specializes in environments where multiple agents must interact and cooperate.
  • Robust Simulation Environment: Provides a rich simulation environment for testing and training collaborative agents.
  • Advanced Communication Protocols: Implements advanced protocols for inter-agent communication and coordination.
  • Scenario-Based Training: Allows scenario-based training and testing, which is crucial for developing robust collaborative strategies.

Autogen

AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes, employing combinations of LLMs, human inputs, and tools.

Key Features

  • Real-Time Adaptability: Agents can learn and adapt in real time to changing environments.
  • Automated Agent Generation: Automates the creation and training of agents, reducing the manual effort required.
  • Dynamic Environment Handling: Optimized for environments that change frequently and unpredictably.
  • Self-Improving Agents: Agents continuously improve their performance through iterative learning.

Also read: Strategic Team Building with AutoGen AI

LangGraph

LangGraph enables us to create stateful, multi-actor applications utilizing LLMs as easily as possible. It extends LangChain’s capabilities, introducing the ability to create and manage cyclical graphs, which are pivotal for developing sophisticated agent runtimes.

Key Features

  • Graph-Based Interaction: Utilizes a graph-based approach for defining and managing agent interactions.
  • Modularity: Offers modular components for flexibility in agent design.
  • Interoperability: Facilitates integration with other systems through standardized interfaces.
  • Scalability: Supports scaling up agent networks efficiently.

What’s Different About Llama Agents Compared to Other Frameworks?

  • Llama Agents stands out with its async-first design, making it highly efficient for building and managing complex multi-agent systems that require real-time communication and human oversight.
  • CrewAI vs. Llama Agents: CrewAI is primarily designed for collaborative scenarios with advanced simulation environments, whereas Llama Agents is optimized for asynchronous task handling, real-time task execution, and orchestration with human-in-the-loop capabilities.
  • Autogen vs. Llama Agents: Autogen emphasizes real-time adaptability and continuous learning, automating agent creation and training. In contrast, Llama Agents provides more control over agent customization and orchestration, focusing on efficient asynchronous operations and human oversight.
  • LangGraph vs. Llama Agents: The Llama Agents framework excels in tasks that demand a structured, managed approach, especially when crafting explicit task sequences. In contrast, LangGraph provides versatility and scalability, making it well-suited for agile and modular agent-driven operations.

Llama-Agents Installation

Llama agents can be installed with pip and rely mainly on llama-index-core:

pip install llama-agents

If you don’t already have llama-index installed, you’ll also need:

pip install llama-index-agent-openai

Basic Implementation of Llama Agents

The implementation flow is as follows:

  • Create Tool
  • Create Agent
  • Create Components
  • Llama kickoff

The quickest way to get started is to use an existing agent (or agents) and integrate them into a launcher.

Below is a simple example using two agents from llama-index.

First, let’s set up the agents and initial components for our llama-agents system:

Setting OPENAI_API_KEY on Windows

set OPENAI_API_KEY=sk-XXXXXXX

For Linux and macOS

export OPENAI_API_KEY="sk-XXXXX"
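
Alternatively, you can set the key from Python itself before creating any agents; os.environ works the same way on every platform:

import os

# Equivalent to the shell commands above, but scoped to this process
os.environ["OPENAI_API_KEY"] = "sk-XXXXX"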

CODE:

from llama_agents import (
    AgentService,
    AgentOrchestrator,
    ControlPlaneServer,
    LocalLauncher,
    SimpleMessageQueue,
)
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

# Create tool
def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

tool = FunctionTool.from_defaults(fn=get_the_secret_fact)

# Create agents
worker1 = FunctionCallingAgentWorker.from_tools([tool], llm=OpenAI())
worker2 = FunctionCallingAgentWorker.from_tools([], llm=OpenAI())
agent1 = worker1.as_agent()
agent2 = worker2.as_agent()

# Create key components
message_queue = SimpleMessageQueue()
control_plane = ControlPlaneServer(
    message_queue=message_queue,
    orchestrator=AgentOrchestrator(llm=OpenAI()),
)
agent_server_1 = AgentService(
    agent=agent1,
    message_queue=message_queue,
    description="Useful for getting the secret fact.",
    service_name="secret_fact_agent",
)
agent_server_2 = AgentService(
    agent=agent2,
    message_queue=message_queue,
    description="Useful for getting random dumb facts.",
    service_name="dumb_fact_agent",
)

# Llama kickoff
launcher = LocalLauncher([agent_server_1, agent_server_2], control_plane, message_queue)
result = launcher.launch_single("What is the secret fact?")
print(f"Result: {result}")

Here we created a tool and assigned it to the first agent, then created the key components (the SimpleMessageQueue and the control plane), and finally kicked everything off with the LocalLauncher.

Output


Here comes a difference between CrewAI and Llama Agents: up to this point, everything has been quite similar. Now we will host the agents as services, modifying the code slightly.

CODE:

from llama_agents import (
    AgentService,
    HumanService,
    AgentOrchestrator,
    CallableMessageConsumer,
    ControlPlaneServer,
    ServerLauncher,
    SimpleMessageQueue,
    QueueMessage,
)
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

# create an agent
def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

tool = FunctionTool.from_defaults(fn=get_the_secret_fact)
worker1 = FunctionCallingAgentWorker.from_tools([tool], llm=OpenAI())
worker2 = FunctionCallingAgentWorker.from_tools([], llm=OpenAI())
agent1 = worker1.as_agent()
agent2 = worker2.as_agent()

# create our multi-agent framework components
message_queue = SimpleMessageQueue()
queue_client = message_queue.client

control_plane = ControlPlaneServer(
    message_queue=queue_client,
    orchestrator=AgentOrchestrator(llm=OpenAI()),
)
agent_server_1 = AgentService(
    agent=agent1,
    message_queue=queue_client,
    description="Useful for getting the secret fact.",
    service_name="secret_fact_agent",
    host="127.0.0.1",
    port=8002,
)
agent_server_2 = AgentService(
    agent=agent2,
    message_queue=queue_client,
    description="Useful for getting random dumb facts.",
    service_name="dumb_fact_agent",
    host="127.0.0.1",
    port=8003,
)
human_service = HumanService(
    message_queue=queue_client,
    description="Answers queries about math.",
    host="127.0.0.1",
    port=8004,
)

# additional human consumer
def handle_result(message: QueueMessage) -> None:
    print("Got result:", message.data)

human_consumer = CallableMessageConsumer(handler=handle_result, message_type="human")

# launch it
launcher = ServerLauncher(
    [agent_server_1, agent_server_2, human_service],
    control_plane,
    message_queue,
    additional_consumers=[human_consumer],
)
launcher.launch_servers()

Almost everything is the same as before, but we have added a human-in-the-loop service and assigned a host and port to each service.
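
With the servers running, tasks are submitted to the control plane over HTTP instead of through a local launcher. Below is a minimal client sketch, assuming the LlamaAgentsClient shipped with llama-agents and the default control-plane address of http://127.0.0.1:8000:

from llama_agents import LlamaAgentsClient

client = LlamaAgentsClient("http://127.0.0.1:8000")  # control plane URL
task_id = client.create_task("What is the secret fact?")
# ... wait a few seconds for the agents to finish ...
result = client.get_task_result(task_id)  # the task result, or None if not finished
print(result)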

Output


Llama Agents Monitor

To monitor the agents' services, open a new terminal and run:

llama-agents monitor --control-plane-url http://127.0.0.1:8000


Next, we will build sequential and hierarchical pipelines.

Sequential and Hierarchical pipelines 

Here are the pipelines:

Sequential

from llama_agents import (
    AgentService,
    ControlPlaneServer,
    SimpleMessageQueue,
    PipelineOrchestrator,
    ServiceComponent,
    LocalLauncher,
)
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI
from llama_index.agent.openai import OpenAIAgent

# create an agent
def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

tool = FunctionTool.from_defaults(fn=get_the_secret_fact)
worker1 = FunctionCallingAgentWorker.from_tools([tool], llm=OpenAI())
agent1 = worker1.as_agent()
agent2 = OpenAIAgent.from_tools(
    [], system_prompt="Repeat the input with a silly fact added."
)

# create our multi-agent framework components
message_queue = SimpleMessageQueue()
agent_server_1 = AgentService(
    agent=agent1,
    message_queue=message_queue,
    description="Useful for getting the secret fact.",
    service_name="secret_fact_agent",
)
agent_server_2 = AgentService(
    agent=agent2,
    message_queue=message_queue,
    description="Useful for getting random dumb facts.",
    service_name="dumb_fact_agent",
)
agent_component_1 = ServiceComponent.from_service_definition(agent_server_1)
agent_component_2 = ServiceComponent.from_service_definition(agent_server_2)

pipeline = QueryPipeline(
    chain=[
        agent_component_1,
        agent_component_2,
    ]
)
pipeline_orchestrator = PipelineOrchestrator(pipeline)
control_plane = ControlPlaneServer(message_queue, pipeline_orchestrator)

# launch it
launcher = LocalLauncher([agent_server_1, agent_server_2], control_plane, message_queue)
result = launcher.launch_single("What is the secret fact?")
print(f"Result: {result}")


In the output, the two responses arrive in sequence: the secret_fact_agent answers first, and its output is then passed to the dumb_fact_agent.

Hierarchical

We are converting agent1 to a tool, which will be assigned to agent2. Then, we will create the agent service.

from llama_agents import (
    AgentService,
    ControlPlaneServer,
    SimpleMessageQueue,
    PipelineOrchestrator,
    ServiceComponent,
    LocalLauncher,
)
from llama_agents.tools import AgentServiceTool
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI
from llama_index.agent.openai import OpenAIAgent

# create an agent
def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

tool = FunctionTool.from_defaults(fn=get_the_secret_fact)
worker1 = FunctionCallingAgentWorker.from_tools([tool], llm=OpenAI())
agent1 = worker1.as_agent()

# create our multi-agent framework components
message_queue = SimpleMessageQueue()
agent1_server = AgentService(
    agent=agent1,
    message_queue=message_queue,
    description="Useful for getting the secret fact.",
    service_name="secret_fact_agent",
)
agent1_server_tool = AgentServiceTool.from_service_definition(
    message_queue=message_queue, service_definition=agent1_server.service_definition
)
agent2 = OpenAIAgent.from_tools(
    [agent1_server_tool],
    system_prompt="Perform the task, return the result as well as a funny joke.",
)
agent2_server = AgentService(
    agent=agent2,
    message_queue=message_queue,
    description="Useful for telling funny jokes.",
    service_name="dumb_fact_agent",
)
agent2_component = ServiceComponent.from_service_definition(agent2_server)

pipeline = QueryPipeline(chain=[agent2_component])
pipeline_orchestrator = PipelineOrchestrator(pipeline)
control_plane = ControlPlaneServer(message_queue, pipeline_orchestrator)

# launch it
launcher = LocalLauncher([agent1_server, agent2_server], control_plane, message_queue)
result = launcher.launch_single("What is the secret fact?")
print(f"Result: {result}")

Here you can see in the result that the first agent, secret_fact_agent, is invoked as a tool, and the final output comes from the second agent, dumb_fact_agent.

Human in the Loop Service

Here, we are including a human component in the loop: 

from llama_agents import (
    AgentService,
    HumanService,
    ControlPlaneServer,
    SimpleMessageQueue,
    PipelineOrchestrator,
    ServiceComponent,
    LocalLauncher,
)
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.tools import FunctionTool
from llama_index.core.query_pipeline import RouterComponent, QueryPipeline
from llama_index.llms.openai import OpenAI
from llama_index.core.selectors import PydanticSingleSelector

# create an agent
def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

tool = FunctionTool.from_defaults(fn=get_the_secret_fact)

# create our multi-agent framework components
message_queue = SimpleMessageQueue()

worker = FunctionCallingAgentWorker.from_tools([tool], llm=OpenAI())
agent = worker.as_agent()
agent_service = AgentService(
    agent=agent,
    message_queue=message_queue,
    description="Useful for getting the secret fact.",
    service_name="secret_fact_agent",
)
agent_component = ServiceComponent.from_service_definition(agent_service)

human_service = HumanService(
    message_queue=message_queue, description="Answers queries about math."
)
human_component = ServiceComponent.from_service_definition(human_service)

pipeline = QueryPipeline(
    chain=[
        RouterComponent(
            selector=PydanticSingleSelector.from_defaults(llm=OpenAI()),
            choices=[agent_service.description, human_service.description],
            components=[agent_component, human_component],
        )
    ]
)
pipeline_orchestrator = PipelineOrchestrator(pipeline)
control_plane = ControlPlaneServer(message_queue, pipeline_orchestrator)

# launch it
launcher = LocalLauncher([agent_service, human_service], control_plane, message_queue)
result = launcher.launch_single("What is 1 + 2 + 3 + 4 + 5?")
print(f"Result: {result}")

You can see in the output that the agent asks us to provide the response. Building on this, you can implement other patterns, such as using an agent server as a tool or a query-rewrite RAG pipeline (a brief sketch follows).
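
As a pointer for the query-rewrite RAG idea, here is a hedged, minimal sketch using llama-index's query pipeline (independent of llama-agents); the prompt wording and the tiny in-memory index are illustrative assumptions:

from llama_index.core import Document, VectorStoreIndex
from llama_index.core.prompts import PromptTemplate
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI

# A tiny index standing in for a real document store (illustrative only)
index = VectorStoreIndex.from_documents(
    [Document(text="The secret fact is: A baby llama is called a 'Cria'.")]
)
retriever = index.as_retriever()

# Rewrite the user query with an LLM before retrieval, then fetch matching nodes
rewrite_prompt = PromptTemplate(
    "Rewrite this question so it retrieves relevant documents: {query_str}"
)
pipeline = QueryPipeline(chain=[rewrite_prompt, OpenAI(), retriever])
nodes = pipeline.run(query_str="What do you call a young llama?")
print(nodes)

A pipeline like this could then be wrapped in a PipelineOrchestrator and served exactly as in the examples above.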

Conclusion

Agents are automated decision engines that process user inputs by breaking down queries, selecting tools, planning tasks, and storing completed tasks. The llama-agents framework excels in managing multi-agent systems with asynchronous task handling, real-time execution, and human oversight.

Llama agents include key components like the message queue, control plane, orchestrator, services, and tool services. Each agent operates independently, enabling scalable communication through a central control plane. Compared to CrewAI (focused on collaborative environments) and Autogen (emphasizing real-time adaptability), llama agents offer flexible orchestration and efficient asynchronous operations.

Installing and configuring llama agents involves creating tools, agents, and essential components. The system supports sequential and hierarchical pipelines and can include human-in-the-loop services, making it versatile for complex AI solutions. In summary, llama agents provide a robust, scalable framework for developing responsive multi-agent systems, making them valuable for sophisticated AI applications.

Frequently Asked Questions

Q1. What are agents of AI?

Ans. Agents of AI are automated reasoning and decision engines that process user inputs, make internal decisions, execute queries, and deliver accurate results. They can break down complex questions, choose the appropriate tools, plan tasks, and store completed tasks in a memory module.

Q2. Is ChatGPT an AI agent?

Ans. Yes, ChatGPT is an AI agent. It processes user inputs, generates responses based on large-scale language models, and provides information or assistance in various tasks, demonstrating automated reasoning and decision-making capabilities.

Q3. How to create an AI agent?

Ans. Creating an AI agent involves several steps:
1. Define the Agent’s Purpose: Determine the tasks or problems the agent will handle.
2. Select Tools and Models: Choose appropriate algorithms, machine learning models, and tools.
3. Develop the Agent: Write the code to implement the agent’s logic and integrate the tools and models.
4. Train the Agent: Use relevant data to train the agent, improving its performance over time.
5. Test and Deploy: Test the agent thoroughly and deploy it in the desired environment.
Frameworks like `llama-agents` can be used to streamline the process of building and deploying AI agents.

Q4. Is a human an AI agent?

Ans. No, a human is not an AI agent. While humans can perform reasoning and decision-making tasks, AI agents are specifically designed as automated systems that use artificial intelligence technologies to emulate certain aspects of human cognitive functions.

