Building Scalable Multi-Agent Systems (MAS) Using GripTape

Nibedita Dutta Last Updated : 20 Jan, 2025
11 min read

Multi-agent systems (MAS) represent a transformative approach in artificial intelligence, where multiple autonomous agents collaborate to solve complex problems and achieve shared goals. These systems excel in scenarios requiring specialization and adaptability, making them ideal for various applications, from automated trading to multi-robot coordination. With the advent of GripTape, creating multi-agent systems has become more accessible than ever. GripTape simplifies the development process by providing a robust framework that allows developers to easily design, manage, and scale their agent-based applications, enabling seamless communication and coordination among agents.

Learning Objectives

  • Understand GripTape’s modular architecture, core components, and key features, and compare it with LangChain.
  • We will demonstrate how to automate the distribution of a blog to potential real estate buyers in Gurgaon using a multi-agent system integrated with GripTape.
  • Additionally, we will showcase a Python implementation of a Retrieval-Augmented Generation (RAG) system, highlighting how GripTape’s modularity makes it easy and seamless to integrate such systems for automation.

Modularity at its Best with GripTape

Griptape is a modular Python framework designed for developing AI applications that utilize large language models (LLMs). Its architecture is built around several core components, which facilitate the creation of flexible and scalable workflows. GripTape sets itself apart through its modular design, innovative Off-Prompt™ technology, strong integration capabilities with LLMs, comprehensive documentation, community support, and flexibility in use cases. These features collectively enhance the development experience and empower users to create robust AI applications efficiently.

AI agents are specialized programs or models designed to perform tasks autonomously using LLMs, often mimicking human decision-making, reasoning, and learning. They interact with users or systems, learn from data, adapt to new information, and execute specific functions within a defined scope, like customer support, process automation, or complex data analysis. With GripTape, the creation of a multi-agent system becomes seamless and easy.

Core Components of Griptape

At the heart of GripTape are its core components, which work together to create a flexible, efficient, and powerful environment for developers.

Structures:

  • Agents: Single-task entities that perform specific functions.
  • Pipelines: Organize a sequence of tasks, allowing the output from one task to flow into the next.
  • Workflows: Configure multiple tasks to operate in parallel.
  • Tasks: The fundamental units within structures that enable interaction with engines, tools, and other Griptape components.
  • Tools: Provide capabilities for LLMs to interact with data and services. Griptape includes various built-in tools and allows for easy creation of custom tools.

Memory: 

  • Conversation Memory: Retains and retrieves information across interactions. 
  • Task Memory: Keeps large or sensitive outputs off the prompt sent to the LLM.
  • Meta Memory: Passes additional metadata to enhance context and relevance in interactions.  

Drivers and Engines:  Various drivers facilitate interactions with external resources, including prompt drivers, embedding drivers, SQL drivers, and web search drivers.

Observability and Rulesets:  Supports event tracking and logging for performance monitoring, while rulesets guide LLM behaviour with minimal prompt engineering.
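To make this concrete, here is a minimal, hedged sketch of how these components compose: a two-step Pipeline in which the second PromptTask consumes the first task's output. It assumes an OpenAI key is set in the environment and uses Griptape's Jinja-style {{ ... }} placeholders, the same templating that appears in the hands-on section later in this article.

# Minimal sketch: two PromptTasks chained in a Pipeline.
from griptape.structures import Pipeline
from griptape.tasks import PromptTask

pipeline = Pipeline()
pipeline.add_tasks(
    # {{ args[0] }} is filled in from the argument passed to run().
    PromptTask("List one emerging residential sector in Gurgaon for {{ args[0] }} buyers."),
    # {{ parent_output }} receives the previous task's output.
    PromptTask("Write a two-line teaser for this insight: {{ parent_output }}"),
)
pipeline.run("luxury")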

Key Features of Griptape


1. Modular Architecture

Griptape’s architecture is built around modular components that allow developers to create flexible and scalable applications. The core structures include agents, pipelines, and workflows, among others.

2. Tasks and Tools

Tasks are the fundamental building blocks within Griptape, enabling interaction with various engines and tools. Griptape provides a variety of built-in tools, such as Web Scraper Tools, File Manager Tools, and Prompt Summary Tools. Developers can also create custom tools tailored to their specific needs.
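As a rough illustration, here is a hedged sketch of attaching built-in tools to an agent; the tool names match those used in the hands-on section below, and an OpenAI key is assumed to be set in the environment.

from griptape.structures import Agent
from griptape.tools import PromptSummaryTool, WebScraperTool

# Agent equipped with two built-in tools; off_prompt=True keeps scraped page
# content in Task Memory instead of sending it back in the LLM prompt.
agent = Agent(
    tools=[
        WebScraperTool(off_prompt=True),
        PromptSummaryTool(off_prompt=False),
    ]
)
agent.run("Summarize the key points of https://www.example.com")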

3. Memory Management

Griptape features advanced memory management capabilities that enhance user interactions. Conversation Memory helps retain and retrieve information across interactions. Task Memory keeps large or sensitive outputs off the prompt sent to the LLM, preventing token limit overflow. Meta Memory allows passing additional metadata to the LLM, improving context relevance.
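Conversation Memory, for example, is what lets an agent carry context across successive run() calls; a minimal sketch is below (an OpenAI key is assumed, and exact retention behaviour depends on the configured memory).

from griptape.structures import Agent

agent = Agent()  # Agents keep Conversation Memory across runs by default
agent.run("My budget for a Gurgaon apartment is 3 crore INR.")
# This follow-up relies on the budget remembered from the previous turn.
agent.run("Given my budget, should I look at ready-to-move or under-construction flats?")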

4. Drivers and Engines

Griptape includes various drivers that facilitate interactions with external resources, including Prompt Drivers that manage textual interactions with LLMs, and Embedding Drivers that generate vector embeddings from text. Engines wrap these drivers to provide use-case-specific functionalities, such as the RAG Engine for enhanced retrieval capabilities.
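Swapping the prompt driver, for instance, is how you pin a structure to a specific model. The sketch below is illustrative; the exact keyword arguments may vary slightly across Griptape versions.

from griptape.drivers import OpenAiChatPromptDriver
from griptape.structures import Agent

# Pin the agent to a specific OpenAI chat model via a Prompt Driver.
agent = Agent(prompt_driver=OpenAiChatPromptDriver(model="gpt-4o-mini"))
agent.run("List three emerging micro-markets in Gurgaon real estate.")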

The RAG Engine is a key component of Griptape that facilitates modular Retrieval-Augmented Generation pipelines. It allows applications to retrieve relevant information from external sources and combine it with generative capabilities, resulting in more accurate and contextually aware outputs.

Comparing Griptape vs. LangChain

While both Griptape and LangChain offer frameworks for implementing RAG pipelines, they differ significantly in design philosophy and functionality.

Architecture Differences

  • Modularity: Griptape emphasizes a highly modular architecture that allows developers to create custom workflows easily. Each component (agents, pipelines, workflows) serves a distinct purpose, allowing for greater flexibility in application design.
  • Integration Approach: LangChain also supports modularity but focuses more on chaining components together in a linear fashion. While its chaining approach provides flexibility, it may not offer the same level of customization as Griptape’s structures.

Memory Management

Griptape’s memory management is particularly innovative due to its Task Memory feature. This allows large outputs to be stored separately from prompts sent to LLMs. In contrast, LangChain typically manages memory differently without this specific focus on separating task outputs from prompts.

Tooling Flexibility

Griptape provides an extensive range of built-in tools tailored for various tasks (e.g., web scraping and file management). Developers can easily create custom tools as needed. While LangChain also supports custom components, its out-of-the-box tooling may not be as diverse as what Griptape offers.

Hands-On Python Implementation of a Multi-Agentic System using GripTape

In the following steps, we will explore how to automate sending a blog to potential real estate buyers in Gurgaon by building a multi-agent system with GripTape.

Step 1. Installing Necessary Libraries

!pip install "griptape[all]" -U

Step 2. Importing Necessary Libraries & Defining OpenAI Keys

from griptape.drivers import DuckDuckGoWebSearchDriver, LocalStructureRunDriver
from griptape.rules import Rule, Ruleset
from griptape.structures import Agent, Workflow
from griptape.tasks import PromptTask, StructureRunTask
from griptape.tools import (
    PromptSummaryTool,
    WebScraperTool,
    WebSearchTool,
)
import os

os.environ["OPENAI_API_KEY"] = ""  # add your OpenAI API key here

Step 3. Defining Writer & Researcher Agents

Defining the Writers

WRITERS = [
    {
        "role": "Luxury Blogger",
        "goal": "Inspire Luxury with stories of royal things that people aspire",
        "backstory": "You bring aspirational and luxurious things to your audience through vivid storytelling and personal anecdotes.",
    },
    {
        "role": "Lifestyle Freelance Writer",
        "goal": "Share practical advice on living a balanced and stylish life",
        "backstory": "From the latest trends in home decor to tips for wellness, your articles help readers create a life that feels both aspirational and attainable.",
    },
]

We define two writers here: a luxury blogger and a lifestyle freelance writer. Each is given a goal and a backstory, since we want the system to roll out blogs on Gurgaon real estate updates from both a luxury and a lifestyle perspective.

Defining the Researcher Agent

def build_researcher() -> Agent:
    """Builds a Researcher Structure."""
    return Agent(
        id="researcher",
        tools=[
            WebSearchTool(
                web_search_driver=DuckDuckGoWebSearchDriver(),
            ),
            WebScraperTool(
                off_prompt=True,
            ),
            PromptSummaryTool(off_prompt=False),
        ],
        rulesets=[
            Ruleset(
                name="Position",
                rules=[
                    Rule(
                        value="Lead Real Estate Analyst",
                    )
                ],
            ),
            Ruleset(
                name="Objective",
                rules=[
                    Rule(
                        value="Discover Real Estate advancements in and around Delhi NCR",
                    )
                ],
            ),
            Ruleset(
                name="Background",
                rules=[
                    Rule(
                        value="""You are part of a Real Estate Brokering Company.
                        Your speciality is spotting new trends in Real Estate for buyers and sellers.
                        You excel at analyzing intricate data and delivering practical insights."""
                    )
                ],
            ),
            Ruleset(
                name="Desired Outcome",
                rules=[
                    Rule(
                        value="Comprehensive analysis report in list format",
                    )
                ],
            ),
        ],
    )

The function `build_researcher()` creates an `Agent` object, which represents a virtual researcher with specialized tools and rulesets.

  1. Tools: The agent is equipped with three tools:
    • WebSearchTool, which uses the DuckDuckGo web search driver to retrieve online data.
    • WebScraperTool, with off_prompt=True so scraped content stays in Task Memory rather than being sent back in the LLM prompt.
    • PromptSummaryTool, for summarizing results.
  2. Rulesets: The agent’s behaviour is governed by four rulesets:
    • Position: Specifies the role as “Lead Real Estate Analyst”.
    • Objective: Directs the agent to discover real estate advancements around Delhi NCR.
    • Background: Defines the agent’s background in real estate, with expertise in spotting trends and providing insights.
    • Desired Outcome: Requests a comprehensive analysis report in list format.

This setup defines a well-structured virtual researcher for real estate analysis.
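Before wiring the researcher into the workflow, you can sanity-check the structure on its own. A quick, hedged usage sketch (web access and an OpenAI key assumed; output will vary from run to run):

researcher = build_researcher()
researcher.run("Summarize two notable residential launches in Gurgaon this quarter.")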

Defining the Writer Agent

def build_writer(role: str, goal: str, backstory: str) -> Agent:
    """Builds a Writer Structure.

    Args:
        role: The role of the writer.
        goal: The goal of the writer.
        backstory: The backstory of the writer.
    """
    return Agent(
        id=role.lower().replace(" ", "_"),
        rulesets=[
            Ruleset(
                name="Position",
                rules=[
                    Rule(
                        value=role,
                    )
                ],
            ),
            Ruleset(
                name="Objective",
                rules=[
                    Rule(
                        value=goal,
                    )
                ],
            ),
            Ruleset(
                name="Backstory",
                rules=[Rule(value=backstory)],
            ),
            Ruleset(
                name="Desired Outcome",
                rules=[
                    Rule(
                        value="Full blog post of at least 4 paragraphs",
                    )
                ],
            ),
        ],
    )

The function `build_writer()` creates an `Agent` object representing a writer with customizable attributes based on the provided inputs:

  1. Arguments:
    • role: The writer’s role (e.g., “Content Writer”).
    • goal: The writer’s goal (e.g., “Write an informative article on real estate trends”).
    • backstory: The writer’s background (e.g., “Experienced journalist with a focus on property news”).
  2. Agent Setup
    • ID: The agent’s ID is generated by converting the role to lowercase and replacing spaces with underscores (e.g., “content_writer”).
    • Rulesets: The agent is defined by four rulesets:
      • Position: The writer’s role.
      • Objective: The writer’s goal.
      • Backstory: The writer’s background story.
      • Desired Outcome: Specifies the desired output, which is a full blog post of at least 4 paragraphs.

This function generates a writer agent tailored to a specific role, objective, and background.
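Similarly, a single writer can be instantiated and tested directly from the WRITERS list before it is used inside the workflow; a hedged standalone check might look like this:

# Build the Luxury Blogger writer from the first WRITERS entry and run it once.
luxury_writer = build_writer(**WRITERS[0])
luxury_writer.run("Write a short blog intro about luxury high-rises on Golf Course Road, Gurgaon.")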

Step 4. Defining Tasks

team = Workflow()
research_task = team.add_task(
        StructureRunTask(
            (
                """Perform a detailed examination of the newest developments in Real Estate Updates in gurgaon as of 2025.
                Pinpoint major trends, new upcoming properties and any projections.""",
            ),
            id="research",
            structure_run_driver=LocalStructureRunDriver(
                create_structure=build_researcher,
            ),
        ),
    )

writer_tasks = team.add_tasks(
        *[
            StructureRunTask(
                (
                    """Using insights provided, develop an engaging blog
                post that highlights the most significant real estate updates of Gurgaon.
                Your post should be informative yet accessible, catering to a general audience.
                Make it sound cool, avoid complex words.

                Insights:
                {{ parent_outputs["research"] }}""",
                ),
                structure_run_driver=LocalStructureRunDriver(
                    create_structure=lambda writer=writer: build_writer(
                        role=writer["role"],
                        goal=writer["goal"],
                        backstory=writer["backstory"],
                    )
                ),
                parent_ids=[research_task.id],
            )
            for writer in WRITERS
        ]
    )

end_task = team.add_task(
        PromptTask(
            'State "All Done!"',
            parent_ids=[writer_task.id for writer_task in writer_tasks],
        )
    )

This code defines a workflow involving multiple tasks using a `Workflow` object. Here’s a breakdown of its components:

  1. team = Workflow():  A new workflow object `team` is created to manage and coordinate the sequence of tasks.
  2. Research Task:
    • research_task = team.add_task(…): A task is added to the workflow for researching real estate updates in Gurgaon for 2025.
    • StructureRunTask: This task involves running a structure with a description to examine real estate trends, pinpoint new properties, and projections.
    • LocalStructureRunDriver(create_structure=build_researcher): The `build_researcher` function is used to create the structure for this task, ensuring the task is focused on gathering research insights.
  3. Writer Tasks:
    • writer_tasks = team.add_tasks(…): Multiple writer tasks are added, one for each writer in the `WRITERS` list.
    • StructureRunTask: Each task uses the insights gathered in the research task to write a blog post about real estate developments in Gurgaon.
    • parent_ids=[research_task.id]: These tasks depend on the completion of the research task, which provides the required insights for writing.
    • structure_run_driver=LocalStructureRunDriver(create_structure=lambda writer=writer: build_writer(…)): The `build_writer` function is used to create the writing structure, dynamically using the writer’s role, goal, and backstory.
  4. End Task:
    • end_task = team.add_task(PromptTask(…)): A final task is added to the workflow, which will output “All Done!” after all writer tasks are completed.
    • parent_ids=[writer_task.id for writer_task in writer_tasks]: This task depends on the completion of all writer tasks.

In summary, this code sets up a multi-step workflow with a research task, multiple writing tasks (one per writer), and a final task that marks the completion of the process. The writer tasks depend only on the research task and can run in parallel with each other, while the final task waits for all writers to finish, ensuring a structured execution.

Step 5. Executing the Task

team.run()
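If you want the generated posts programmatically rather than from the console logs, one possible pattern (hedged; attribute names may differ across Griptape versions) is to read each writer task's output artifact after the run:

# After team.run() completes, each task exposes its output artifact;
# the writer tasks' values are the two finished blog posts.
for task in writer_tasks:
    print(f"--- {task.id} ---")
    print(task.output.value)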

Outputs

Output From Real Estate Researcher Agent


As we can see, the Real Estate Researcher Agent has extracted many key points about the Gurgaon real estate market in list format. It has identified upcoming projects and developments along with other characteristics like current trends, buyer preferences, and growth and infrastructure.

Output From Luxury Blogger Agent


As we can see from the output, the Luxury Blogger agent has used the extracted information and crafted it well to highlight Gurgaon’s luxury property details. From a utility point of view, this blog can be forwarded automatically to all potential buyers looking for luxury apartments, with or without human intervention.

Output From Lifestyle Freelance Writer Agent


As we can see from the output, the Lifestyle Freelance Writer agent has used the extracted information and crafted it well to highlight how the new real estate developments can enhance people’s lifestyles as well. From a utility point of view, this blog can be forwarded automatically to all potential buyers, with or without human intervention.

Following is another Python implementation, this time of a Retrieval-Augmented Generation (RAG) system using GripTape. The modular architecture of GripTape makes it very easy and seamless to integrate a RAG system.

Hands-On Python Implementation for RAG using GripTape

Step 1. Importing Necessary Libraries & Defining OpenAI Keys

import requests
from griptape.chunkers import TextChunker
from griptape.drivers import LocalVectorStoreDriver, OpenAiChatPromptDriver, OpenAiEmbeddingDriver
from griptape.engines.rag import RagEngine
from griptape.engines.rag.modules import PromptResponseRagModule, VectorStoreRetrievalRagModule
from griptape.engines.rag.stages import ResponseRagStage, RetrievalRagStage
from griptape.loaders import PdfLoader
from griptape.structures import Agent
from griptape.tools import RagTool
from griptape.utils import Chat
import os
os.environ['OPENAI_API_KEY'] = ''

Step 2. Defining Tools & Engines

#Defining a Namespace
namespace = "Phi4"
response = requests.get("https://arxiv.org/pdf/2412.08905")

#Defining Vector Store, Engine, Tool
vector_store = LocalVectorStoreDriver(embedding_driver=OpenAiEmbeddingDriver())
engine = RagEngine(
    retrieval_stage=RetrievalRagStage(
        retrieval_modules=[
            VectorStoreRetrievalRagModule(
                vector_store_driver=vector_store, query_params={"namespace": namespace, "top_n": 20}
            )
        ]
    ),
    response_stage=ResponseRagStage(
        response_modules=[PromptResponseRagModule(prompt_driver=OpenAiChatPromptDriver(model="gpt-4o"))]
    ),
)
rag_tool = RagTool(
    description="Contains information about the Phi4 model "
    "Use it to answer any related questions.",
    rag_engine=engine,
)
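This block wires up the two stages of the RAG Engine: the retrieval stage pulls up to 20 matching chunks from the local vector store under the "Phi4" namespace, and the response stage passes them to gpt-4o through a prompt driver. The RagTool then wraps the engine with a description so an agent can decide when to call it.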

Step 3. Loading Data, Chunking and Appending to Vector Store

artifacts = PdfLoader().parse(response.content)
chunks = TextChunker().chunk(artifacts)

vector_store.upsert_text_artifacts({namespace: chunks})
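Here the PDF bytes downloaded earlier are parsed into text artifacts, split into chunks, and upserted into the local vector store under the "Phi4" namespace, making them available to the retrieval module defined in Step 2.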

Step 4. Defining the Agent and Running a Query

agent = Agent(tools=[rag_tool])

agent.run("What is the post training method in Phi 4?")

Output


As we can see from the above output, it has correctly retrieved the post-training methods mentioned in the Phi-4 paper, such as Pivotal Token Search and judge-guided DPO, among others.

Also, from a code perspective, we could set up this RAG engine in very few lines owing to GripTape’s modular design. This is one of the highlights that sets GripTape apart from other frameworks.

Conclusion

GripTape’s modular architecture and core components provide a robust foundation for developing flexible and scalable AI applications. With features like advanced memory management, customizable tools, and integration capabilities, it offers significant advantages for developers looking to build sophisticated workflows. By emphasizing modularity and task-specific components, GripTape ensures a high level of customization and efficiency, setting it apart from other frameworks in the AI development landscape.

Key Takeaways

  • GripTape’s modular framework lets developers build scalable AI apps by combining agents, pipelines, and workflows.
  • It offers advanced memory management: Conversation Memory, Task Memory, and Meta Memory manage context and sensitive data while preventing token overflow.
  • GripTape provides customizable tooling, with built-in tools and support for custom ones, improving how LLMs interact with external data.
  • It includes an efficient RAG Engine that retrieves external information and combines it with LLM capabilities for accurate, contextually aware outputs.
  • GripTape supports seamless integration through drivers such as prompt, embedding, and SQL drivers, adapting to various use cases.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Frequently Asked Questions

Q1. What makes GripTape’s modular architecture unique?

A. GripTape’s modular design allows developers to create highly customizable and flexible workflows by combining distinct components like agents, pipelines, and workflows. This architecture offers greater flexibility in application development compared to other frameworks like LangChain.

Q2. How does GripTape manage memory during AI interactions?

A. GripTape uses advanced memory management features, including Conversation Memory, Task Memory, and Meta Memory. These capabilities help retain context across interactions, prevent token overflow by keeping large outputs separate, and enhance the relevance of interactions by passing additional metadata.

Q3. What types of tools are available in GripTape?

A. GripTape provides a wide range of built-in tools, such as web scraping, file management, and prompt summarization tools. Developers can also easily create custom tools to meet specific needs, making it highly adaptable for different use cases.

Q4. What is GripTape’s Off-Prompt™ technology?

A. GripTape’s Off-Prompt™ technology enhances memory and task output management by keeping large or sensitive data separate from the prompt sent to the language model, preventing token overflow.

Q5. What types of drivers does GripTape include?

A. GripTape includes various drivers for facilitating interactions with external resources. These include Prompt Drivers for managing textual interactions, Embedding Drivers for generating vector embeddings, SQL Drivers, and Web Search Drivers for integrating external data sources.

Nibedita completed her master’s in Chemical Engineering from IIT Kharagpur in 2014 and is currently working as a Senior Data Scientist. In her current capacity, she works on building intelligent ML-based solutions to improve business processes.
