Multi-agent systems (MAS) represent a transformative approach in artificial intelligence, where multiple autonomous agents collaborate to solve complex problems and achieve shared goals. These systems excel in scenarios requiring specialization and adaptability, making them ideal for applications ranging from automated trading to multi-robot coordination. With the advent of Griptape, creating multi-agent systems has become more accessible than ever. Griptape simplifies development by providing a robust framework that lets developers design, manage, and scale agent-based applications, enabling seamless communication and coordination among agents.
Griptape is a modular Python framework designed for developing AI applications that use large language models (LLMs). Its architecture is built around several core components that facilitate the creation of flexible and scalable workflows. Griptape sets itself apart through its modular design, innovative Off-Prompt™ technology, strong integration capabilities with LLMs, comprehensive documentation, community support, and flexibility in use cases. These features collectively enhance the development experience and empower users to create robust AI applications efficiently.
AI agents are specialized programs or models designed to perform tasks autonomously using LLMs, often mimicking human decision-making, reasoning, and learning. They interact with users or systems, learn from data, adapt to new information, and execute specific functions within a defined scope, such as customer support, process automation, or complex data analysis. With Griptape, creating a multi-agent system becomes straightforward.
At the heart of Griptape are its core components, which work together to create a flexible, efficient, and powerful environment for developers.
Drivers and Engines: Various drivers facilitate interactions with external resources, including prompt drivers, embedding drivers, SQL drivers, and web search drivers, while engines wrap these drivers to provide use-case-specific functionality.
Observability and Rulesets: Observability supports event tracking and logging for performance monitoring, while rulesets guide LLM behavior with minimal prompt engineering.
Griptape’s architecture is built around modular components that allow developers to create flexible and scalable applications. The core structures include agents, pipelines, and workflows, among others.
Tasks are the fundamental building blocks within Griptape, enabling interaction with various engines and tools. Griptape provides a variety of built-in tools, such as Web Scraper Tools, File Manager Tools, and Prompt Summary Tools. Developers can also create custom tools tailored to their specific needs.
Griptape features advanced memory management capabilities that enhance user interactions. Conversation Memory helps retain and retrieve information across interactions. Task Memory keeps large or sensitive outputs off the prompt sent to the LLM, preventing token limit overflow. Meta Memory allows passing additional metadata to the LLM, improving context relevance.
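To make this concrete, here is a minimal sketch (not part of the original walkthrough) of enabling Conversation Memory on an agent so that a follow-up prompt can reference an earlier exchange; the import path shown is the one used in recent Griptape releases and may differ slightly in yours.
from griptape.memory.structure import ConversationMemory
from griptape.structures import Agent

# Conversation Memory lets the agent recall earlier turns in the same session.
memory_agent = Agent(conversation_memory=ConversationMemory())
memory_agent.run("My name is Asha and I am comparing flats in Gurgaon.")
memory_agent.run("What is my name?")  # answered from Conversation Memory, not the new prompt alone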
Griptape includes various drivers that facilitate interactions with external resources, including Prompt Drivers that manage textual interactions with LLMs, and Embedding Drivers that generate vector embeddings from text. Engines wrap these drivers to provide use-case-specific functionalities, such as the RAG Engine for enhanced retrieval capabilities.
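As a small illustration (an assumption-based sketch rather than code from the article), a prompt driver can be swapped in when constructing an agent; the model name here is only an example, and the exact constructor arguments may vary between Griptape versions.
from griptape.drivers import OpenAiChatPromptDriver
from griptape.structures import Agent

# The prompt driver decides which LLM the agent talks to and how.
driver_agent = Agent(prompt_driver=OpenAiChatPromptDriver(model="gpt-4o"))
driver_agent.run("Summarize the latest trends in Indian real estate in two sentences.")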
The RAG Engine is a key component of Griptape that facilitates modular Retrieval-Augmented Generation pipelines. It allows applications to retrieve relevant information from external sources and combine it with generative capabilities, resulting in more accurate and contextually aware outputs.
While both Griptape and LangChain offer frameworks for implementing RAG pipelines, they differ significantly in design philosophy and functionality.
Griptape’s memory management is particularly innovative due to its Task Memory feature. This allows large outputs to be stored separately from prompts sent to LLMs. In contrast, LangChain typically manages memory differently without this specific focus on separating task outputs from prompts.
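The researcher agent later in this article uses exactly this pattern; as a preview, here is a minimal hedged sketch of Task Memory in action, where a tool's large output stays off the prompt and a summary tool reads it back (the URL is a placeholder).
from griptape.structures import Agent
from griptape.tools import PromptSummaryTool, WebScraperTool

off_prompt_agent = Agent(
    tools=[
        WebScraperTool(off_prompt=True),      # scraped page content is kept in Task Memory, off the prompt
        PromptSummaryTool(off_prompt=False),  # the summary is allowed back into the prompt
    ]
)
off_prompt_agent.run("Summarize https://example.com in three bullet points.")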
Griptape provides an extensive range of built-in tools tailored for various tasks (e.g., web scraping and file management). Developers can easily create custom tools as needed. While LangChain also supports custom components, its out-of-the-box tooling may not be as diverse as what Griptape offers.
In the following steps, we will build a multi-agent system with Griptape that automates writing a blog post for potential real estate buyers in Gurgaon.
!pip install "griptape[all]" -U
import os

from griptape.drivers import DuckDuckGoWebSearchDriver, LocalStructureRunDriver
from griptape.rules import Rule, Ruleset
from griptape.structures import Agent, Workflow
from griptape.tasks import PromptTask, StructureRunTask
from griptape.tools import (
    PromptSummaryTool,
    WebScraperTool,
    WebSearchTool,
)

os.environ["OPENAI_API_KEY"] = ""  # set your OpenAI API key here
WRITERS = [
    {
        "role": "Luxury Blogger",
        "goal": "Inspire luxury with stories of royal things that people aspire to",
        "backstory": "You bring aspirational and luxurious things to your audience through vivid storytelling and personal anecdotes.",
    },
    {
        "role": "Lifestyle Freelance Writer",
        "goal": "Share practical advice on living a balanced and stylish life",
        "backstory": "From the latest trends in home decor to tips for wellness, your articles help readers create a life that feels both aspirational and attainable.",
    },
]
We define two writers here: a Luxury Blogger and a Lifestyle Freelance Writer. Each entry specifies a goal and a backstory, since we want the system to produce blog posts on Gurgaon real estate updates from both a luxury and a lifestyle perspective.
def build_researcher() -> Agent:
    """Builds a Researcher Structure."""
    return Agent(
        id="researcher",
        tools=[
            WebSearchTool(
                web_search_driver=DuckDuckGoWebSearchDriver(),
            ),
            WebScraperTool(
                off_prompt=True,
            ),
            PromptSummaryTool(off_prompt=False),
        ],
        rulesets=[
            Ruleset(
                name="Position",
                rules=[
                    Rule(
                        value="Lead Real Estate Analyst",
                    )
                ],
            ),
            Ruleset(
                name="Objective",
                rules=[
                    Rule(
                        value="Discover Real Estate advancements in and around Delhi NCR",
                    )
                ],
            ),
            Ruleset(
                name="Background",
                rules=[
                    Rule(
                        value="""You are part of a Real Estate Brokering Company.
                        Your specialty is spotting new trends in Real Estate for buyers and sellers.
                        You excel at analyzing intricate data and delivering practical insights."""
                    )
                ],
            ),
            Ruleset(
                name="Desired Outcome",
                rules=[
                    Rule(
                        value="Comprehensive analysis report in list format",
                    )
                ],
            ),
        ],
    )
The function `build_researcher()` creates an `Agent` object, which represents a virtual researcher with specialized tools and rulesets.
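Before wiring it into a workflow, the researcher can be sanity-checked on its own; the query below is purely illustrative.
# Quick standalone test of the researcher agent (illustrative only).
researcher = build_researcher()
researcher.run("List three notable residential projects recently announced in Gurgaon.")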
def build_writer(role: str, goal: str, backstory: str) -> Agent:
    """Builds a Writer Structure.

    Args:
        role: The role of the writer.
        goal: The goal of the writer.
        backstory: The backstory of the writer.
    """
    return Agent(
        id=role.lower().replace(" ", "_"),
        rulesets=[
            Ruleset(
                name="Position",
                rules=[
                    Rule(
                        value=role,
                    )
                ],
            ),
            Ruleset(
                name="Objective",
                rules=[
                    Rule(
                        value=goal,
                    )
                ],
            ),
            Ruleset(
                name="Backstory",
                rules=[Rule(value=backstory)],
            ),
            Ruleset(
                name="Desired Outcome",
                rules=[
                    Rule(
                        value="Full blog post of at least 4 paragraphs",
                    )
                ],
            ),
        ],
    )
The function `build_writer()` creates an `Agent` object representing a writer, tailored to the specific role, goal, and backstory passed in.
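As with the researcher, a writer agent can be tried standalone before it joins the workflow; the prompt below is illustrative only.
# Build a writer from the first WRITERS entry and give it a quick test prompt.
writer_agent = build_writer(**WRITERS[0])
writer_agent.run("Write a short paragraph on luxury high-rise living in Gurgaon.")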
team = Workflow()

research_task = team.add_task(
    StructureRunTask(
        (
            """Perform a detailed examination of the newest developments in Real Estate Updates in Gurgaon as of 2025.
            Pinpoint major trends, new upcoming properties and any projections.""",
        ),
        id="research",
        structure_run_driver=LocalStructureRunDriver(
            create_structure=build_researcher,
        ),
    ),
)

writer_tasks = team.add_tasks(
    *[
        StructureRunTask(
            (
                """Using insights provided, develop an engaging blog
                post that highlights the most significant real estate updates of Gurgaon.
                Your post should be informative yet accessible, catering to a general audience.
                Make it sound cool, avoid complex words.
                Insights:
                {{ parent_outputs["research"] }}""",
            ),
            structure_run_driver=LocalStructureRunDriver(
                # bind the current writer dict so each task builds its own writer agent
                create_structure=lambda writer=writer: build_writer(
                    role=writer["role"],
                    goal=writer["goal"],
                    backstory=writer["backstory"],
                )
            ),
            parent_ids=[research_task.id],
        )
        for writer in WRITERS
    ]
)

end_task = team.add_task(
    PromptTask(
        'State "All Done!"',
        parent_ids=[writer_task.id for writer_task in writer_tasks],
    )
)
This code defines a workflow involving multiple tasks using a `Workflow` object. The research task runs the researcher structure; one writing task is added per entry in WRITERS, each receiving the research output through the `{{ parent_outputs["research"] }}` template variable; and a final `PromptTask` simply states "All Done!" once every writer has finished.
In summary, this code sets up a multi-step workflow with a research task, multiple writing tasks (one per writer), and a final task that marks completion. The writer tasks all depend on the research task, and the end task depends on all writer tasks, so execution fans out to the writers and fans back in rather than running strictly one task after another.
team.run()
As we can see, the Real Estate Researcher agent has extracted many key points about the Gurgaon real estate market in list format. It has identified upcoming projects and developments, along with other characteristics such as current trends, buyer preferences, and growth and infrastructure.
As we can see from the output, the Luxury Blogger agent has used the extracted information to craft a post that highlights the luxury property details of Gurgaon. From a utility point of view, this blog can be forwarded automatically to all potential buyers looking for luxury apartments, with or without human intervention.
Similarly, the Lifestyle Freelance Writer agent has used the extracted information to highlight how the new real estate developments can also enhance residents' lifestyles. This post too can be forwarded automatically to potential real estate buyers, with or without human intervention.
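If you want to forward these posts automatically, for example by email, the finished text can be read back from the workflow once it completes; the snippet below is a hedged sketch of one way to collect it.
# Collect each writer's finished blog post after team.run() (illustrative).
for task in writer_tasks:
    print(f"--- {task.id} ---")
    print(task.output.value)  # the generated blog post text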
The following is another Python implementation, this time of a Retrieval-Augmented Generation (RAG) system built with Griptape. Griptape's modular architecture makes it straightforward to assemble such a pipeline.
import requests
from griptape.chunkers import TextChunker
from griptape.drivers import LocalVectorStoreDriver, OpenAiChatPromptDriver, OpenAiEmbeddingDriver
from griptape.engines.rag import RagEngine
from griptape.engines.rag.modules import PromptResponseRagModule, VectorStoreRetrievalRagModule
from griptape.engines.rag.stages import ResponseRagStage, RetrievalRagStage
from griptape.loaders import PdfLoader
from griptape.structures import Agent
from griptape.tools import RagTool
from griptape.utils import Chat
import os
os.environ['OPENAI_API_KEY'] = ''
# Defining a namespace
namespace = "Phi4"

# Downloading the Phi-4 technical report PDF from arXiv
response = requests.get("https://arxiv.org/pdf/2412.08905")

# Defining the vector store, RAG engine, and tool
vector_store = LocalVectorStoreDriver(embedding_driver=OpenAiEmbeddingDriver())

engine = RagEngine(
    retrieval_stage=RetrievalRagStage(
        retrieval_modules=[
            VectorStoreRetrievalRagModule(
                vector_store_driver=vector_store,
                query_params={"namespace": namespace, "top_n": 20},
            )
        ]
    ),
    response_stage=ResponseRagStage(
        response_modules=[
            PromptResponseRagModule(prompt_driver=OpenAiChatPromptDriver(model="gpt-4o"))
        ]
    ),
)
rag_tool = RagTool(
    description="Contains information about the Phi4 model. "
    "Use it to answer any related questions.",
    rag_engine=engine,
)
Step 4. Loading Data, Chunking and Appending to Vector Store
artifacts = PdfLoader().parse(response.content)
chunks = TextChunker().chunk(artifacts)
vector_store.upsert_text_artifacts({namespace: chunks})
agent = Agent(tools=[rag_tool])
agent.run("What is the post training method in Phi 4?")
As we can see from the above output, the agent has correctly retrieved the post-training methods mentioned in the Phi-4 paper, such as Pivotal Token Search and judge-guided DPO, among others.
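Since Chat is imported above, the same RAG-backed agent can also be dropped into an interactive loop for follow-up questions about the paper; this usage is a small illustrative addition.
# Optional: interactive question-answering over the same agent.
Chat(agent).start()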
From a code perspective, note how few lines it took to set up this RAG engine, thanks to Griptape's modular design. This is one of the highlights that sets Griptape apart from other frameworks.
Griptape's modular architecture and core components provide a robust foundation for developing flexible and scalable AI applications. With features like advanced memory management, customizable tools, and integration capabilities, it offers significant advantages for developers looking to build sophisticated workflows. By emphasizing modularity and task-specific components, Griptape ensures a high level of customization and efficiency, setting it apart from other frameworks in the AI development landscape.
Q. What makes Griptape's modular design different from other frameworks?
A. Griptape's modular design allows developers to create highly customizable and flexible workflows by combining distinct components like agents, pipelines, and workflows. This architecture offers greater flexibility in application development compared to other frameworks like LangChain.
Q. How does Griptape handle memory management?
A. Griptape uses advanced memory management features, including Conversation Memory, Task Memory, and Meta Memory. These capabilities help retain context across interactions, prevent token overflow by keeping large outputs separate, and enhance the relevance of interactions by passing additional metadata.
Q. What tools does Griptape provide out of the box?
A. Griptape provides a wide range of built-in tools, such as web scraping, file management, and prompt summarization tools. Developers can also easily create custom tools to meet specific needs, making it highly adaptable for different use cases.
Q. What is Griptape's Off-Prompt™ technology?
A. Griptape's Off-Prompt™ technology enhances memory and task output management by keeping large or sensitive data separate from the prompt sent to the language model, preventing token overflow.
Q. Which drivers does Griptape include?
A. Griptape includes various drivers for facilitating interactions with external resources. These include Prompt Drivers for managing textual interactions, Embedding Drivers for generating vector embeddings, SQL Drivers, and Web Search Drivers for integrating external data sources.