Don’t want to spend money on APIs? Concerned about privacy? Or do you simply want to run LLMs locally? This guide will help you build agents and multi-agent systems with local LLMs that are completely free. We’ll build a multi-agent system with CrewAI and Ollama and look at the range of LLMs Ollama makes available.
Generative AI has moved beyond basic large language models (LLMs) to advanced multi-agent systems. Agents are autonomous systems capable of planning, reasoning, and acting with minimal human input; they aim to reduce human involvement while expanding functionality.
Also Read: Top 5 Frameworks for Building AI Agents in 2024
These frameworks use multiple agents working in concert, allowing collaboration, communication, and problem-solving that exceed the capabilities of a single agent. Each agent has a distinct role and goal and can perform complex tasks. Multi-agent frameworks such as CrewAI, combined with local model runtimes like Ollama, are well suited to large-scale, dynamic, and distributed problem-solving, making them adaptable across industries like robotics, finance, healthcare, and beyond.
These frameworks enable modular and scalable systems, making it easy to modify or add agents as requirements evolve.
CrewAI is an advanced multi-agent framework that lets multiple agents (called a “crew”) collaborate through task orchestration. The framework defines each agent by three attributes: role, goal, and backstory, ensuring a thorough understanding of each agent’s function. This structured approach mitigates the risk of under-specification and improves task definition and execution.
Also Read: Building Collaborative AI Agents With CrewAI
Ollama is a framework for building and running language models on your local machine. It’s easy to use: models run directly on the device without any cloud-based service, so your data never leaves your machine and privacy is not a concern.
To interact with Ollama:
Run pip install ollama to integrate Ollama with Python.
Then download models with the ollama pull command.
Let’s run these:
ollama pull llama2
ollama pull llama3
ollama pull llava
Now we have three Large Language Models (LLMs) available locally: llama2, llama3, and llava.
We can use these models locally by running ollama run <model-name>, for example ollama run llama2. Press Ctrl + D to exit the interactive session.
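If you would rather call a local model from Python than from the terminal, the ollama package exposes a simple chat API. Here is a minimal sketch, assuming the Ollama service is running and llama2 has already been pulled (the prompt is just a placeholder):
import ollama

# Send a single user message to the locally running llama2 model
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "Give me one fun fact about raccoons."}]
)

# The reply text lives under message -> content
print(response["message"]["content"])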
Also read: How to Run LLM Models Locally with Ollama?
Let’s work on building an agentic system that takes an image as input and returns a few interesting facts about the animal in the image.
By default, CrewAI executes tasks sequentially; you can switch to a hierarchical process with a manager to control the order of execution. Setting allow_delegation=True lets an agent delegate work or questions to other agents, for example to ask a preceding agent to regenerate a response. Setting memory=True enables agents to learn from past interactions, and you can optionally configure a task to ask for human feedback on its output.
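As a rough sketch of how these options fit together (the agent, task, and parameter values below are illustrative, and defaults can vary between CrewAI versions; memory may also need a local embedder configured to stay fully offline):
from crewai import Agent, Task, Crew, Process

# Illustrative agent that is allowed to delegate work to other agents
researcher = Agent(
    role="Researcher",
    goal="Collect facts about a topic",
    backstory="A careful fact collector.",
    llm="ollama/llama2",
    allow_delegation=True   # can hand work or questions to other agents
)

# Illustrative task that pauses for human review of the output
research_task = Task(
    description="Find three facts about raccoons.",
    expected_output="Three short, verifiable facts.",
    agent=researcher,
    human_input=True        # ask the user for feedback before finishing
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential,  # default; Process.hierarchical adds a manager
    memory=True,                 # remember context from earlier interactions
    verbose=True
)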
Before we start, let’s install all the necessary packages:
pip install crewai
pip install 'crewai[tools]'
pip install ollama
from crewai import Agent, Task, Crew
import pkg_resources
# Get the version of CrewAI
crewai_version = pkg_resources.get_distribution("crewai").version
print(crewai_version)
0.61.0
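Note that pkg_resources is deprecated in recent setuptools releases; if your environment warns about it, the standard-library importlib.metadata gives the same information:
from importlib.metadata import version

# Same version check without the deprecated pkg_resources API
print(version("crewai"))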
Here, we define three agents with specific roles and goals. Each agent is responsible for a task related to image classification and description.
# 1. Image Classifier Agent (to check if the image is an animal)
classifier_agent = Agent(
    role="Image Classifier Agent",
    goal="Determine if the image is of an animal or not",
    backstory="""
    You have an eye for animals! Your job is to identify whether the input image is of an animal
    or something else.
    """,
    llm='ollama/llava:7b'  # Model for image-related tasks
)

# 2. Animal Description Agent (to describe the animal in the image)
description_agent = Agent(
    role="Animal Description Agent {image_path}",
    goal="Describe the animal in the image",
    backstory="""
    You love nature and animals. Your task is to describe any animal based on an image.
    """,
    llm='ollama/llava:7b'  # Model for image-related tasks
)

# 3. Information Retrieval Agent (to fetch additional info about the animal)
info_agent = Agent(
    role="Information Agent",
    goal="Give compelling information about a certain animal",
    backstory="""
    You are very good at telling interesting facts.
    You don't give any wrong information if you don't know it.
    """,
    llm='ollama/llama2'  # Model for general knowledge retrieval
)
Also Read: Agentic Frameworks for Generative AI Applications
Each task is tied to one of the agents. Tasks describe the input, the expected output, and which agent should handle it.
# Task 1: Check if the image is an animal
task1 = Task(
    description="Classify the image ({image_path}) and tell me if it's an animal.",
    expected_output="If it's an animal, say 'animal'; otherwise, say 'not an animal'.",
    agent=classifier_agent
)

# Task 2: If it's an animal, describe it
task2 = Task(
    description="Describe the animal in the image.({image_path})",
    expected_output="Give a detailed description of the animal.",
    agent=description_agent
)

# Task 3: Provide more information about the animal
task3 = Task(
    description="Give additional information about the described animal.",
    expected_output="Provide at least 5 interesting facts or information about the animal.",
    agent=info_agent
)
A Crew is set up to manage the agents and their tasks. It runs the tasks sequentially, passing each agent’s output to the next, and returns the final result.
# Crew to manage the agents and tasks
crew = Crew(
    agents=[classifier_agent, description_agent, info_agent],
    tasks=[task1, task2, task3],
    verbose=True
)

# Execute the tasks with the provided image path
result = crew.kickoff(inputs={'image_path': 'racoon.jpg'})
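To inspect the consolidated result outside the verbose logs, you can print what kickoff() returns; in recent CrewAI versions (including 0.61.0) this should be a CrewOutput object, though the exact attributes may vary by version:
# Print the final crew output (stringifies to the last task's result)
print(result)

# If your CrewAI version returns a CrewOutput, per-task results are also available:
# print(result.tasks_output)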
I gave an image of a raccoon (racoon.jpg) to the CrewAI framework, and this is the output I got:
Note: Ensure the image is in the working directory, or provide its full path.
OUTPUT
# Agent: Image Classifier Agent
## Task: Classify the image (racoon.jpg) and tell me if it's an animal.
# Agent: Image Classifier Agent
## Final Answer:
Based on my analysis, the image (racoon.jpg) contains a raccoon, which is
indeed an animal. Therefore, the final answer is 'animal'.
# Agent: Animal Description Agent racoon.jpg
## Task: Describe the animal in the image.(racoon.jpg)
# Agent: Animal Description Agent racoon.jpg
## Final Answer:
The image (racoon.jpg) features a raccoon, which is a mammal known for its
agility and adaptability to various environments. Raccoons are characterized
by their distinct black "mask" around the eyes and ears, as well as a
grayish or brownish coat with white markings on the face and paws. They have
a relatively short tail and small rounded ears. Raccoons are omnivorous and
have a highly dexterous front paw that they use to manipulate objects. They
are also known for their intelligence and ability to solve problems, such as
opening containers or climbing trees.
# Agent: Information Agent
## Task: Give additional information about the described animal.
# Agent: Information Agent
## Final Answer:
Here are 5 fascinating facts about the raccoon:
1. Raccoons have exceptional dexterity in their front paws, which they use to
manipulate objects with remarkable precision. In fact, studies have shown
that raccoons are able to open containers and perform other tasks with a
level of skill rivaling that of humans!
2. Despite their cute appearance, raccoons are formidable hunters and can
catch a wide variety of prey, including fish, insects, and small mammals.
Their sensitive snouts help them locate food in the dark waters or
underbrush.
3. Raccoons are highly adaptable and can be found in a range of habitats,
from forests to marshes to urban areas. They are even known to climb trees
and swim in water!
4. In addition to their intelligence and problem-solving skills, raccoons
have an excellent memory and are able to recognize and interact with
individual humans and other animals. They can also learn to perform tricks
and tasks through training.
5. Unlike many other mammals, raccoons do not hibernate during the winter
months. Instead, they enter a state of dormancy known as torpor, which
allows them to conserve energy and survive harsh weather conditions. During
this time, their heart rate slows dramatically, from around 70-80 beats per
minute to just 10-20!
I hope these interesting facts will provide a comprehensive understanding of
the fascinating raccoon species!
The classifier confirmed that the image was of an animal; the agent using the llava:7b model then described the animal in the image and passed that description to the information agent. Even though the information agent uses llama2, a text-only model, it could use the context from the previous agent to provide information about the raccoon.
Also read: Building a Responsive Chatbot with Llama 3.1, Ollama and LangChain
Using multiple LLMs according to their strengths pays off, since different models excel at different tasks. We used CrewAI and Ollama to showcase multi-agent collaboration with LLMs running entirely locally. Local Ollama models may be slower than cloud-based ones for obvious reasons, but both approaches have their pros and cons. The effectiveness of an agentic framework ultimately depends on the workflow and on choosing the right tools and LLMs to optimize the results.
Q1. What does the allow_delegation parameter do in crewAI?
Ans. When set to True, this crewAI parameter lets agents assign tasks to others, enabling complex task flows and collaboration.
Q2. How does crewAI validate task inputs and outputs?
Ans. crewAI uses Pydantic objects to define and validate task input/output data structures, ensuring agents receive and produce data in the expected format (see the short sketch after these FAQs).
Q3. How does crewAI coordinate multiple agents and tasks?
Ans. crewAI manages this by organizing agents and tasks into a ‘Crew’ object, coordinating tasks sequentially based on user-defined dependencies.
Q4. Can I use custom LLMs with crewAI and Ollama?
Ans. Yes, both support custom LLMs. For crewAI, specify the model path/name when creating an Agent. For Ollama, follow their docs to build and run custom models.
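To make the Pydantic answer above concrete, here is a minimal, hypothetical sketch; AnimalFacts and its fields are invented for illustration, and output_pydantic support may differ across CrewAI versions:
from pydantic import BaseModel
from crewai import Agent, Task

# Hypothetical schema the task output must conform to
class AnimalFacts(BaseModel):
    animal: str
    facts: list[str]

facts_agent = Agent(
    role="Information Agent",
    goal="Give compelling information about a certain animal",
    backstory="You are very good at telling interesting facts.",
    llm="ollama/llama2"
)

facts_task = Task(
    description="Give 5 interesting facts about the described animal.",
    expected_output="An animal name and a list of 5 facts.",
    agent=facts_agent,
    output_pydantic=AnimalFacts  # crewAI validates the structured output against this schema
)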