OpenAI’s latest model, o3-mini, is revolutionizing coding tasks with its advanced reasoning, problem-solving, and code generation capabilities. It efficiently handles complex queries and integrates structured data, setting a new standard in AI applications. This article explores using o3-mini and CrewAI to build a Retrieval-Augmented Generation (RAG) research assistant agent that retrieves information from multiple PDFs and processes user queries intelligently. We will use CrewAI’s CrewDoclingSource, SerperDevTool, and OpenAI’s o3-mini to enhance automation in research workflows.
With the overwhelming amount of research being published, an automated RAG-based assistant can help researchers quickly find relevant insights without manually skimming through hundreds of papers. The agent we are going to build will process PDFs to extract key information and answer queries based on the content of the documents. If the required information isn’t found in the PDFs, it will automatically perform a web search to provide relevant insights. This setup can be extended for more advanced tasks, such as summarizing multiple papers, detecting contradictory findings, or generating structured reports.
In this hands-on guide, we will build a research agent that will go through articles on DeepSeek-R1 and o3-mini to answer queries we ask about these models. For building this research assistant agent, we will first go through the prerequisites and set up the environment. We will then import the necessary modules, set the API keys, and load the research documents. Then, we will go on to define the AI model and integrate the web search tool into it. Finally, we will create the AI agents, define their tasks, and assemble the crew. Once ready, we’ll run the research assistant agent to find out if o3-mini is better and safer than DeepSeek-R1.
Before diving into the implementation, let’s briefly go over what we need to get started. Having the right setup ensures a smooth development process and avoids unnecessary interruptions.
So, ensure you have:
- A working Python environment (a notebook environment such as Google Colab or Jupyter works well, since we use `!pip` commands)
- An OpenAI API key, for accessing the o3-mini model and the embedding model
- A Serper API key, for performing Google Scholar searches
With these in place, we are ready to start building!
First, we need to install the necessary libraries. These libraries provide the foundation for the document processing, AI agent orchestration, and web search functionalities.
!pip install crewai
!pip install 'crewai[tools]'
!pip install docling
Here, crewai provides the framework for orchestrating AI agents and their tasks, crewai[tools] adds ready-made tool integrations such as SerperDevTool, and docling handles parsing and chunking the PDF documents.
import os
from crewai import LLM, Agent, Crew, Task
from crewai_tools import SerperDevTool
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
Here, LLM, Agent, Crew, and Task are CrewAI’s core building blocks; SerperDevTool provides web search; and CrewDoclingSource lets us load documents as a knowledge source. Next, we set the API keys as environment variables:
os.environ['OPENAI_API_KEY'] = 'your_openai_api_key'
os.environ['SERPER_API_KEY'] = 'your_serper_api_key'
These API keys allow access to AI models and web search capabilities.
In this step, we will load the research papers from arXiv, enabling our AI model to extract insights from them. The three selected papers cover the key topics:
- DeepSeek-R1 and its reinforcement-learning-based reasoning approach (arXiv:2501.12948)
- A safety comparison of o3-mini and DeepSeek-R1 (arXiv:2501.18438)
- DeepSeek LLM and its scaling approach (arXiv:2401.02954)
content_source = CrewDoclingSource(
file_paths=[
"https://arxiv.org/pdf/2501.12948",
"https://arxiv.org/pdf/2501.18438",
"https://arxiv.org/pdf/2401.02954"
],
)
Now we will define the AI model.
llm = LLM(model="o3-mini", temperature=0)
To enhance research capabilities, we integrate a web search tool that retrieves relevant academic papers when the required information is not found in the provided documents.
serper_tool = SerperDevTool(
search_url="https://google.serper.dev/scholar",
n_results=2 # Fetch top 2 results
)
The search_url parameter specifies the Google Scholar endpoint of the Serper API. It ensures that searches are performed specifically across scholarly articles, research papers, and academic sources, rather than general web pages.
The n_results parameter limits the number of search results returned by the tool; here it is set to fetch the top two results from Google Scholar. Restricting the result count keeps responses concise, prioritizes high-quality academic sources, and avoids information overload while maintaining accuracy.
To efficiently retrieve relevant information from documents, we use an embedding model that converts text into numerical representations for similarity-based search.
embedder = {
"provider": "openai",
"config": {
"model": "text-embedding-ada-002",
"api_key": os.environ['OPENAI_API_KEY']
}
}
The embedder in CrewAI is used for converting text into numerical representations (embeddings), enabling efficient document retrieval and semantic search. In this case, the embedding model is provided by OpenAI, specifically using “text-embedding-ada-002”, a well-optimized model for generating high-quality embeddings. The API key is retrieved from the environment variables to authenticate requests.
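To make the idea of similarity-based search concrete, here is a minimal, self-contained sketch of how retrieval over embeddings works, using toy 3-dimensional vectors (real text-embedding-ada-002 embeddings have 1536 dimensions) and cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": the query vector should land closer to the
# document that shares its meaning.
query = [0.9, 0.1, 0.0]
doc_about_models = [0.8, 0.2, 0.1]
doc_about_cooking = [0.0, 0.1, 0.9]

print(cosine_similarity(query, doc_about_models) >
      cosine_similarity(query, doc_about_cooking))  # → True
```

Retrieval then amounts to ranking document chunks by this similarity score against the query embedding; CrewAI performs this step internally once the embedder is configured.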
CrewAI supports multiple embedding providers, including OpenAI and Gemini (Google’s AI models), allowing flexibility in choosing the best model based on accuracy, performance, and cost considerations.
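Swapping providers only requires changing the config dictionary. The sketch below shows what a Gemini-based embedder might look like; the provider string "google" and the model name "models/text-embedding-004" are assumptions here and should be verified against the CrewAI embedder documentation:

```python
import os

# Hypothetical Gemini embedder config; the provider string and
# model name are assumptions -- check the CrewAI docs before use.
embedder_gemini = {
    "provider": "google",
    "config": {
        "model": "models/text-embedding-004",
        "api_key": os.environ.get("GEMINI_API_KEY", "your_gemini_api_key"),
    },
}

print(embedder_gemini["provider"])
```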
Now we will create the two AI agents required for our research task: the Document Search Agent and the Web Search Agent.
The Document Search Agent is responsible for retrieving answers from the provided research papers and documents. It acts as an expert in analyzing technical content and extracting relevant insights. If the required information is not found, it can delegate the query to the Web Search Agent for further exploration. The allow_delegation=True setting enables this delegation process.
doc_agent = Agent(
role="Document Searcher",
goal="Find answers using provided documents. If unavailable, delegate to the Search Agent.",
backstory="You are an expert in analyzing research papers and technical blogs to extract insights.",
verbose=True,
allow_delegation=True, # Allows delegation to the search agent
llm=llm,
)
The Web Search Agent, on the other hand, is designed to search for missing information online using Google Scholar. It steps in only when the Document Search Agent fails to find an answer in the available documents. Unlike the Document Search Agent, it cannot delegate tasks further (allow_delegation=False). It uses Serper (Google Scholar API) as a tool to fetch relevant academic papers and ensure accurate responses.
search_agent = Agent(
role="Web Searcher",
goal="Search for the missing information online using Google Scholar.",
backstory="When the research assistant cannot find an answer, you step in to fetch relevant data from the web.",
verbose=True,
allow_delegation=False,
tools=[serper_tool],
llm=llm,
)
Now we will create the two tasks for the agents.
The first task involves answering a given question using available research papers and documents.
task1 = Task(
description="Answer the following question using the available documents: {question}. "
"If the answer is not found, delegate the task to the Web Search Agent.",
expected_output="A well-researched answer from the provided documents.",
agent=doc_agent,
)
The next task comes into play when the document-based search does not yield an answer.
task2 = Task(
description="If the document-based agent fails to find the answer, perform a web search using Google Scholar.",
expected_output="A web-searched answer with relevant citations.",
agent=search_agent,
)
The Crew in CrewAI manages agents to complete tasks efficiently by coordinating the Document Search Agent and Web Search Agent. It first searches within the uploaded documents and delegates to web search if needed.
crew = Crew(
agents=[doc_agent, search_agent],
tasks=[task1, task2],
verbose=True,
knowledge_sources=[content_source],
embedder=embedder
)
The initial query is directed to the document to check if the researcher agent can provide a response. The question being asked is “O3-MINI vs DEEPSEEK-R1: Which one is safer?”
question = "O3-MINI VS DEEPSEEK-R1: WHICH ONE IS SAFER?"
result = crew.kickoff(inputs={"question": question})
print("Final Answer:\n", result)
Response:
Here, we can see that the final answer is generated by the Document Searcher, as it successfully located the required information within the provided documents.
Here, the answer to the question “Which one is better, O3 Mini or DeepSeek R1?” is not available in the documents. The system will first check whether the Document Search Agent can find an answer; if not, it will delegate the task to the Web Search Agent.
question = "Which one is better O3 Mini or DeepSeek R1?"
result = crew.kickoff(inputs={"question": question})
print("Final Answer:\n", result)
Response:
From the output, we observe that the response was generated using the Web Searcher Agent since the required information was not found by the Document Researcher Agent. Additionally, it includes the sources from which the answer was finally retrieved.
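The delegation behavior we just observed boils down to a simple fallback pattern: try the documents first, and only go to the web when they come up empty. A minimal plain-Python sketch of that control flow (the doc_search and web_search callables below are hypothetical stand-ins for the two agents, not CrewAI APIs):

```python
def answer_with_fallback(question, doc_search, web_search):
    """Try the document store first; fall back to web search
    only when the documents yield nothing."""
    answer = doc_search(question)
    if answer is not None:
        return answer, "Document Searcher"
    return web_search(question), "Web Searcher"

# Hypothetical stand-ins for the two agents:
docs = {"which model is safer?": "o3-mini, per the provided papers"}
doc_search = lambda q: docs.get(q.lower())
web_search = lambda q: "web-sourced answer with relevant citations"

print(answer_with_fallback("Which model is safer?", doc_search, web_search))
print(answer_with_fallback("Which model is better?", doc_search, web_search))
```

In CrewAI, this routing happens implicitly through allow_delegation=True on the document agent rather than through explicit control flow like this.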
In this project, we successfully built an AI-powered research assistant that efficiently retrieves and analyzes information from research papers and the web. By using CrewAI for agent coordination, Docling for document processing, and Serper for scholarly search, we created a system capable of answering complex queries with structured insights.
The assistant first searches within documents and seamlessly delegates to web search when needed, ensuring accurate responses. This approach enhances research efficiency by automating information retrieval and analysis. Additionally, combining o3-mini with CrewAI’s CrewDoclingSource and SerperDevTool strengthened the system’s document analysis capabilities. With further customization, this framework can be expanded to support more data sources, advanced reasoning, and improved research workflows.
You can explore amazing projects featuring OpenAI o3-mini in our free course – Getting Started with o3-mini!
Q. What is CrewAI?
A. CrewAI is a framework that allows you to create and manage AI agents with specific roles and tasks. It enables collaboration between multiple AI agents to automate complex workflows.
Q. How does CrewAI coordinate multiple agents?
A. CrewAI uses a structured approach where each agent has a defined role and can delegate tasks if needed. A Crew object orchestrates these agents to complete tasks efficiently.
Q. What is CrewDoclingSource?
A. CrewDoclingSource is a document processing tool in CrewAI that extracts structured knowledge from research papers, PDFs, and text-based documents.
Q. What is the Serper API?
A. Serper API is a tool that allows AI applications to perform Google Search queries, including searches on Google Scholar for academic papers.
Q. Is the Serper API free to use?
A. Serper API offers both free and paid plans, with limitations on the number of search requests in the free tier.
Q. How is the Serper API different from standard Google Search?
A. Unlike standard Google Search, Serper API provides structured access to search results, allowing AI agents to extract relevant research papers efficiently.
Q. Does CrewDoclingSource support multiple document formats?
A. Yes, it supports common research document formats, including PDFs and text-based files.