Concurrent Query Resolution System Using crewAI

seematiwari0116 · Last Updated: 28 Feb, 2025

In the era of artificial intelligence, businesses are constantly seeking innovative ways to enhance customer support services. One such approach is leveraging AI agents that work collaboratively to resolve customer queries efficiently. This article explores the implementation of a Concurrent Query Resolution System using CrewAI, OpenAI’s GPT models, and Google Gemini. This system employs multiple specialized agents that operate in parallel to handle customer queries seamlessly, reducing response time and improving accuracy.

Learning Objectives

  • Understand how AI agents can efficiently handle customer queries by automating responses and summarizing key information.
  • Learn how CrewAI enables multi-agent collaboration to improve customer support workflows.
  • Explore different types of AI agents, such as query resolvers and summarizers, and their roles in customer service automation.
  • Implement concurrent query processing using Python’s asyncio to enhance response efficiency.
  • Optimize customer support systems by integrating AI-driven automation for improved accuracy and scalability.

This article was published as a part of the Data Science Blogathon.

How Do AI Agents Work Together?

The Concurrent Query Resolution System uses a multi-agent framework, assigning each agent a specific role. The system utilizes CrewAI, a framework that enables AI agents to collaborate effectively.

The primary components of the system include:

  • Query Resolution Agent: Responsible for understanding customer queries and providing accurate responses.
  • Summary Agent: Summarizes the resolution process for quick review and future reference.
  • LLMs (Large Language Models): Includes models like GPT-4o and Gemini, each with different configurations to balance speed and accuracy.
  • Task Management: Assigning tasks dynamically to agents to ensure concurrent query processing.

Implementation of Concurrent Query Resolution System

To transform the AI agent framework from concept to reality, a structured implementation approach is essential. Below, we outline the key steps involved in setting up and integrating AI agents for effective query resolution.

Step 1: Setting the API Key

The OpenAI API key is stored as an environment variable using the os module. This allows the system to authenticate API requests securely without hardcoding sensitive credentials.

import os 

# Set the API key as an environment variable
os.environ["OPENAI_API_KEY"] = ""

The system uses the os module to interact with the operating system.

The system sets the OPENAI_API_KEY as an environment variable, allowing it to authenticate requests to OpenAI’s API.

Step 2: Importing Required Libraries

Necessary libraries are imported, including asyncio for handling asynchronous operations and crewai components like Agent, Crew, Task, and LLM. These are essential for defining and managing AI agents.

import asyncio
from crewai import Agent, Crew, Task, LLM, Process
import google.generativeai as genai

  • asyncio: Python’s built-in module for asynchronous programming, enabling concurrent execution.
  • Agent: Represents an AI worker with specific responsibilities.
  • Crew: Manages multiple agents and their interactions.
  • Task: Defines what each agent is supposed to do.
  • LLM: Specifies the large language model used.
  • Process: Defines how tasks execute, sequentially or in parallel.
  • google.generativeai: Library for working with Google’s generative AI models (not used in this snippet, but likely included for future expansion).

Step 3: Initializing LLMs

Three different LLM instances (GPT-4o and GPT-4) are initialized with varying temperature settings. The temperature controls response creativity, ensuring a balance between accuracy and flexibility in AI-generated answers.

# Initialize three LLM instances with OpenAI models
llm_1 = LLM(
    model="gpt-4o",
    temperature=0.7)
llm_2 = LLM(
    model="gpt-4",
    temperature=0.2)
llm_3 = LLM(
    model="gpt-4o",
    temperature=0.3)

The system creates three LLM instances, each with a different configuration.

Parameters:

  • model: Specifies which OpenAI model to use (gpt-4o or gpt-4).
  • temperature: Controls randomness in responses (0 = deterministic, 1 = more creative).

These different models and temperatures help balance accuracy and creativity.

Step 4: Defining AI Agents

Each agent has a specific role and predefined goals. Two AI agents are created:

  • Query Resolver: Handles customer inquiries and provides detailed responses.
  • Summary Generator: Summarizes the resolutions for quick reference.

Each agent has a defined role, goal, and backstory to guide its interactions.

Query Resolution Agent

query_resolution_agent = Agent(
    llm=llm_1,
    role="Query Resolver",
    backstory="An AI agent that resolves customer queries efficiently and professionally.",
    goal="Resolve customer queries accurately and provide helpful solutions.",
    verbose=True
)

Let’s see what’s happening in this code block:

  • Agent Creation: The query_resolution_agent is an AI-powered assistant responsible for resolving customer queries.
  • Model Selection: It uses llm_1, configured as GPT-4o with a temperature of 0.7. This balance allows for creative yet accurate responses.
  • Role: The system designates the agent as a Query Resolver.
  • Backstory: The developers program the agent to act as a professional customer service assistant, ensuring efficient and professional responses.
  • Goal: To provide accurate solutions to user queries.
  • Verbose Mode: verbose=True ensures detailed logs, helping developers debug and track its performance.

Summary Agent

summary_agent = Agent(
    llm=llm_2,
    role="Summary Generator",
    backstory="An AI agent that summarizes the resolution of customer queries.",
    goal="Provide a concise summary of the query resolution process.",
    verbose=True
)

What Happens Here?

  • Agent Creation: The summary_agent is designed to summarize query resolutions.
  • Model Selection: Uses llm_2 (GPT-4) with a temperature of 0.2, making its responses more deterministic and precise.
  • Role: This agent acts as a Summary Generator.
  • Backstory: It summarizes query resolutions concisely for quick reference.
  • Goal: It provides a clear and concise summary of how customer queries were resolved.
  • Verbose Mode: verbose=True ensures that debugging information is available if needed.

Step 5: Defining Tasks

The system dynamically assigns tasks to ensure parallel query processing.

This section defines tasks assigned to AI agents in the Concurrent Query Resolution System.

resolution_task = Task(
    description="Resolve the customer query: {query}.",
    expected_output="A detailed resolution for the customer query.",
    agent=query_resolution_agent
)

summary_task = Task(
    description="Summarize the resolution of the customer query: {query}.",
    expected_output="A concise summary of the query resolution.",
    agent=summary_agent
)

What Happens Here?

Defining Tasks:

  • resolution_task: This task instructs the Query Resolver Agent to analyze and resolve customer queries.
  • summary_task: This task instructs the Summary Agent to generate a brief summary of the resolution process.

Dynamic Query Handling:

  • The system replaces {query} with an actual customer query when executing the task.
  • This allows the system to handle any customer query dynamically.

Expected Output:

  • The resolution_task expects a detailed response to the query.
  • The summary_task generates a concise summary of the query resolution.

Agent Assignment:

  • The query_resolution_agent is assigned to handle resolution tasks.
  • The summary_agent is assigned to handle summarization tasks.

Why This Matters

  • Task Specialization: Each AI agent has a specific job, ensuring efficiency and clarity.
  • Scalability: You can add more tasks and agents to handle different types of customer support interactions.
  • Parallel Processing: Tasks can be executed concurrently, reducing customer wait times.
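The `{query}` placeholder mechanics above can be illustrated with plain Python string formatting. This is a minimal sketch of the idea, not CrewAI internals: CrewAI performs an equivalent substitution when inputs are passed to `kickoff_async`.

```python
# Hypothetical illustration of how a task description template is filled in.
# CrewAI performs an equivalent substitution using the inputs dictionary.
description_template = "Resolve the customer query: {query}."

customer_query = "I am unable to log in to my account"
filled_description = description_template.format(query=customer_query)

print(filled_description)
# One template serves any number of different customer queries.
```

Because the task description is a template rather than a fixed string, the same two task definitions can serve every query the system receives.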

Step 6: Executing a Query with AI Agents

An asynchronous function is created to process a query. The Crew class organizes agents and tasks, executing them sequentially to ensure proper query resolution and summarization.

async def execute_query(query: str):
    crew = Crew(
        agents=[query_resolution_agent, summary_agent],
        tasks=[resolution_task, summary_task],
        process=Process.sequential,
        verbose=True
    )
    result = await crew.kickoff_async(inputs={"query": query})
    return result

This function defines an asynchronous process to execute a query. It creates a Crew instance, which includes:

  • agents: The AI agents involved in the process (Query Resolver and Summary Generator).
  • tasks: Tasks assigned to the agents (query resolution and summarization).
  • process=Process.sequential: Ensures tasks are executed in sequence.
  • verbose=True: Enables detailed logging for better tracking.

The function uses await to execute the AI agents asynchronously and returns the result.

Step 7: Handling Multiple Queries Concurrently

Using asyncio.gather(), multiple queries can be processed simultaneously. This reduces response time by allowing AI agents to handle different customer issues in parallel.

async def handle_two_queries(query_1: str, query_2: str):
    # Run both queries concurrently
    results = await asyncio.gather(
        execute_query(query_1),
        execute_query(query_2)
    )
    return results

This function executes two queries concurrently. asyncio.gather() processes both queries simultaneously, significantly reducing response time. The function returns the results of both queries once execution is complete.
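The same pattern scales beyond two queries. The sketch below generalizes `handle_two_queries` to an arbitrary list; a stand-in coroutine replaces the CrewAI-backed `execute_query` so the snippet runs on its own, and the real function would be substituted in practice.

```python
import asyncio

async def execute_query(query: str) -> str:
    # Stand-in for the real CrewAI-backed execute_query;
    # a short non-blocking delay simulates the LLM call.
    await asyncio.sleep(0.01)
    return f"Resolved: {query}"

async def handle_queries(queries: list[str]) -> list[str]:
    # Fan out all queries at once; asyncio.gather returns
    # results in the same order as the input list.
    return await asyncio.gather(*(execute_query(q) for q in queries))

results = asyncio.run(handle_queries(["login issue", "payment issue", "refund status"]))
print(results)
```

Using `asyncio.gather(*(...))` with a generator keeps the function independent of how many queries arrive, which is what makes the system scale to high query volumes.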

Step 8: Defining Example Queries

Developers define sample queries to test the system, covering common customer support issues like login failures and payment processing errors.

query_1 = "I am unable to log in to my account. It says 'Invalid credentials', but I am sure I am using the correct username and password."
query_2 = "The payment gateway is not working. Also, a weird error message is displayed. My card has been charged, but the transaction is not going through."

These are sample queries to test the system.

Query 1 deals with login issues, while Query 2 relates to payment gateway errors.

Step 9: Setting Up the Event Loop

The system initializes an event loop to handle asynchronous operations. If it doesn’t find an existing loop, it creates a new one to manage AI task execution.

try:
    loop = asyncio.get_event_loop()
except RuntimeError:
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

This section ensures that an event loop is available for running asynchronous tasks.

If the system detects no event loop (RuntimeError occurs), it creates a new one and sets it as the active loop.

Step 10: Handling Event Loops in Jupyter Notebook/Google Colab

Since Jupyter and Colab have pre-existing event loops, nest_asyncio.apply() is used to prevent conflicts, ensuring smooth execution of asynchronous queries.

# Check if the event loop is already running
if loop.is_running():
    # If the loop is running, use `nest_asyncio` to allow re-entrant event loops
    import nest_asyncio
    nest_asyncio.apply()

Jupyter Notebooks and Google Colab have pre-existing event loops, which can cause errors when running async functions.

nest_asyncio.apply() allows nested event loops, resolving compatibility issues.

Step 11: Executing Queries and Printing Results

The event loop runs handle_two_queries() to process queries concurrently. The system prints the final AI-generated responses, displaying query resolutions and summaries.

# Run the async function
results = loop.run_until_complete(handle_two_queries(query_1, query_2))

# Print the results
for i, result in enumerate(results):
    print(f"Result for Query {i+1}:")
    print(result)
    print("\n---\n")

loop.run_until_complete() starts the execution of handle_two_queries(), which processes both queries concurrently.

The system prints the results, displaying the AI-generated resolutions for each query.

Output: Concurrent Query Resolution System (screenshots of the AI-generated resolutions and summaries)

Advantages of Concurrent Query Resolution System

Below, we will see how the Concurrent Query Resolution System enhances efficiency by processing multiple queries simultaneously, leading to faster response times and improved user experience.

  • Faster Response Time: Parallel execution resolves multiple queries simultaneously.
  • Improved Accuracy: Leveraging multiple LLMs ensures a balance between creativity and factual correctness.
  • Scalability: The system can handle a high volume of queries without human intervention.
  • Better Customer Experience: Automated summaries provide a quick overview of query resolutions.
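The response-time advantage can be measured directly. The sketch below compares sequential and concurrent execution of two simulated queries, with a 0.2-second `asyncio.sleep` standing in for each LLM call.

```python
import asyncio
import time

async def fake_query(query: str) -> str:
    # Stub standing in for an LLM-backed resolution call.
    await asyncio.sleep(0.2)
    return f"Resolved: {query}"

async def sequential() -> float:
    # Resolve one query after the other: delays add up.
    start = time.perf_counter()
    await fake_query("q1")
    await fake_query("q2")
    return time.perf_counter() - start

async def concurrent() -> float:
    # Resolve both queries at once: delays overlap.
    start = time.perf_counter()
    await asyncio.gather(fake_query("q1"), fake_query("q2"))
    return time.perf_counter() - start

seq_time = asyncio.run(sequential())
conc_time = asyncio.run(concurrent())
print(f"sequential: {seq_time:.2f}s, concurrent: {conc_time:.2f}s")
# Concurrent execution takes roughly one delay instead of two.
```

With real LLM calls the absolute numbers differ, but the shape is the same: concurrent execution takes about as long as the slowest single query rather than the sum of all of them.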

Applications of Concurrent Query Resolution System

We will now explore the various applications of the Concurrent Query Resolution System, including customer support automation, real-time query handling in chatbots, and efficient processing of large-scale service requests.

  • Customer Support Automation: Enables AI-driven chatbots to resolve multiple customer queries simultaneously, reducing response time.
  • Real-Time Query Processing: Enhances live support systems by handling numerous queries in parallel, improving efficiency.
  • E-commerce Assistance: Streamlines product inquiries, order tracking, and payment issue resolutions in online shopping platforms.
  • IT Helpdesk Management: Supports IT service desks by diagnosing and resolving multiple technical issues concurrently.
  • Healthcare & Telemedicine: Assists in managing patient inquiries, appointment scheduling, and medical advice simultaneously.

Conclusion

The Concurrent Query Resolution System demonstrates how AI-driven multi-agent collaboration can revolutionize customer support. By leveraging CrewAI, OpenAI’s GPT models, and Google Gemini, businesses can automate query handling, improving efficiency and user satisfaction. This approach paves the way for more advanced AI-driven service solutions in the future.

Key Takeaways

  • AI agents streamline customer support, reducing response times.
  • CrewAI enables specialized agents to work together effectively.
  • Using asyncio, multiple queries are handled concurrently.
  • Different LLM configurations balance accuracy and creativity.
  • The system can manage high query volumes without human intervention.
  • Automated summaries provide quick, clear query resolutions.

Frequently Asked Questions

Q1. What is CrewAI?

A. CrewAI is a framework that allows multiple AI agents to work collaboratively on complex tasks. It enables task management, role specialization, and seamless coordination among agents.

Q2. How does CrewAI work?

A. CrewAI defines agents with specific roles, assigns tasks dynamically, and processes them either sequentially or concurrently. It leverages AI models like OpenAI’s GPT and Google Gemini to execute tasks efficiently.

Q3. How does CrewAI handle multiple queries simultaneously?

A. CrewAI uses Python’s asyncio.gather() to run multiple tasks concurrently, ensuring faster query resolution without performance bottlenecks.

Q4. Can CrewAI integrate with different LLMs?

A. Yes, CrewAI supports various large language models (LLMs), including OpenAI’s GPT-4, GPT-4o, and Google’s Gemini, allowing users to choose based on speed and accuracy requirements.

Q5. How does CrewAI ensure task accuracy?

A. By using different AI models with varied temperature settings, CrewAI balances creativity and factual correctness, ensuring reliable responses.
