Building a Conversational Q&A Chatbot With A Gemini Pro Free API

suyodhanj6 Last Updated : 28 Feb, 2024
8 min read

Introduction

In recent years, chatbots have become increasingly popular for providing customer service, answering questions, and engaging with users on websites, messaging platforms, and social media. Traditionally, building a chatbot for your own service meant collecting user question data and training a model from scratch, often on top of encoder models like BERT. Today, powerful large language models (LLMs) such as Gemini Pro make this far easier, and their capabilities have become increasingly prominent.

This tutorial will show you how to build a conversational Q&A chatbot using the Gemini Pro API. Gemini Pro is a large language model from Google, available through a cloud API, that provides the natural language processing (NLP) capabilities needed to build conversational AI applications.


Learning Objectives

  • Gain a comprehensive understanding of the Gemini Pro API by exploring its features and capabilities for constructing intelligent conversational AI applications.
  • Follow a complete guide to create a free Gemini Pro API account, ensuring access to the platform’s functionalities.
  • Learn how to develop a chatbot with the ability to engage in conversations and provide answers to questions.
  • Access step-by-step instructions for creating a virtual environment for your project, ensuring a clean and isolated development environment.

This article was published as a part of the Data Science Blogathon.

Prerequisites

  • Obtain a Gemini Pro API key by creating a free account on the Gemini Pro platform. The API key is essential for authenticating your requests and accessing the features of the Gemini Pro API.
  • Python 3.10 or Later
  • A text editor such as VS Code, or PyCharm
  • Conda or Miniconda

What is Gemini?

Gemini is a family of AI models developed by Google, focused on advanced language understanding and processing. It is part of Google’s suite of machine learning and artificial intelligence tools and is designed to handle complex tasks such as natural language understanding, language translation, and content generation. The technology is being integrated into various Google products and services to enable more intuitive and intelligent interactions.

  • It was announced in December 2023, and the Gemini API became available to developers on December 13, 2023.
  • Gemini is Google’s most flexible model family yet, able to run efficiently on everything from data centers to mobile devices.
  • Its state-of-the-art capabilities will significantly enhance how developers and enterprise customers build and scale with AI.
  • Google has begun integrating Gemini into its own products and services, including Bard and Pixel devices, with plans to bring it to more of its offerings such as Search and Chrome.

Gemini has three variants: Gemini Ultra, Gemini Pro, and Gemini Nano.

What is Gemini Pro?

Gemini Pro is like the base Gemini model’s bigger sibling within the same model family. If Gemini is the clever language wizard, then Gemini Pro is the wizard with upgraded powers: the more advanced version.

In simple terms, Gemini Pro is an advanced version of Gemini with enhanced capabilities, and it is the model used throughout this tutorial.

Comparison of Gemini & Gemini Pro

| Feature | Gemini | Gemini Pro |
|---|---|---|
| Model size | Large | Extra large |
| Training data | Massive dataset of text and code | Even larger dataset of text and code |
| Performance | Good | Excellent |
| Efficiency | Good | Excellent |
| Applications | Text generation, machine translation, summarization, question answering | Text generation, machine translation, summarization, question answering, chatbot development |

Why Should One Use Gemini Pro API?

The Gemini Pro API, provided by Google, empowers developers to integrate advanced language models into their applications. Leveraging state-of-the-art natural language processing capabilities, Gemini Pro enables us to create dynamic and context-aware chatbots that respond intelligently to user queries.
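
To make this concrete before the full walkthrough, here is a minimal sketch of a single-turn call using the google-generativeai Python package (the same package used later in this tutorial). The prompt text is only an example, and the sketch assumes your key is available in a GOOGLE_API_KEY environment variable.

import os
import google.generativeai as genai

# Configure the SDK with the API key (assumed to be set as GOOGLE_API_KEY).
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

# Load the Gemini Pro model and send a single prompt.
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain what a chatbot is in one sentence.")
print(response.text)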

Steps to Build a Conversational Q&A Chatbot With A Gemini Pro Free API

Let’s delve into the precise steps for crafting a sophisticated Conversational Q&A Chatbot using the Gemini Pro Free API.

Step 1: Set Up Gemini Pro API

  • Go to Google AI Studio and sign in with your Google account.
  • Click Get API Key.
  • Create the API key.
  • Copy the key; you will add it to the .env file in the next step.

Step 2: Creating .env File

  • Create the File: Open a text editor and create a new file saved as .env. Make sure it has no file extension (such as .txt) and that the name starts with a dot.
  • Add Environment Variables: Define your environment variables in the .env file, for example (an optional check that the key loads correctly is sketched after this list):
GOOGLE_API_KEY=your_google_api_key
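
As an optional sanity check (once python-dotenv is installed in Step 4), a short script like the sketch below confirms the key is being read from .env without printing the key itself:

import os
from dotenv import load_dotenv

load_dotenv()  # reads the variables defined in the local .env file

# Report only whether the key was found, never its value.
print("GOOGLE_API_KEY found:", os.getenv("GOOGLE_API_KEY") is not None)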

Step 3: Initialize the Virtual Environment

  • First, create a project directory.
  • Open the terminal and run:
mkdir chatbot_project

cd chatbot_project
  • Create the ./venv virtual environment
conda create -p ./venv python=3.11 -y
  • Activate the environment
conda activate ./venv

Step 4: Crafting requirements.txt

  • Create a requirements.txt file
touch requirements.txt
  • Add the following packages to the file and save it:
streamlit
google-generativeai
python-dotenv

  • Installing the packages
pip install -r requirements.txt
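
If you want to verify the installation inside the activated environment, a quick optional check is to import the three packages; any ImportError means something in requirements.txt did not install correctly:

import streamlit
import dotenv
import google.generativeai

# Reaching this line means streamlit, python-dotenv and google-generativeai
# are all importable from the active environment.
print("All packages from requirements.txt are installed.")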

Step 5: Write the Chatbot Code

  • Load Environment Variables: This part of the code uses the dotenv library to load environment variables from a file named .env in the current directory. This is a common practice to keep sensitive information, such as API keys, separate from the code.
from dotenv import load_dotenv
load_dotenv() ## loading all the environment variables
  • Import Libraries: Here, you import the necessary libraries. streamlit is used for creating interactive web applications, os is a standard library for interacting with the operating system, and google.generativeai is the module providing the Gemini AI functionality.
import streamlit as st
import os
import google.generativeai as genai
  • Configure Gemini AI: This line configures the Gemini AI by setting its API key. The API key is retrieved from the environment variables using os.getenv.
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
  • Initialize Gemini Pro Model: You load the Gemini Pro model and start a chat session, which keeps track of the conversation and is used to generate the chatbot’s responses.
model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])
  • Define Function for Getting Responses: This function sends the user’s question to the Gemini Pro model and retrieves the response. With stream=True, the response arrives incrementally in chunks as it is generated.
def get_gemini_response(question):
    response = chat.send_message(question, stream=True)
    return response
  • Initialize Streamlit App: These lines initialize a Streamlit app, setting the page title and displaying a header.
st.set_page_config(page_title="Q&A Demo")
st.header("Gemini LLM Application")
  • Initialize Session State for Chat History: This checks if the chat history exists in the Streamlit session state. If not, it initializes an empty list to store the chat history.
if 'chat_history' not in st.session_state:
    st.session_state['chat_history'] = []
  • Take User Input and Display Responses: Here, you create a text input field for the user to input a question. When the user clicks the “Ask the question” button, it triggers the get_gemini_response function, displaying the responses and updating the chat history.
input = st.text_input("Input: ", key="input")
submit = st.button("Ask the question")

if submit and input:
    response = get_gemini_response(input)
    st.session_state['chat_history'].append(("You", input))
    st.subheader("The Response is")
    for chunk in response:
        st.write(chunk.text)
        st.session_state['chat_history'].append(("Bot", chunk.text))
  • Display Chat History: Finally, the code displays the chat history, showing the interactions between the user (“You”) and the chatbot (“Bot”).
st.subheader("The Chat History is")
for role, text in st.session_state['chat_history']:
    st.write(f"{role}: {text}")

Create app.py: create the app.py file

touch app.py

Paste the code below:

## loading all the environment variables
from dotenv import load_dotenv
load_dotenv() 

import streamlit as st
import os
import google.generativeai as genai

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

## function to load the Gemini Pro model and get responses
model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

def get_gemini_response(question):
    response = chat.send_message(question, stream=True)
    return response

##initialize our streamlit app

st.set_page_config(page_title="Q&A Demo")

st.header("Gemini LLM Application")

# Initialize session state for chat history if it doesn't exist
if 'chat_history' not in st.session_state:
    st.session_state['chat_history'] = []

input = st.text_input("Input: ", key="input")
submit = st.button("Ask the question")

if submit and input:
    response = get_gemini_response(input)
    # Add user query and response to session state chat history
    st.session_state['chat_history'].append(("You", input))
    st.subheader("The Response is")
    for chunk in response:
        st.write(chunk.text)
        st.session_state['chat_history'].append(("Bot", chunk.text))
st.subheader("The Chat History is")
    
for role, text in st.session_state['chat_history']:
    st.write(f"{role}: {text}")


Step 6: Run the Chat Bot Application

  • Run the Streamlit application using the following command:
streamlit run app.py
"is gemini api free

                                                                     Fig: UI Of Chat Bot Conversation

"is gemini api free
is gemini api free

                                                        Fig: UI Of Chat Bot Conversation(History)

Advantages of Building a Conversational Q&A Chatbot

  • 24/7 Availability: Chatbots provide round-the-clock assistance, ensuring users can get answers to their queries at any time.
  • Efficient Problem Resolution: Q&A chatbots quickly address common user queries, leading to faster problem resolution and improved user experience.
  • Cost Savings: Implementation of chatbots reduces the need for human intervention in routine tasks, resulting in significant cost savings.
  • Data Collection and Analysis: Chatbots collect valuable user data, offering insights for enhancing products or services and understanding user behavior.
  • Adaptability and Continuous Improvement: Chatbots can be trained to adapt to evolving user needs and continuously improve their responses.

Conclusion

This guide introduced you to the powerful Gemini Pro API for creating smart chatbots. We covered setting up your free API account, creating a chatbot to chat and answer questions, and organizing your project in a virtual environment. Now equipped with these skills, you can bring your chatbot ideas to life. Happy coding!

Key Takeaways

  1. Gain a deep understanding of Gemini Pro API, a robust tool for constructing intelligent conversational AI applications.
  2. Follow a comprehensive guide to create a free Gemini Pro API Key.
  3. Learn the process of creating a chatbot that engages in conversations and effectively answers user questions, showcasing the capabilities of Gemini Pro API.
  4. Understand the importance of creating a virtual environment for your project, ensuring a clean and structured development space for building innovative chatbot applications.

Frequently Asked Questions

Q1. What is Gemini Pro API?

A. The Gemini Pro API is Google’s interface to the Gemini Pro large language model. It empowers developers to build intelligent conversational AI applications on top of Google’s advanced natural language processing capabilities.

Q2. How Can I Get a Free Gemini Pro API Key?

A. To get a free Gemini Pro API Key, follow the step-by-step guide in this tutorial. It includes instructions on generating the API key.

Q3. What Can I Build with Gemini Pro API?

A. Gemini Pro API allows you to build smart chatbots capable of engaging in conversations and answering user questions. Its versatile nature enables applications in various domains, providing natural and intuitive interactions.

Q4. Why is a Virtual Environment Important for Chatbot Development?

A. Creating a virtual environment is crucial for chatbot development as it ensures a clean and isolated space for your project. This helps manage dependencies, avoid conflicts, and maintain a structured development environment.

Q5. Can Gemini Pro API be Used for Real-time Applications?

A. Yes, Gemini Pro API is designed to be efficient and can be used for real-time applications. Its scalability and performance capabilities make it well-suited for handling dynamic user interactions in real-time scenarios.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

As a Data Scientist, I leverage my expertise in statistical analysis, machine learning, and data visualization to derive insights and make informed decisions. I have experience working with various programming languages, databases, and machine learning frameworks, enabling me to tackle complex data problems and deliver actionable results. I am a collaborative problem-solver who can work with stakeholders to deliver scalable and secure data solutions.
