In recent years, chatbots have become increasingly popular for providing customer service, answering questions, and engaging with users on websites, messaging platforms, and social media. Traditionally, building a chatbot meant collecting user question data and training a model from scratch; in the early days, most companies relied on models like BERT. Today, powerful large language models (LLMs) such as Gemini Pro are available, and their capabilities have become increasingly prominent.
This tutorial will show you how to build a conversational Q&A chatbot using the Gemini Pro API. Gemini Pro is Google's large language model, exposed through a cloud API, and it provides a range of natural language processing (NLP) capabilities for building conversational AI applications.
Learning Objectives
- Understand what Gemini and Gemini Pro are and how the two differ.
- Generate a free Gemini Pro API key and set up an isolated project environment.
- Build a conversational Q&A chatbot with Streamlit that streams responses and maintains chat history.
Gemini is a family of AI models developed by Google, focused on advanced language understanding and generation. It is part of Google's suite of machine learning and artificial intelligence tools and is designed to handle complex tasks such as natural language understanding, language translation, content generation, and question answering. The technology is integrated into various Google products and services to make interactions more intuitive and intelligent.
Gemini comes in three variants: Gemini Ultra, Gemini Pro, and Gemini Nano.
Gemini Pro is like Gemini's big sibling within the same model family. If Gemini is the clever language wizard, then Gemini Pro is the wizard with upgraded powers: the advanced version.
In simple terms, Gemini Pro is an advanced version of Gemini with enhanced capabilities.
| Feature | Gemini | Gemini Pro |
|---|---|---|
| Model size | Large | Extra large |
| Training data | Massive dataset of text and code | Even larger dataset of text and code |
| Performance | Good | Excellent |
| Efficiency | Good | Excellent |
| Applications | Text generation, machine translation, summarization, question answering | Text generation, machine translation, summarization, question answering, chatbot development |
The Gemini Pro API, provided by Google, empowers developers to integrate advanced language models into their applications. Leveraging state-of-the-art natural language processing capabilities, Gemini Pro enables us to create dynamic and context-aware chatbots that respond intelligently to user queries.
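To get a feel for the API before building the full chatbot, here is a minimal sketch of a single-turn call with the google-generativeai Python SDK. It assumes an API key is already exported as GOOGLE_API_KEY (we generate one below), and the prompt text is just an example.

import os
import google.generativeai as genai

# Assumes GOOGLE_API_KEY is already set in the environment
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

# Optional: list the model names available to your key
for m in genai.list_models():
    print(m.name)

# Single-turn request to the Gemini Pro model
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain what a chatbot is in one sentence.")
print(response.text)

The full app below builds on the same configure and GenerativeModel calls, adding a chat session and a Streamlit UI.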
Let’s delve into the precise steps for crafting a sophisticated Conversational Q&A Chatbot using the Gemini Pro Free API.
First, generate a free Gemini API key from Google AI Studio. You will store it in a .env file inside the project directory (created in the next step), in this form:

GOOGLE_API_KEY=your_google_api_key
Next, create the project directory and an isolated Python 3.11 environment with conda:

mkdir chatbot_project
cd chatbot_project
conda create -p ./venv python=3.11 -y
conda activate ./venv
Inside the project, create a requirements.txt file listing the dependencies:

touch requirements.txt
streamlit
google-generativeai
python-dotenv
Add the three packages above to requirements.txt, save the file, and install them:
pip install -r requirements.txt
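As an optional sanity check (not part of the original steps), you can confirm that the packages import cleanly:

# Optional sanity check: all three packages import without errors
import streamlit
import google.generativeai
import dotenv

print("dependencies OK")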
Now create the main application file:

touch app.py

Paste the code below into app.py:
## loading all the environment variables
from dotenv import load_dotenv
load_dotenv()

import streamlit as st
import os
import google.generativeai as genai

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

## function to load Gemini Pro model and get responses
model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

def get_gemini_response(question):
    response = chat.send_message(question, stream=True)
    return response

## initialize our streamlit app
st.set_page_config(page_title="Q&A Demo")
st.header("Gemini LLM Application")

# Initialize session state for chat history if it doesn't exist
if 'chat_history' not in st.session_state:
    st.session_state['chat_history'] = []

input = st.text_input("Input: ", key="input")
submit = st.button("Ask the question")

if submit and input:
    response = get_gemini_response(input)
    # Add user query and response to session state chat history
    st.session_state['chat_history'].append(("You", input))
    st.subheader("The Response is")
    for chunk in response:
        st.write(chunk.text)
        st.session_state['chat_history'].append(("Bot", chunk.text))

st.subheader("The Chat History is")
for role, text in st.session_state['chat_history']:
    st.write(f"{role}: {text}")
The first import brings in the load_dotenv function from the dotenv module. Calling load_dotenv() loads environment variables from the .env file into the process environment, so os.getenv("GOOGLE_API_KEY") can read your API key.
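As a minimal illustration of how the key reaches the app (assuming the .env file created earlier sits in the same directory as the script):

import os
from dotenv import load_dotenv

load_dotenv()  # by default, reads the .env file in the current working directory
api_key = os.getenv("GOOGLE_API_KEY")
print(api_key is not None)  # True when the key was loaded successfully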
Finally, start the app from the project directory:

streamlit run app.py
Fig: UI of the chatbot conversation
Fig: UI of the chatbot conversation (history)
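If you want to exercise the same multi-turn chat logic outside Streamlit, the sketch below is a hypothetical command-line variant, not part of app.py. It reuses the same start_chat and send_message calls and assumes GOOGLE_API_KEY is set in your .env file.

# Hypothetical CLI sketch of the same chat flow (assumes GOOGLE_API_KEY in .env)
import os
from dotenv import load_dotenv
import google.generativeai as genai

load_dotenv()
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

model = genai.GenerativeModel("gemini-pro")
chat = model.start_chat(history=[])

# Simple REPL: type a question, stream the answer, type 'exit' to quit
while True:
    question = input("You: ")
    if question.strip().lower() in {"exit", "quit"}:
        break
    response = chat.send_message(question, stream=True)
    print("Bot: ", end="")
    for chunk in response:
        print(chunk.text, end="")
    print()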
This guide introduced you to the powerful Gemini Pro API for creating smart chatbots. We covered setting up your free API account, creating a chatbot to chat and answer questions, and organizing your project in a virtual environment. Now equipped with these skills, you can bring your chatbot ideas to life. Happy coding!
Frequently Asked Questions
Q. What is the Gemini Pro API?
A. The Gemini Pro API is a natural language processing (NLP) service developed by Google. It empowers developers to build intelligent conversational AI applications by combining advanced language models with their own application logic.
Q. How do I get a free Gemini Pro API key?
A. To get a free Gemini Pro API key, follow the step-by-step guide in this tutorial. It includes instructions on generating the API key.
Q. What can I build with the Gemini Pro API?
A. The Gemini Pro API allows you to build smart chatbots capable of engaging in conversations and answering user questions. Its versatile nature enables applications in various domains, providing natural and intuitive interactions.
Q. Why create a virtual environment for chatbot development?
A. Creating a virtual environment is crucial for chatbot development as it ensures a clean and isolated space for your project. This helps manage dependencies, avoid conflicts, and maintain a structured development environment.
Q. Can the Gemini Pro API be used for real-time applications?
A. Yes, the Gemini Pro API is designed to be efficient and can be used for real-time applications. Its scalability and performance make it well suited for handling dynamic user interactions in real-time scenarios.