In the era of AI, chatbots have changed how we interact with technology, and one of their most impactful uses is in healthcare, where they can deliver fast, accurate information and help people manage their health more effectively. In this article, we'll build a medical chatbot using Gemini 2.0, Flask, HTML, and Bootstrap: a personalized, user-friendly platform that answers health-related queries with accuracy and speed.
Announced in December 2024, Gemini 2.0 is the latest iteration of Google’s large language model (LLM) series, developed by Google DeepMind. It introduces several key enhancements, including multimodal output, native tool use, and agentic abilities, positioning it as a versatile AI model for diverse applications.
Building on its predecessor, Gemini 1.5, Gemini 2.0 extends the capability to process and generate text, images, video, and audio. It adds native image creation and multilingual text-to-speech outputs for more natural, interactive user experiences.
One of the most outstanding features of Gemini 2.0 is its agentic AI, which allows the system to plan and execute tasks independently. Experimental projects like Project Astra demonstrate this capability by integrating with Google services such as Search and Maps to provide real-time, contextual assistance. Another example is Project Mariner, a Chrome extension that navigates the web autonomously to perform tasks such as online shopping.
Gemini 2.0 is available in several versions, each tailored for specific use cases. This tutorial uses the experimental Gemini 2.0 Flash model (gemini-2.0-flash-exp), the variant optimized for fast, low-latency responses.
Meta (formerly Facebook) developed FAISS as an open-source library for efficient similarity search and clustering of dense vectors. It is widely used in machine learning, especially for large-scale vector search and nearest-neighbor retrieval. FAISS is optimized for high-dimensional data, making it ideal for applications such as recommendation systems, natural language processing, and image retrieval.
In a nutshell, FAISS indexes dense vectors and supports fast approximate or exact search over them. It uses techniques such as product quantization, HNSW (Hierarchical Navigable Small World graphs), and IVF (Inverted File Index) to balance the trade-off between speed and accuracy, dramatically reducing computational cost and memory usage while keeping search results precise. FAISS also supports both CPU and GPU execution, making it suitable for datasets containing millions or even billions of vectors.
One of FAISS’s key strengths is its versatility. It provides multiple indexing strategies, enabling users to choose the most appropriate approach for their specific use cases. For example, flat indexes offer exact search capabilities, while quantization-based indexes prioritize efficiency. Its Python and C++ APIs make it accessible to a wide range of developers, and its modular design allows for easy integration into existing machine learning pipelines.
Below is the flow diagram:
This workflow ensures smooth user interaction, efficient error handling, and accurate response generation using the Gemini Model for a seamless medical chatbot experience.
Begin by installing the required dependencies, configuring the API key, and setting up the frontend to prepare your environment for the medical chatbot.
Install the dependencies listed in requirements.txt:
pip install -r https://raw.githubusercontent.com/Gouravlohar/Medical-Chatbot/refs/heads/master/requirements.txt
API Key
Get your Gemini 2.0 API key from Google AI Studio.
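The back-end code later in this article reads the key from the GOOGLE_API_KEY environment variable, so export it in your shell rather than hard-coding it. A minimal sketch of the pattern, with a placeholder value:

```python
import os

# Set the key for the current process; in practice, export it in your shell
# (e.g. `export GOOGLE_API_KEY=...`) or load it from a .env file instead.
os.environ["GOOGLE_API_KEY"] = "your-api-key-here"  # placeholder value

api_key = os.getenv("GOOGLE_API_KEY")
if not api_key:
    raise RuntimeError("GOOGLE_API_KEY is not set")
print("Key loaded:", api_key[:4] + "...")  # never log the full key
```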
This HTML code forms the front-end user interface of the medical chatbot. It creates an interactive web page where users can upload PDF files to build the knowledge base, type health-related questions, and view the chatbot's answers in a chat-style layout.
The interface uses Bootstrap for styling and jQuery for handling user interactions dynamically. It includes features like a typing indicator for the chatbot and seamless message display. The code integrates with a Flask back-end to process user inputs and return AI-generated responses.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="Demonstration of Gemini API in a Python Flask Application.">
<title>Medical Chatbot</title>
<link rel="shortcut icon" type="image/x-icon" href="{{ url_for('static', filename='images/iba_logo.png') }}">
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600&display=swap" rel="stylesheet">
<style>
:root {
--primary-gradient: linear-gradient(135deg, #6e8efb, #4834d4);
--chat-bg: #111827;
--message-bg: #1f2937;
--user-message-bg: #3730a3;
--text-primary: #fff;
--text-secondary: #9ca3af;
}
body {
font-family: 'Inter', sans-serif;
background-color: var(--chat-bg);
color: var(--text-primary);
min-height: 100vh;
display: flex;
flex-direction: column;
}
.chat-container {
max-width: 1200px;
margin: 0 auto;
padding: 2rem;
flex: 1;
display: flex;
flex-direction: column;
}
.title {
text-align: center;
margin-bottom: 1rem;
font-size: 2rem;
font-weight: 600;
color: var(--text-primary);
}
.warning {
text-align: center;
margin-bottom: 2rem;
font-size: 1rem;
color: var(--text-secondary);
}
.messages-container {
flex: 1;
overflow-y: auto;
padding: 1rem;
scroll-behavior: smooth;
}
.message {
margin-bottom: 1rem;
opacity: 0;
transform: translateY(20px);
animation: fadeIn 0.3s ease forwards;
}
.message-content {
padding: 1rem;
border-radius: 1rem;
max-width: 80%;
}
.user-message .message-content {
background: var(--user-message-bg);
margin-left: auto;
}
.bot-message .message-content {
background: var(--message-bg);
}
.input-container {
padding: 1rem;
background: var(--chat-bg);
border-top: 1px solid rgba(255, 255, 255, 0.1);
}
.chat-input {
background: var(--message-bg);
border: none;
border-radius: 1.5rem;
padding: 1rem 1.5rem;
color: var(--text-primary);
width: calc(100% - 120px);
}
.send-button {
background: var(--primary-gradient);
border: none;
border-radius: 1.5rem;
padding: 1rem 2rem;
color: white;
font-weight: 600;
transition: all 0.3s ease;
}
.send-button:hover {
transform: translateY(-2px);
box-shadow: 0 5px 15px rgba(110, 142, 251, 0.4);
}
.typing-indicator {
display: flex;
gap: 0.5rem;
padding: 1rem;
background: var(--message-bg);
border-radius: 1rem;
width: fit-content;
}
.typing-dot {
width: 8px;
height: 8px;
background: var(--text-secondary);
border-radius: 50%;
animation: typing 1.4s infinite ease-in-out;
}
.typing-dot:nth-child(2) {
animation-delay: 0.2s;
}
.typing-dot:nth-child(3) {
animation-delay: 0.4s;
}
@keyframes typing {
0%,
100% {
transform: translateY(0);
}
50% {
transform: translateY(-10px);
}
}
@keyframes fadeIn {
to {
opacity: 1;
transform: translateY(0);
}
}
/* Message Formatting */
.bot-message strong {
color: #818cf8;
font-weight: 600;
}
.bot-message ul {
padding-left: 1.5rem;
margin: 0.5rem 0;
}
</style>
</head>
<body>
<div class="chat-container">
<div class="title">Welcome to Medical Chatbot</div>
<div class="warning">Note: This is an AI chatbot and may make mistakes. Please verify the information provided.</div>
{% with messages = get_flashed_messages() %}
{% if messages %}
<div class="alert alert-info" role="alert">
{{ messages[0] }}
</div>
{% endif %}
{% endwith %}
<form id="upload-form" method="post" enctype="multipart/form-data" action="/upload">
<div class="mb-3">
<label for="pdf_files" class="form-label">Upload PDF files</label>
<input class="form-control" type="file" id="pdf_files" name="pdf_files" multiple>
</div>
<button type="submit" class="btn btn-primary">Upload PDFs</button>
</form>
<div class="messages-container" id="messages-container">
<!-- Messages will be appended here -->
</div>
<form id="chat-form" method="post">
<div class="input-container">
<input type="text" class="chat-input" id="chat-input" name="prompt" placeholder="Type your message...">
<button type="submit" class="send-button" id="send-button">Send</button>
</div>
</form>
</div>
<script src="https://code.jquery.com/jquery-3.6.3.min.js"></script>
<script>
$(document).ready(function () {
$("#chat-form").submit(function (event) {
event.preventDefault();
var question = $("#chat-input").val();
if (question.trim() === "") return;
let userMessage = `
<div class="message user-message">
<div class="message-content">
${question}
</div>
</div>`;
$("#messages-container").append(userMessage);
$("#chat-input").val("");
let typingIndicator = `
<div class="message bot-message typing-indicator">
<div class="typing-dot"></div>
<div class="typing-dot"></div>
<div class="typing-dot"></div>
</div>`;
$("#messages-container").append(typingIndicator);
$.ajax({
type: "POST",
url: "/ask",
data: {
'prompt': question
},
success: function (data) {
$(".typing-indicator").remove();
let cleanedData = data
.replace(/\*\*(.*?)\*\*/g, "<strong>$1</strong>")
.replace(/- (.*?)(?=\n|$)/g, "<li>$1</li>")
.replace(/\n/g, "<br>");
let botMessage = `
<div class="message bot-message">
<div class="message-content">
${cleanedData}
</div>
</div>`;
$("#messages-container").append(botMessage);
}
});
});
});
</script>
</body>
</html>
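The success handler in the script above converts the model's lightweight markdown (bold markers, bullet lines, newlines) into HTML with three regex replacements. The same cleanup could equally be done server-side before returning the response; a rough Python equivalent (note that the bullet pattern runs before newlines become `<br>`, so list items don't swallow the break tags):

```python
import re

def clean_markdown(text: str) -> str:
    """Mirror the front-end regexes: **bold**, bullet lines, line breaks."""
    text = re.sub(r"\*\*(.*?)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"- (.*?)(?=\n|$)", r"<li>\1</li>", text)
    text = text.replace("\n", "<br>")
    return text

print(clean_markdown("**Tension** headache:\n- rest\n- hydration"))
# <strong>Tension</strong> headache:<br><li>rest</li><br><li>hydration</li>
```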
Imagine being able to upload a few PDFs and instantly ask questions about their content, receiving precise, AI-generated answers in seconds. This is the promise of a document-powered AI question-answering system. By combining the power of AI models like Gemini, document embedding techniques, and a Flask-based web interface, you can create an intelligent tool capable of understanding, processing, and responding to user queries based on uploaded documents. Below we’ll walk you through the steps to build such a system, from setting up the environment to implementing advanced features like similarity search and real-time responses.
Begin by importing necessary libraries and modules, such as Flask for the web application, Google Generative AI for model integration, and LangChain for document handling and vector store management.
from flask import Flask, render_template, request, redirect, url_for, flash
import google.generativeai as genai
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
import os
import logging
import pickle
Set up the Flask app, configure key settings like the upload folder for PDFs, and define a secret key for session management.
app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = 'uploads'
app.secret_key = 'supersecretkey'
os.makedirs(app.config['UPLOAD_FOLDER'], exist_ok=True)
Configure logging to capture important information and errors, ensuring smooth debugging and monitoring during the app’s operation.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
Initialize the Gemini AI model and configure it with your API key to enable interaction with the model for generating content.
model = genai.GenerativeModel('gemini-2.0-flash-exp')
my_api_key_gemini = os.getenv('GOOGLE_API_KEY')
genai.configure(api_key=my_api_key_gemini)
Set up a vector store to hold document embeddings, loading it from disk if it already exists, so you can perform efficient document similarity searches later.
vector_store = None
# Load existing vector store if available
if os.path.exists('vector_store.pkl'):
    with open('vector_store.pkl', 'rb') as f:
        vector_store = pickle.load(f)
If the file exists, the app loads the vector store with pickle.load(f). The store holds document embeddings (numerical representations of the documents) that enable efficient similarity search.
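To build intuition for what the vector store computes: each document chunk is stored as a dense vector, and a query is answered by returning the stored vectors closest to the query vector. A brute-force NumPy sketch of that similarity search, with toy three-dimensional "embeddings" (real embeddings have hundreds of dimensions):

```python
import numpy as np

# Toy "embeddings": four stored document vectors and one query vector
docs = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.8, 0.2, 0.1],   # doc 1
    [0.0, 0.1, 0.9],   # doc 2
    [0.1, 0.0, 0.8],   # doc 3
], dtype=np.float32)
query = np.array([1.0, 0.0, 0.0], dtype=np.float32)

# Cosine similarity: dot product of L2-normalised vectors
docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
query_n = query / np.linalg.norm(query)
scores = docs_n @ query_n

top = np.argsort(scores)[::-1][:2]  # two most similar documents
print(top)  # [0 1] -- docs 0 and 1 point in nearly the same direction
```

FAISS performs the same kind of search, just with index structures that make it fast at scale.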
Define a handler for 404 errors to redirect users to the home page when they access a non-existent route.
@app.errorhandler(404)
def page_not_found(e):
    return redirect(url_for('index'))
If a user accesses a non-existent page (i.e., a 404 error occurs), the system redirects them to the home page (index).
Create the route for the main page of the web application, rendering the initial HTML template to the user.
@app.route('/')
def index():
    return render_template('index.html')
Implement the file upload route, allowing users to upload PDF files, process them, and convert them into embeddings for the vector store.
@app.route('/upload', methods=['POST'])
def upload():
    global vector_store
    try:
        if 'pdf_files' not in request.files:
            flash("No file part")
            return redirect(url_for('index'))

        files = request.files.getlist('pdf_files')
        documents = []
        for file in files:
            if file.filename == '':
                flash("No selected file")
                return redirect(url_for('index'))
            file_path = os.path.join(app.config['UPLOAD_FOLDER'], file.filename)
            file.save(file_path)
            pdf_loader = PyPDFLoader(file_path)
            documents.extend(pdf_loader.load())

        # Create embeddings using HuggingFaceEmbeddings
        embeddings = HuggingFaceEmbeddings()
        if vector_store is None:
            # Create a new vector store if it doesn't exist
            vector_store = FAISS.from_documents(documents, embeddings)
        else:
            # Add new documents to the existing vector store
            vector_store.add_documents(documents)

        # Save the updated vector store
        with open('vector_store.pkl', 'wb') as f:
            pickle.dump(vector_store, f)

        flash("PDFs uploaded and processed successfully. The knowledge base is ready.")
        return redirect(url_for('index'))
    except Exception as e:
        logger.error("An error occurred while processing the PDFs: %s", e)
        flash("An error occurred while processing the PDFs.")
        return redirect(url_for('index'))
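The save-on-upload, load-if-exists persistence pattern used in this route can be sketched with any picklable object; in this standalone sketch a plain dict stands in for the FAISS store and a temporary directory for the project folder:

```python
import os
import pickle
import tempfile

# A dict stands in for the vector store in this sketch
store = {"doc1": [0.1, 0.2], "doc2": [0.3, 0.4]}
path = os.path.join(tempfile.mkdtemp(), "vector_store.pkl")

# Save after processing an upload...
with open(path, "wb") as f:
    pickle.dump(store, f)

# ...and load on startup, but only if the file exists
loaded = None
if os.path.exists(path):
    with open(path, "rb") as f:
        loaded = pickle.load(f)

print(loaded == store)  # True
```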
Implement the question-answering route, where users can input questions, and the app retrieves relevant documents and generates AI-powered responses based on the content.
@app.route('/ask', methods=['POST'])
def ask():
    global vector_store
    if vector_store is None:
        return "Knowledge base is not ready. Please upload PDFs first."

    question = request.form['prompt']

    # Retrieve relevant documents based on the question
    relevant_docs = vector_store.similarity_search(question)
    context = " ".join([doc.page_content for doc in relevant_docs])

    custom_prompt = f"You are the best doctor. Only provide medical-related answers. Context: {context} Question: {question}"
    response = model.generate_content(custom_prompt)

    if response.text:
        return response.text
    else:
        return "Sorry, but I think Gemini didn't want to answer that!"
Finally, run the Flask app in debug mode to start the web application and make it accessible for users to interact with.
if __name__ == '__main__':
    app.run(debug=True)
Get the code on GitHub here.
PDF used for testing: link.
Prompt
How many types of Headache?
After uploading the PDF, the system provides a response drawn directly from its content.
In this blog, we discussed how to create a Flask-based web application that builds a knowledge base from uploaded PDFs. By integrating a generative model (Google Gemini) with LangChain and FAISS vector search, the application lets users ask medical questions and receive contextually relevant answers drawn from the uploaded documents. Such a system shows how AI, combined with modern web development tools, can automate information retrieval and deliver an intelligent, interactive experience.
By walking through the structure of this code, from file uploading to question answering, we saw how a basic Flask app can be extended with powerful AI capabilities. Whether you are building a knowledge management system or designing a chatbot, the same technologies will get you underway.
A. The /ask route allows users to submit questions. The app then uses the uploaded PDFs to find relevant information and generates a response using Google’s Gemini AI model.
A. The application uses PyPDFLoader to extract text from uploaded PDFs. This text is then embedded into vectors using HuggingFaceEmbeddings, and stored in a FAISS vector store for fast similarity searches.
A. Yes, you can adapt the app to various domains. By changing the prompt, you can customize the question-answering functionality to match different fields, such as legal, educational, or technical.
A. The vector store is saved as a .pkl file using Python’s pickle module. The app checks for the file’s existence on startup and loads it if available, ensuring that previously uploaded documents persist across sessions.
A. You need Python and Flask installed, along with dependencies like google.generativeai, langchain, FAISS, and HuggingFaceEmbeddings. You also need an API key for Google’s Gemini model. Make sure to set up a virtual environment to manage the dependencies.