Planning a trip can be challenging these days. With so many choices for flights, hotels, and activities, travelers often find it difficult to pick the best options. Our Yatra Sevak.AI chatbot is here to help. Imagine having a personal travel assistant at your fingertips: someone who can book flights, find great hotels, recommend local attractions, and offer travel advice. Thanks to advanced AI, this is now possible.
This article shows how to build a smart travel assistant chatbot using Mistral AI, LangChain, Hugging Face, and Streamlit. The explanation covers how these technologies work together to create a chatbot that acts like a knowledgeable friend guiding you through your travel plans. Discover how AI can make travel planning easier and more enjoyable for everyone.
Learning Objectives
Learn how to build a comprehensive travel assistant chatbot using Hugging Face, LangChain, and open-source models without relying on paid APIs.
Learn how to seamlessly integrate Hugging Face models into a Streamlit application for interactive user experiences.
Master the art of crafting effective prompts to optimize chatbot performance in travel planning and advisory roles.
Develop an AI-powered chatbot platform enabling seamless, anytime trip planning to save users time and money while providing transparent cost-saving insights.
How Can Travel Assistants Revolutionize the Travel Industry?
Weather-based Recommendations: AI chatbots suggest alternative plans in case of adverse weather conditions at the destination, allowing users to adjust their schedule promptly.
Gamification and Engagement: AI chatbots incorporate travel quizzes, loyalty rewards, and interactive guides to enhance the travel planning experience with enjoyable and engaging elements.
Crisis Management and Real-Time Updates: Chatbots offer immediate assistance during travel disruptions and provide timely updates, a capability that traditional services often struggle to deliver.
Multilingual Support and Cultural Sensitivity: Chatbots communicate in multiple languages and provide culturally relevant advice, catering effectively to international travelers better than traditional websites.
Instant Trip Adjustment: Users can instantly change their trip itinerary based on their requirements, facilitated by AI chatbots' dynamic response capabilities.
Continuous Advisor Presence: Chatbots ensure an always-on advisory presence throughout the trip, offering guidance and support whenever needed.
What is Hugging Face?
Hugging Face is an open-source platform for machine learning and natural language processing. It offers tools for creating, training, and deploying models, and hosts thousands of pre-trained models for tasks like computer vision, audio analysis, and text summarization. With over 30,000 datasets available, developers can train AI models and share their code within the community. Users can also showcase their projects through ML demo apps called Spaces, promoting collaboration and sharing in the AI community.
What is LangChain?
LangChain is an open-source framework for building applications based on large language models. It provides modular components for creating complex workflows, tools for efficient data handling, and support for integrating additional tools and libraries. LangChain makes it easy for developers to build, customize, and deploy LLM-powered applications.
For example, in the Yatra Sevak.AI chatbot application, LangChain makes it easier to connect and use models from platforms like Hugging Face. By setting clear instructions and connecting different parts, developers can efficiently handle user questions about booking flights, hotels, and rental cars, and provide travel tips. This makes the chatbot faster and more accurate, and speeds up development by using pre-trained models effectively.
What is Mistral AI?
Mistral AI is a cutting-edge platform specializing in large language models (LLMs). These models excel across multiple languages such as English, French, Italian, German, and Spanish, and demonstrate robust capabilities in handling code. They offer large context windows, native function-calling capabilities, and JSON outputs, making them versatile and suitable for a wide range of applications.
Architectural Details of Mistral-7B
Mistral-7B is a decoder-only Transformer with the following architectural choices:
Sliding Window Attention: Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens.
Beyond Mistral-7B, Mistral AI offers larger sparse Mixture-of-Experts models and an embedding model:
Mixtral 8x7B: a 7B sparse Mixture-of-Experts model with 12.9B active parameters (45B total), aimed at cost-efficient reasoning and low-latency workloads.
Mixtral 8x22B: a 22B sparse Mixture-of-Experts model with 39B active parameters (141B total), aimed at top-tier reasoning and high-complexity tasks.
Mistral Embed: state-of-the-art semantic text representation extraction.
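For quick experimentation, these hosted instruct models can also be queried directly through the Hugging Face Inference API before wiring them into LangChain. A minimal sketch, assuming the model is available to your token through the serverless Inference API (the token value and prompt below are placeholders):
from huggingface_hub import InferenceClient

# Placeholder token; in practice load it from an environment variable.
client = InferenceClient(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    token="hf_xxx",
)
reply = client.text_generation(
    "Suggest three must-see attractions in Paris.",
    max_new_tokens=200,
)
print(reply)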
Workflow of Yatra Sevak.AI
User Interaction: The user interacts with the Streamlit frontend to input queries.
Chat Handling Logic: The application captures the user's input, updates the session state, and adds the input to the chat history.
Response Generation (LangChain Integration):
The get_response function sets up the Hugging Face endpoint and uses LangChain tools to format and interpret the responses.
LangChain's ChatPromptTemplate and StrOutputParser are used to format the prompt and parse the output.
API Interaction: The application retrieves the API token from environment variables and interacts with Hugging Face’s API to generate text responses with the Mistral AI model.
Generate Response: The response is generated using the Hugging Face model invoked through LangChain.
Send Response Back: The generated response is appended to the chat history and displayed on the frontend.
Streamlit Frontend: The frontend is updated to show the AI’s response, completing the interaction cycle.
Steps to Build a Travel Assistant LLM Chatbot (Yatra Sevak.Ai)
Let us now build a travel assistant LLM Chatbot by following the steps given below.
Step 1: Importing Required Libraries
Before diving into coding, ensure your environment is ready:
Create a requirements.txt file and install the required libraries with the command: pip install -r requirements.txt
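A minimal requirements.txt for this project might look like the following; the package list is inferred from the imports used later, so adjust and pin versions as needed:
streamlit
python-dotenv
langchain
langchain-core
langchain-community
huggingface_hub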
Create an app.py file in your project directory and import the necessary libraries.
import os
import streamlit as st
from dotenv import load_dotenv
from langchain_core.messages import AIMessage, HumanMessage
from langchain_community.llms import HuggingFaceEndpoint
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
os: Provides a way to interact with the operating system, facilitating tasks like handling environment variables.
streamlit: Used to create interactive web applications for machine learning and data science.
load_dotenv: Loads environment variables from a .env file, enhancing security by keeping sensitive information separate from the code.
from langchain_core.messages import AIMessage, HumanMessage: These classes facilitate structured message handling within the chatbot application, ensuring clear communication between the AI and users.
from langchain_community.llms import HuggingFaceEndpoint: This class integrates with Hugging Face’s models and APIs within the LangChain framework.
from langchain_core.output_parsers import StrOutputParser: This component parses and processes textual output from the chatbot’s responses.
from langchain_core.prompts import ChatPromptTemplate: Defines templates or formats for prompting the AI model with user queries.
Step 2: Setting Up Environment and API Token
Process of Accessing Hugging Face API:
Log in to your Hugging Face account.
Navigate to your account settings.
Generate API Token: If you haven't already, generate an API token by following the steps above. This token authenticates your application when it interacts with Hugging Face's APIs.
Set Up .env File: Create a .env file in your project directory to securely store sensitive information such as API tokens. Use a text editor to create and edit this file.
# After importing all libraries and setting up the environment, add the following line to app.py.
load_dotenv() ## Load environment variables from .env file
load_dotenv(): Loads environment variables from a .env file located in the project directory.
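The get_response function defined later references an api_token variable. A minimal sketch of how it could be defined, assuming your .env file stores the token under the name HUGGINGFACEHUB_API_TOKEN (the same name used later when deploying to Hugging Face Spaces):
# Read the Hugging Face API token that load_dotenv() made available
api_token = os.getenv("HUGGINGFACEHUB_API_TOKEN")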
Step 3: Configuring Model and Task
# Define the repository ID and task
repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
task = "text-generation"
In this section, we define the model and task for our chatbot. The repo_id specifies the particular model we are using, in this case "mistralai/Mixtral-8x7B-Instruct-v0.1".
You can customize this to different models that best fit the specific needs of your chatbot application.
The task defines the specific task the chatbot performs with the model (text-generation for generating text responses).
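The template string passed to ChatPromptTemplate.from_template on the next line is not reproduced in full in this article. A minimal sketch, assuming placeholders named chat_history and user_question to match the keys passed to chain.invoke later, could look like this:
# Illustrative prompt template; expand the instructions to suit your own use case.
template = """
You are Yatra Sevak.AI, a helpful travel assistant. Answer questions about
flights, hotels, car rentals, attractions, and general travel advice.

Chat history: {chat_history}

User question: {user_question}
"""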
prompt = ChatPromptTemplate.from_template(template)
# Function to get a response from the model
def get_response(user_query, chat_history):
    # Initialize the Hugging Face Endpoint
    llm = HuggingFaceEndpoint(
        huggingfacehub_api_token=api_token,
        repo_id=repo_id,
        task=task
    )
    chain = prompt | llm | StrOutputParser()
    response = chain.invoke({
        "chat_history": chat_history,
        "user_question": user_query,
    })
    return response
get_response function: It is the core of Yatra Sevak.AI’s response generation process.
Initialization: Yatra Sevak.AI connects to Hugging Face’s models using credentials (api_token) and specifies the model details (repo_id and task) for text generation.
Interaction Flow: Using LangChain’s tools (ChatPromptTemplate and StrOutputParser), it manages user queries (user_question) and keeps track of conversation history (chat_history).
Response Generation: By invoking the model, Yatra Sevak.AI processes user inputs to generate clear and helpful responses, improving interaction for travel-related queries.
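As a quick sanity check, the function can be called directly before wiring up the Streamlit interface; the query below and the empty chat history are purely illustrative:
# Example call with an empty chat history
print(get_response("Plan a 3-day itinerary for Jaipur.", chat_history=[]))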
Step 7: Managing Chat History
# Initialize session state.
if "chat_history" not in st.session_state:
    st.session_state.chat_history = [
        AIMessage(content="Hello, I am Yatra Sevak.AI. How can I help you?"),
    ]

# Display chat history.
for message in st.session_state.chat_history:
    if isinstance(message, AIMessage):
        with st.chat_message("AI"):
            st.write(message.content)
    elif isinstance(message, HumanMessage):
        with st.chat_message("Human"):
            st.write(message.content)
Initializes and manages the chat history within Streamlit’s session state, displaying AI and human messages in the user interface.
Step 8: Handling User Input and Displaying Responses
# User input
user_query = st.chat_input("Type your message here...")
if user_query is not None and user_query != "":
    st.session_state.chat_history.append(HumanMessage(content=user_query))

    with st.chat_message("Human"):
        st.markdown(user_query)

    response = get_response(user_query, st.session_state.chat_history)

    # Remove any unwanted prefixes the model may prepend to its reply.
    response = response.replace("AI response:", "").replace("chat response:", "").replace("bot response:", "").strip()

    with st.chat_message("AI"):
        st.write(response)

    st.session_state.chat_history.append(AIMessage(content=response))
The travel assistant chatbot application is now ready!
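To try the application locally, run it with Streamlit from the project directory:
streamlit run app.py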
Complete Code Repository
Explore the Yatra Sevak.AI application on GitHub here. The link gives you access to the full code; feel free to explore and use it as needed.
Steps to Deploy the Travel Assistant Chatbot Application on Hugging Face Spaces
Step 1: Navigate to the Hugging Face Spaces dashboard.
Step 2: Create a new Space.
Step 3: Configure environment variables.
Click on Settings.
Click New Secret and add a secret named HUGGINGFACEHUB_API_TOKEN with your API token as its value.
Step 4: Upload your project repository.
Upload all the project files in the Files section of the Space.
Commit the changes to deploy on the Space.
Step 5: The Travel Assistant Chatbot application is now deployed on Hugging Face Spaces!
Conclusion
In this article, we explored how to build a travel assistant chatbot (Yatra Sevak.AI) using Hugging Face, LangChain, and other advanced technologies. From setting up the environment and integrating Hugging Face models to defining prompts and deploying on Hugging Face Spaces, we covered all the essential steps. With Yatra Sevak.AI, you now have a powerful tool to enhance travel planning through AI-driven assistance.
Key Takeaways
Learn to build a powerful language model chatbot using Hugging Face endpoints without relying on costly APIs, empowering cost-effective AI integration.
Learn how to integrate Hugging Face endpoints to effortlessly incorporate their diverse range of pre-trained models into your applications.
Master the art of crafting effective prompts using templates to build versatile chatbot applications across different domains.
Q1. How does integrating Mistral AI’s models with LangChain benefit the performance of a travel assistant chatbot?
A. Integrating Mistral AI’s models with LangChain boosts the chatbot’s performance by utilizing advanced functionalities like extensive context windows and optimized attention mechanisms. This integration accelerates response times and enhances the accuracy of handling intricate travel inquiries, thereby elevating user satisfaction and interaction quality.
Q2. What role does LangChain play in developing a travel assistant chatbot?
A. LangChain provides a framework for building applications with large language models (LLMs). It offers tools like ChatPromptTemplate for crafting prompts and StrOutputParser for processing model outputs. LangChain simplifies the integration of Hugging Face models into your chatbot, enhancing its functionality and performance.
Q3. Why is it beneficial to deploy chatbots on Hugging Face Spaces?
A. Hugging Face Spaces provides a collaborative platform where developers can deploy, share, and iterate on chatbot applications, fostering innovation and community-driven improvements.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
My name is Chirag Solanki, and I am pursuing my Bachelor's in Artificial Intelligence & Data Science from India 🎓. I am a Full Stack Data Science Enthusiast and passionate about Open Source 💻. I have the skills to build innovative projects in Machine Learning, Computer Vision, Natural Language Processing, and Power-BI. I enjoy creating blogs and articles on the Data Science domain. I believe in learning in public and love meeting awesome people from around the globe.