Planning a trip can be challenging these days. With so many choices for flights, hotels, and activities, travelers often find it difficult to pick the best options. Our Yatra Sevak.AI chatbot is here to help. Imagine having a personal travel assistant at your fingertips: someone who can book flights, find great hotels, recommend local attractions, and offer travel advice. Thanks to advanced AI, this is now possible.
This article shows how to build a smart Travel Assistant Chatbot using MistralAI, Langchain, Hugging Face, and Streamlit. The explanation covers how these technologies work together to create a chatbot that acts like a knowledgeable friend guiding you through your travel plans. Discover how AI can make travel planning easier and more enjoyable for everyone.
This article was published as a part of the Data Science Blogathon.
Hugging Face is an open-source platform for machine learning and natural language processing. It offers tools for creating, training, and deploying models, and hosts thousands of pre-trained models for tasks like computer vision, audio analysis, and text summarization. With over 30,000 datasets available, developers can train AI models and share their code within the community. Users can also showcase their projects through ML demo apps called Spaces, promoting collaboration and sharing in the AI community.
LangChain is an open-source framework for building applications based on large language models. It provides modular components for creating complex workflows and tools for efficient data handling, and it supports integrating additional tools and libraries. LangChain makes it easy for developers to build, customize, and deploy LLM-powered applications.
For example, in the Yatra Sevak.AI chatbot application, LangChain makes it easier to connect to and use models from platforms like Hugging Face. By setting clear instructions and chaining components together, developers can efficiently handle user questions about booking flights, hotels, and rental cars, and provide travel tips. This makes the chatbot faster and more accurate, and speeds up development by using pre-trained models effectively.
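To make the chaining idea concrete, here is a plain-Python sketch of what a prompt → model → output-parser pipeline does. The functions and the stubbed model reply are illustrative stand-ins, not the LangChain API:

```python
# Plain-Python stand-ins for a LangChain chain: prompt -> llm -> parser.
def prompt(inputs):
    # Format the user's question into a prompt string
    return f"User question: {inputs['user_question']}"

def llm(text):
    # Stubbed model: a real LLM would generate this from the prompt
    return "bot response: Book flights early for better fares."

def parser(text):
    # Strip the boilerplate prefix, leaving the clean answer
    return text.replace("bot response:", "").strip()

def chain(inputs):
    # Run the three stages in sequence, like LangChain's `prompt | llm | parser`
    return parser(llm(prompt(inputs)))

answer = chain({"user_question": "Any flight tips?"})
```

Each stage transforms the previous stage's output, which is exactly the pattern the chatbot's chain uses later in this article.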
Mistral AI is a cutting-edge platform specializing in large language models (LLMs). These models excel across multiple languages such as English, French, Italian, German, and Spanish, and demonstrate robust capabilities in handling code. They offer large context windows, native function-calling capabilities, and JSON outputs, making them versatile and suitable for various applications.
Mistral-7B is a decoder-only Transformer with the following architectural choices:

- Sliding Window Attention: each layer attends to a fixed window of previous tokens (4,096), giving a large effective context at lower cost.
- Grouped-Query Attention (GQA): speeds up inference and reduces memory use during decoding.
- Byte-fallback BPE tokenizer: ensures any input string can be tokenized without out-of-vocabulary failures.
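Sliding-window attention, one of these architectural choices, limits each token to attending over the previous W positions instead of the full sequence. A minimal sketch of the resulting attention mask (an illustrative helper, not Mistral's implementation):

```python
def sliding_window_mask(seq_len, window):
    # mask[i][j] is True when position i may attend to position j:
    # causal (j <= i) and within the last `window` positions (i - j < window)
    return [[j <= i and i - j < window for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_mask(6, 3)
# With window=3, position 5 attends only to positions 3, 4, and 5
```

Stacking layers lets information still propagate beyond the window, which is how a small window yields a long effective context.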
| Mistral 7B (open source) | Mixtral 8x7B (open source) | Mixtral 8x22B (open source) | Mistral Small (optimized model) | Mistral Large (optimized model) | Mistral Embed (optimized model) |
| --- | --- | --- | --- | --- | --- |
| 7B dense transformer; fast to deploy, easily customizable | Sparse Mixture-of-Experts; 12.9B active params (45B total) | Sparse Mixture-of-Experts; 39B active params (141B total) | Cost-efficient reasoning for low-latency workloads | Top-tier reasoning for high-complexity tasks | State-of-the-art semantic text representation and extraction |
Let us now build a travel assistant LLM Chatbot by following the steps given below.
Before diving into coding, ensure your environment is ready. Add the following packages to requirements.txt:
streamlit
python-dotenv
langchain-core
langchain-community
huggingface-hub
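Install the packages with `pip install -r requirements.txt`, then create a `.env` file in the project root containing your Hugging Face token under the name `HUGGINGFACEHUB_API_TOKEN` (the variable the code below reads). The snippet sketches how the token becomes visible to the app; the value shown is a placeholder, not a real token:

```python
import os

# load_dotenv() reads .env and populates os.environ at startup.
# Simulate that here by setting the variable directly (placeholder value):
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_xxx"

# This is the lookup the app performs after load_dotenv() has run
api_token = os.getenv("HUGGINGFACEHUB_API_TOKEN")
```

Keep `.env` out of version control so the token is never committed.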
import os
import streamlit as st
from dotenv import load_dotenv
from langchain_core.messages import AIMessage, HumanMessage
from langchain_community.llms import HuggingFaceEndpoint
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
# After importing the libraries and setting up the environment, add the following to app.py.
load_dotenv() ## Load environment variables from .env file
# Define the repository ID and task
repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
task = "text-generation"
# App config
st.set_page_config(page_title="Yatra Sevak.AI", page_icon="🌍")
st.title("Yatra Sevak.AI ✈️")
# Load the Hugging Face API token from the environment
api_token = os.getenv("HUGGINGFACEHUB_API_TOKEN")

# Prompt template (example wording; adjust to taste). The {chat_history} and
# {user_question} placeholders are filled in when the chain is invoked.
template = """
You are Yatra Sevak.AI, a travel assistant chatbot. Help users with flight
bookings, hotel reservations, car rentals, destination ideas, and travel tips,
taking the conversation so far into account.

Chat history: {chat_history}

User question: {user_question}
"""
prompt = ChatPromptTemplate.from_template(template)
# Function to get a response from the model
def get_response(user_query, chat_history):
    # Initialize the Hugging Face endpoint for the hosted Mixtral model
    llm = HuggingFaceEndpoint(
        huggingfacehub_api_token=api_token,
        repo_id=repo_id,
        task=task
    )
    # Compose the chain: prompt -> model -> string output parser
    chain = prompt | llm | StrOutputParser()
    response = chain.invoke({
        "chat_history": chat_history,
        "user_question": user_query,
    })
    return response
# Initialize session state with a greeting from the assistant.
if "chat_history" not in st.session_state:
    st.session_state.chat_history = [
        AIMessage(content="Hello, I am Yatra Sevak.AI. How can I help you?"),
    ]
# Display the conversation so far.
for message in st.session_state.chat_history:
    if isinstance(message, AIMessage):
        with st.chat_message("AI"):
            st.write(message.content)
    elif isinstance(message, HumanMessage):
        with st.chat_message("Human"):
            st.write(message.content)
# Handle user input.
user_query = st.chat_input("Type your message here...")
if user_query is not None and user_query != "":
    st.session_state.chat_history.append(HumanMessage(content=user_query))
    with st.chat_message("Human"):
        st.markdown(user_query)
    response = get_response(user_query, st.session_state.chat_history)
    # Strip any boilerplate prefixes the model may prepend to its answer
    response = (
        response.replace("AI response:", "")
        .replace("chat response:", "")
        .replace("bot response:", "")
        .strip()
    )
    with st.chat_message("AI"):
        st.write(response)
    st.session_state.chat_history.append(AIMessage(content=response))
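For intuition about how the conversation history reaches the model: ChatPromptTemplate substitutes the message list into the {chat_history} slot of the template as text. A rough stand-in for that rendering (an assumed format for illustration, not LangChain's exact serialization):

```python
def format_history(messages):
    # Render (role, content) pairs as "role: content" lines, one per turn
    return "\n".join(f"{role}: {content}" for role, content in messages)

history = [("AI", "Hello, I am Yatra Sevak.AI. How can I help you?"),
           ("Human", "Suggest hotels in Goa")]
rendered = format_history(history)
```

Because the full history is re-sent on every turn, the model can resolve follow-up questions like "what about flights there?" in context.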
The travel assistant chatbot application is ready! Launch it locally with `streamlit run app.py`.
Explore the Yatra Sevak.AI application on GitHub here. Using this link, you can access the full code. Feel free to explore and use it as needed.
In this article, we explored how to build a travel assistant chatbot(Yatra Sevak.AI) using HuggingFace, LangChain, and other advanced technologies. From setting up the environment and integrating Hugging Face models to defining prompts and deploying on Hugging Face Spaces, we covered all the essential steps. With Yatra Sevak.AI, you now have a powerful tool to enhance travel planning through AI-driven assistance.
Q1. How does integrating Mistral AI's models with LangChain benefit the chatbot?

A. Integrating Mistral AI's models with LangChain boosts the chatbot's performance by utilizing advanced functionalities like extensive context windows and optimized attention mechanisms. This integration accelerates response times and enhances the accuracy of handling intricate travel inquiries, thereby elevating user satisfaction and interaction quality.
Q2. What role does LangChain play in building the chatbot?

A. LangChain provides a framework for building applications with large language models (LLMs). It offers tools like ChatPromptTemplate for crafting prompts and StrOutputParser for processing model outputs. LangChain simplifies the integration of Hugging Face models into your chatbot, enhancing its functionality and performance.
Q3. Why deploy the chatbot on Hugging Face Spaces?

A. Hugging Face Spaces provides a collaborative platform where developers can deploy, share, and iterate on chatbot applications, fostering innovation and community-driven improvements.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.