Build a Travel Assistant Chatbot with HuggingFace, LangChain, and MistralAI

Chirag Solanki Last Updated : 01 Jul, 2024
9 min read

Introduction

Planning a trip can be challenging these days. With so many choices for flights, hotels, and activities, travelers often find it difficult to pick the best options. Our Yatra Sevak.AI chatbot is here to help. Imagine having a personal travel assistant at your fingertips: someone who can book flights, find great hotels, recommend local attractions, and offer travel advice. Thanks to advanced AI, this is now possible.

This article shows how to build a smart Travel Assistant Chatbot using MistralAI, LangChain, Hugging Face, and Streamlit. It explains how these technologies work together to create a chatbot that acts like a knowledgeable friend guiding you through your travel plans, and shows how AI can make travel planning easier and more enjoyable for everyone.

Learning Objectives

  • Learn how to build a Comprehensive Travel Assistant Chatbot using HuggingFace, Langchain, and open-source models without relying on paid APIs.
  • Learn how to seamlessly integrate Hugging Face models into a Streamlit application for interactive user experiences.
  • Master the art of crafting effective prompts to optimize chatbot performance in travel planning and advisory roles.
  • Develop an AI-powered chatbot platform enabling seamless, anytime trip planning to save users time and money while providing transparent cost-saving insights.

This article was published as a part of the Data Science Blogathon.

How Can Travel Assistants Revolutionize the Travel Industry?

  • Weather-based Recommendations: AI chatbots suggest alternative plans in case of adverse weather conditions at the destination, allowing users to adjust their schedule promptly.
  • Gamification and Engagement: AI chatbots incorporate travel quizzes, loyalty rewards, and interactive guides to enhance the travel planning experience with enjoyable and engaging elements.
  • Crisis Management and Real-Time Updates: Chatbots offer immediate assistance during travel disruptions and provide timely updates, a capability that traditional services often struggle to deliver.
  • Multilingual Support and Cultural Sensitivity: Chatbots communicate in multiple languages and provide culturally relevant advice, catering effectively to international travelers better than traditional websites.
  • Instant Trip Adjustments: Users can instantly change their itinerary based on their requirements, facilitated by AI chatbots' dynamic response capabilities.
  • Continuous Advisor Presence: Chatbots ensure an always-on advisory presence throughout the trip, offering guidance and support whenever needed.

What is HuggingFace?

HuggingFace is an open-source platform for machine learning and natural language processing. It offers tools for creating, training, and deploying models, and hosts thousands of pre-trained models for tasks like computer vision, audio analysis, and text summarization. With over 30,000 datasets available, developers can train AI models and share their code within the community. Users can also showcase their projects through ML demo apps called Spaces, promoting collaboration and sharing in the AI community.

What is Langchain?

LangChain is an open-source framework for building applications based on large language models. It provides modular components for creating complex workflows, tools for efficient data handling, and support for integrating additional tools and libraries. LangChain makes it easy for developers to build, customize, and deploy LLM-powered applications.


For example, in the Yatra Sevak.AI chatbot application, LangChain makes it easier to connect and use models from platforms like Hugging Face. By setting clear instructions and connecting different parts, developers can efficiently handle user questions about booking flights, hotels, and rental cars, and provide travel tips. This makes the chatbot faster and more accurate, and speeds up development by using pre-trained models effectively.
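As a quick illustration of this modular style, here is a minimal sketch of LangChain's pipe-based composition. The chatbot's actual chain appears in Step 6; the prompt and model below are examples only, and the Hugging Face token is assumed to be available in the environment.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.llms import HuggingFaceEndpoint

# Compose prompt -> model -> string output with the pipe operator.
prompt = ChatPromptTemplate.from_template("Suggest three attractions in {city}.")
llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
    task="text-generation",
)  # picks up HUGGINGFACEHUB_API_TOKEN from the environment if set
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"city": "Jaipur"}))

Each piece (prompt template, model endpoint, output parser) can be swapped independently, which is what makes the framework easy to customize.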

What is Mistral AI?

Mistral AI is a cutting-edge platform specializing in large language models (LLMs). These models excel across multiple languages such as English, French, Italian, German, and Spanish, and demonstrate robust capabilities in handling code. They offer large context windows, native function calling, and JSON outputs, making them versatile and suitable for various applications.

Architectural Detail of Mistral-7B

Mistral-7B is a decoder-only Transformer with the following architectural choices:

  • Sliding Window Attention: Trained with an 8k context length and a fixed cache size, giving a theoretical attention span of 128K tokens.
  • Grouped Query Attention (GQA): Allows faster inference and a smaller cache size.
  • Byte-fallback BPE tokenizer: Ensures that characters are never mapped to out-of-vocabulary tokens. (These settings can be inspected directly from the model configuration, as the sketch below shows.)
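These architectural choices are recorded in the model's configuration on the Hugging Face Hub. A minimal sketch of inspecting them, assuming the transformers library is installed and the public mistralai/Mistral-7B-v0.1 checkpoint is used:

from transformers import AutoConfig

# Downloads only the config file, not the model weights.
config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
print("Sliding window size:", config.sliding_window)
print("Attention heads:", config.num_attention_heads)
print("Key/value heads (GQA):", config.num_key_value_heads)
print("Vocabulary size:", config.vocab_size)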

Types of Mistral AI Models

  • Mistral 7B (open source): 7B transformer, fast to deploy and easily customizable.
  • Mixtral 8x7B (open source): Sparse Mixture-of-Experts built from 7B experts, 12.9B active parameters (45B total).
  • Mixtral 8x22B (open source): Sparse Mixture-of-Experts built from 22B experts, 39B active parameters (141B total).
  • Mistral Small (optimized model): Cost-efficient reasoning for low-latency workloads.
  • Mistral Large (optimized model): Top-tier reasoning for high-complexity tasks.
  • Mistral Embed (optimized model): State-of-the-art semantic text-representation extraction.

Workflow of Yatra Sevak.AI

  • User Interaction: The user interacts with the Streamlit frontend to input queries.
  • Chat Handling Logic: The application captures the user’s input, updates the session state, and adds the input to the chat history.
  • Response Generation (LangChain Integration):
    • The get_response function sets up the Hugging Face endpoint and uses LangChain tools to format and interpret the responses.
    • LangChain’s ChatPromptTemplate and StrOutputParser are used to format the prompt and parse the output.
  • API Interaction: The application retrieves the API token from environment variables and interacts with Hugging Face’s API to generate text responses with the Mistral AI model.
  • Generate Response: The response is generated using the Hugging Face model invoked through LangChain.
  • Send Response Back: The generated response is appended to the chat history and displayed on the frontend.
  • Streamlit Frontend: The frontend is updated to show the AI’s response, completing the interaction cycle.

Steps to Build a Travel Assistant LLM Chatbot (Yatra Sevak.Ai)

Let us now build a travel assistant LLM Chatbot by following the steps given below.

Step 1: Importing Required Libraries

Before diving into coding, ensure your environment is ready:

  • Create a requirements.txt file and install the required libraries using the command: pip install -r requirements.txt
streamlit
python-dotenv
langchain-core
langchain-community
huggingface-hub
  • Create an app.py file in your project directory and import the necessary libraries.
import os
import streamlit as st
from dotenv import load_dotenv
from langchain_core.messages import AIMessage, HumanMessage
from langchain_community.llms import HuggingFaceEndpoint
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
  • os: Provides a way to interact with the operating system, facilitating tasks like environment-variable handling.
  • streamlit: Used to create interactive web applications for machine learning and data science.
  • load_dotenv: Loads environment variables from a .env file, enhancing security by keeping sensitive information separate.
  • from langchain_core.messages import AIMessage, HumanMessage: These classes facilitate structured message handling within the chatbot application, ensuring clear communication between the AI and users.
  • from langchain_community.llms import HuggingFaceEndpoint: This class integrates with Hugging Face’s models and APIs within the LangChain framework.
  • from langchain_core.output_parsers import StrOutputParser: This component parses and processes textual output from the chatbot’s responses.
  • from langchain_core.prompts import ChatPromptTemplate: Defines templates or formats for prompting the AI model with user queries.

Step 2: Setting Up Environment and API Token

  • Process of Accessing Hugging Face API:
    • Log in to your Hugging Face account.
    • Navigate to your account settings.
  • Generate API Token: If you haven’t already, generate an API token by following the steps above. This token authenticates your application when it interacts with Hugging Face’s APIs.
  • Set Up .env File: Create a .env file in your project directory to securely store sensitive information such as API tokens. Use a text editor to create and edit this file.
# After importing the libraries and setting up the environment, add this line to app.py.
load_dotenv()  ## Load environment variables from .env file
  • load_dotenv(): Loads environment variables from the .env file located in the project directory; the snippet below then reads the API token from those variables.
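The get_response function in Step 6 references an api_token variable. A minimal sketch of reading it from the environment, assuming the .env file stores the token under the name HUGGINGFACEHUB_API_TOKEN (the same name used later as the Space secret):

# Read the Hugging Face API token made available by load_dotenv().
# Assumes the .env file contains a line like: HUGGINGFACEHUB_API_TOKEN=hf_xxx
api_token = os.getenv("HUGGINGFACEHUB_API_TOKEN")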

Step 3: Configuring Model and Task

# Define the repository ID and task
repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
task = "text-generation"
  • In this section, we define the model and task for our chatbot. The repo_id specifies the particular model we are using, in this case “mistralai/Mixtral-8x7B-Instruct-v0.1”.
  • You can customize this to whichever model best fits the specific needs of your chatbot application (illustrative alternatives are sketched below).
  • task defines the specific task the chatbot performs with the model (text-generation for generating text responses).
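Any other text-generation model hosted on the Hugging Face Hub can be swapped in by changing repo_id. The alternatives below are illustrative only and were not tested in this article:

# Illustrative alternatives (untested here); any Hub model that supports text-generation works:
# repo_id = "mistralai/Mistral-7B-Instruct-v0.2"
# repo_id = "HuggingFaceH4/zephyr-7b-beta"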

Step 4: Streamlit Configuration

# App config
st.set_page_config(page_title="Yatra Sevak.AI", page_icon="🌍")
st.title("Yatra Sevak.AI ✈️")

Step 5: Defining the Chatbot Template

  • For optimal results, use the prompt template available in my GitHub repository to create robust prompts for your travel assistant chatbot; an illustrative stand-in is sketched below.
  • GitHub link
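The full template lives in the repository; the version below is only an illustrative stand-in. Whatever template you use must expose the {chat_history} and {user_question} placeholders, since the chain built in Step 6 fills them in:

# Illustrative prompt template (a stand-in for the full version in the repository).
template = """
You are Yatra Sevak.AI, a helpful travel assistant. Answer questions about flights,
hotels, local attractions, and general travel advice clearly and concisely.

Chat history:
{chat_history}

User question:
{user_question}
"""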

Step 6: Implementing Response Handling

prompt = ChatPromptTemplate.from_template(template)

# Function to get a response from the model
def get_response(user_query, chat_history):
    # Initialize the Hugging Face Endpoint
    llm = HuggingFaceEndpoint(
        huggingfacehub_api_token=api_token,
        repo_id=repo_id,
        task=task
    )
    chain = prompt | llm | StrOutputParser()
    response = chain.invoke({
        "chat_history": chat_history,
        "user_question": user_query,
    })
    return response
  • get_response function: The core of Yatra Sevak.AI’s response generation process.
  • Initialization: Yatra Sevak.AI connects to Hugging Face’s models using the credentials (api_token) and specifies the model details (repo_id and task) for text generation; optional generation parameters can also be tuned here, as sketched below.
  • Interaction Flow: Using LangChain’s tools (ChatPromptTemplate and StrOutputParser), it manages user queries (user_question) and keeps track of the conversation history (chat_history).
  • Response Generation: By invoking the model, Yatra Sevak.AI processes user inputs to generate clear and helpful responses, improving interaction for travel-related queries.
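HuggingFaceEndpoint also accepts generation parameters that control the length and randomness of replies. A minimal sketch with illustrative values (these are examples, not the settings used in the deployed application):

# Illustrative tuning of the endpoint inside get_response; values are examples only.
llm = HuggingFaceEndpoint(
    huggingfacehub_api_token=api_token,
    repo_id=repo_id,
    task=task,
    max_new_tokens=512,   # cap the length of each reply
    temperature=0.7,      # lower values give more focused answers
)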

Step 7: Managing Chat History

# Initialize session state.
if "chat_history" not in st.session_state:
    st.session_state.chat_history = [
        AIMessage(content="Hello, I am Yatra Sevak.AI. How can I help you?"),
    ]
# Display chat history.
for message in st.session_state.chat_history:
    if isinstance(message, AIMessage):
        with st.chat_message("AI"):
            st.write(message.content)
    elif isinstance(message, HumanMessage):
        with st.chat_message("Human"):
            st.write(message.content)
  • Initializes and manages the chat history within Streamlit’s session state, displaying AI and human messages in the user interface.

Step 8: Handling User Input and Displaying Responses

# User input
user_query = st.chat_input("Type your message here...")
if user_query is not None and user_query != "":
    st.session_state.chat_history.append(HumanMessage(content=user_query))

    with st.chat_message("Human"):
        st.markdown(user_query)

    response = get_response(user_query, st.session_state.chat_history)

    # Optionally strip prefixes the model sometimes prepends to its output
    # (e.g. "AI response:", "chat response:", "bot response:") before displaying it:
    # response = response.replace("AI response:", "").replace("chat response:", "").replace("bot response:", "").strip()

    with st.chat_message("AI"):
        st.write(response)

    st.session_state.chat_history.append(AIMessage(content=response)) 
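To try the chatbot locally, run Streamlit from the project directory:

streamlit run app.py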

The Travel Assistant Chatbot application is ready!


Complete Code Repository

Explore Yatra Sevak.AI Application on GitHub here. Using this link, you can access the full code. Feel free to explore and utilize it as needed.

Steps to Deploy Travel Assistant Chatbot Application on Hugging Face Space

  • Step 1: Navigate to the Hugging Face Spaces dashboard.
  • Step 2: Create a new Space.
  • Step 3: Configure environment variables.
    • Click on Settings.
    • Click New secret, enter HUGGINGFACEHUB_API_TOKEN as the name, and paste your API token as the value.
  • Step 4: Upload your model repository.
    • Upload all the project files in the Files section of the Space.
    • Commit the changes to deploy the application on the Space.
  • Step 5: The Travel Assistant Chatbot application is now deployed successfully on Hugging Face Spaces!

Conclusion 

In this article, we explored how to build a travel assistant chatbot (Yatra Sevak.AI) using HuggingFace, LangChain, and other advanced technologies. From setting up the environment and integrating Hugging Face models to defining prompts and deploying on Hugging Face Spaces, we covered all the essential steps. With Yatra Sevak.AI, you now have a powerful tool to enhance travel planning through AI-driven assistance.

Key Takeaways

  • Learn to build a powerful language model chatbot using Hugging Face endpoints without relying on costly APIs, empowering cost-effective AI integration.
  • Learn how to integrate Hugging Face endpoints to effortlessly incorporate their diverse range of pre-trained models into your applications.
  • Master the art of crafting effective prompts using templates to build versatile chatbot applications across different domains.


Frequently Asked Questions

Q1. How does integrating Mistral AI’s models with LangChain benefit the performance of a travel assistant chatbot?

A. Integrating Mistral AI’s models with LangChain boosts the chatbot’s performance by utilizing advanced functionalities like extensive context windows and optimized attention mechanisms. This integration accelerates response times and enhances the accuracy of handling intricate travel inquiries, thereby elevating user satisfaction and interaction quality.

Q2. What role does LangChain play in developing a travel assistant chatbot?

A. LangChain provides a framework for building applications with large language models (LLMs). It offers tools like ChatPromptTemplate for crafting prompts and StrOutputParser for processing model outputs. LangChain simplifies the integration of Hugging Face models into your chatbot, enhancing its functionality and performance.

Q3. Why is it beneficial to deploy chatbots on Hugging Face Spaces?

A. Hugging Face Spaces provides a collaborative platform where developers can deploy, share, and iterate on chatbot applications, fostering innovation and community-driven improvements.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

My name is Chirag Solanki, and I am pursuing my Bachelor's in Artificial Intelligence & Data Science from India 🎓. I am a Full Stack Data Science Enthusiast and passionate about Open Source 💻. I have the skills to build innovative projects in Machine Learning, Computer Vision, Natural Language Processing, and Power-BI. I enjoy creating blogs and articles on the Data Science domain. I believe in learning in public and love meeting awesome people from around the globe.


Flash Card

What is a Travel Assistant Chatbot?

A Travel Assistant Chatbot is like having a personal travel guide or friend who helps you plan your trips, all powered by AI. It’s a smart program that you can chat with to get advice, suggestions, and answers to your travel questions. Instead of browsing through tons of websites or doing endless searches, you can just ask the chatbot, and it will do the work for you.

What Can a Travel Assistant Chatbot Do?

  • Plan Itineraries: It helps create personalized travel plans based on what you like to do, your budget, and how long you’re staying.
  • Recommend Flights and Hotels: It finds flight options, hotel deals, and compares different choices to help you pick the best one.
  • Give Local Tips: It can tell you about popular attractions, local food spots, and cultural advice for wherever you’re going.
  • Translate Languages: If you’re going to a place where you don’t speak the language, the chatbot can translate common phrases to help you out.
  • Create Packing Lists: It helps you remember what to pack by giving you a checklist based on your destination and activities.
  • Answer FAQs: Whether it’s about visa requirements, travel rules, or the best time to visit a place, the chatbot can provide quick answers.

How Does It Work?
The chatbot uses advanced tools and models, like those from Hugging Face and Langchain, to understand what you’re asking and give helpful responses. It can handle tasks like finding flights, recommending hotels, suggesting activities, and even giving local tips—all through a simple chat interface.

Flash Card

How does Langchain facilitate the integration of Hugging Face models into applications like a travel assistant chatbot?

Langchain simplifies the process for developers to build, customize, and deploy applications powered by large language models (LLMs). It allows easy connection and utilization of models from platforms like Hugging Face. By setting clear instructions and connecting different components, Langchain helps manage user inquiries about booking flights, hotels, rental cars, and offering travel tips efficiently.

Flash Card

Why is crafting effective prompts important for a travel assistant chatbot?

Crafting effective prompts is crucial to optimize the chatbot's performance in travel planning and advisory roles. Well-designed prompts ensure that the chatbot understands user queries accurately and provides relevant, helpful responses. This enhances the user experience by making interactions with the chatbot more intuitive and productive.

Flash Card

What are the benefits of developing an AI-powered chatbot platform for travel planning?

An AI-powered chatbot platform enables seamless, anytime trip planning, saving users time and money. It provides transparent cost-saving insights, helping users make informed decisions about their travel plans. The platform enhances user convenience by offering personalized travel advice and recommendations.

Flash Card

What are the steps involved in deploying a Travel Assistant Chatbot on Hugging Face Space?

Navigate to the Hugging Face Spaces Dashboard to begin the deployment process. Create a new space for the chatbot application. Configure the necessary environment variables for the application. Upload the model repository to the created space. Commit the changes to deploy the application on HF_SPACE. Once these steps are completed, the Travel Assistant Chatbot Application is successfully deployed on HF_SPACE.

Flash Card

How does the Travel Assistant Chatbot improve user experience in travel planning?

The chatbot acts as a knowledgeable friend, providing guidance and assistance throughout the travel planning process. It offers quick and accurate responses to user queries, enhancing the efficiency of travel arrangements. By integrating various travel-related functionalities, the chatbot simplifies complex tasks like booking and itinerary management.

Flash Card

What role does Hugging Face play in the development of the Travel Assistant Chatbot?

Hugging Face provides the models and tools necessary for building the chatbot's language processing capabilities. It enables the integration of advanced AI models that understand and respond to user queries effectively. Hugging Face's resources help in creating a robust and intelligent chatbot that can handle diverse travel-related questions.

Flash Card

How does the integration of open-source models benefit the Travel Assistant Chatbot?

Open-source models offer flexibility and customization options for developers building the chatbot. They allow for continuous improvement and updates, ensuring the chatbot remains relevant and effective. Using open-source models reduces development costs while providing access to cutting-edge AI technologies.

