Build a Travel Assistant Chatbot with HuggingFace, LangChain, and MistralAI

cmsolanki 01 Jul, 2024
9 min read

Introduction

Planning a trip can be challenging these days. With so many choices for flights, hotels, and activities, travelers often find it difficult to pick the best options. Our Yatra Sevak.Ai chatbot is here to help. Imagine having a personal travel assistant at your fingertips: someone who can book flights, find great hotels, recommend local attractions, and offer travel advice. Thanks to advanced AI, this is now possible.

This article shows how to build a smart Travel Assistant Chatbot using MistralAI, Langchain, Hugging Face, and Streamlit. The explanation covers how these technologies work together to create a chatbot that acts like a knowledgeable friend guiding you through your travel plans. Discover how AI can make travel planning easier and more enjoyable for everyone.

Learning Objectives

  • Learn how to build a Comprehensive Travel Assistant Chatbot using HuggingFace, Langchain, and open-source models without relying on paid APIs.
  • Learn how to seamlessly integrate Hugging Face models into a Streamlit application for interactive user experiences.
  • Master the art of crafting effective prompts to optimize chatbot performance in travel planning and advisory roles.
  • Develop an AI-powered chatbot platform enabling seamless, anytime trip planning to save users time and money while providing transparent cost-saving insights.

This article was published as a part of the Data Science Blogathon.

How Can Travel Assistant Chatbots Revolutionize the Travel Industry?

  • Weather-based Recommendations: AI chatbots suggest alternative plans in case of adverse weather conditions at the destination, allowing users to adjust their schedule promptly.
  • Gamification and Engagement: AI chatbots incorporate travel quizzes, loyalty rewards, and interactive guides to enhance the travel planning experience with enjoyable and engaging elements.
  • Crisis Management and Real-Time Updates: Chatbots offer immediate assistance during travel disruptions and provide timely updates, a capability that traditional services often struggle to deliver.
  • Multilingual Support and Cultural Sensitivity: Chatbots communicate in multiple languages and provide culturally relevant advice, catering effectively to international travelers better than traditional websites.
  • Instant Trip Adjustments: Users can instantly change their trip itinerary based on their requirements, facilitated by AI chatbots' dynamic response capabilities.
  • Continuous Advisor Presence: Chatbots ensure an always-on advisory presence throughout the trip, offering guidance and support whenever needed.

What is HuggingFace?

HuggingFace is an open-source platform for machine learning and natural language processing. It offers tools for creating, training, and deploying models, and hosts thousands of pre-trained models for tasks like computer vision, audio analysis, and text summarization. With over 30,000 datasets available, developers can train AI models and share their code within the community. Users can also showcase their projects through ML demo apps called Spaces, promoting collaboration and sharing in the AI community.

What is Langchain?

LangChain is an open-source framework for building applications based on large language models. It provides modular components for creating complex workflows, tools for efficient data handling, and support for integrating additional tools and libraries. LangChain makes it easy for developers to build, customize, and deploy LLM-powered applications.


For example, in a Yatra Sevak.Ai chatbot application, Langchain makes it easier to connect and use models from platforms like Hugging Face. By setting clear instructions and connecting different parts, developers can efficiently handle user questions about booking flights, hotels, rental cars, and providing travel tips. This makes the chatbot faster and more accurate, speeding up development by using pre-trained models effectively.

What is Mistral AI?

Mistral AI is a cutting-edge platform specializing in large language models (LLMs). These models excel across multiple languages, including English, French, Italian, German, and Spanish, and demonstrate robust capabilities in handling code. They offer large context windows, native function-calling capacities, and JSON outputs, making them versatile and suitable for a variety of applications.

Architectural Detail of Mistral-7B

Mistral-7B is a decoder-only Transformer with the following architectural choices:

  • Sliding Window Attention: Trained with an 8k context length and a fixed cache size, yielding a theoretical attention span of 128K tokens.
  • Grouped Query Attention (GQA): Allows faster inference and a smaller cache size.
  • Byte-fallback BPE tokenizer: Ensures that characters are never mapped to out-of-vocabulary tokens.
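To make the sliding-window idea concrete, here is a small illustrative sketch (not Mistral's actual implementation) that builds a causal sliding-window attention mask in plain Python. The window size is a parameter here; each token can attend only to itself and the most recent tokens within the window.

```python
def sliding_window_mask(seq_len, window):
    """Causal sliding-window attention: token i may attend only to the
    `window` most recent tokens j satisfying i - window < j <= i.
    Returns a seq_len x seq_len matrix of 1 (attend) / 0 (masked)."""
    return [
        [1 if 0 <= i - j < window else 0 for j in range(seq_len)]
        for i in range(seq_len)
    ]

# With a window of 3, token 4 sees tokens 2..4 but not 0 or 1. Information
# from token 0 can still reach it indirectly through intermediate layers,
# which is why the *theoretical* attention span far exceeds the window.
mask = sliding_window_mask(seq_len=5, window=3)
for row in mask:
    print(row)
```

Stacking many such layers is what lets a fixed-size window cover a much longer effective context.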

Types of Mistral AI Models

  • Mistral 7B (open source): A 7B transformer; fast to deploy and easily customizable.
  • Mixtral 8x7B (open source): A sparse Mixture-of-Experts model with 12.9B active parameters (45B total).
  • Mixtral 8x22B (open source): A sparse Mixture-of-Experts model with 39B active parameters (141B total).
  • Mistral Small (optimized model): Cost-efficient reasoning for low-latency workloads.
  • Mistral Large (optimized model): Top-tier reasoning for high-complexity tasks.
  • Mistral Embed (optimized model): State-of-the-art semantic text representation extraction.

Workflow of Yatra Sevak.AI

  • User Interaction: The user interacts with the Streamlit frontend to input queries.
  • Chat Handling Logic: The application captures the user's input, updates the session state, and adds the input to the chat history.
  • Response Generation (LangChain Integration):
    • The get_response function sets up the Hugging Face endpoint and uses LangChain tools to format and interpret the responses.
    • LangChain's ChatPromptTemplate and StrOutputParser are used to format the prompt and parse the output.
  • API Interaction: The application retrieves the API token from environment variables and interacts with Hugging Face’s API to generate text responses with the Mistral AI model.
  • Generate Response: The response is generated using the Hugging Face model invoked through LangChain.
  • Send Response Back: The generated response is appended to the chat history and displayed on the frontend.
  • Streamlit Frontend: The frontend is updated to show the AI’s response, completing the interaction cycle.

Steps to Build a Travel Assistant LLM Chatbot (Yatra Sevak.Ai)

Let us now build a travel assistant LLM Chatbot by following the steps given below.

Step1: Importing Required Libraries

Before diving into coding, ensure your environment is ready:

  • Create a requirements.txt file and install the required libraries using the command: pip install -r requirements.txt
streamlit
python-dotenv
langchain-core
langchain-community
huggingface-hub
  • Create an app.py file in your project directory and import the necessary libraries.
import os
import streamlit as st
from dotenv import load_dotenv
from langchain_core.messages import AIMessage, HumanMessage
from langchain_community.llms import HuggingFaceEndpoint
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
  • import os: Provides a way to interact with the operating system, facilitating tasks like environment variable handling.
  • Streamlit is used to create interactive web applications for machine learning and data science.
  • load_dotenv Allows loading environment variables from a .env file, enhancing security by keeping sensitive information separate.
  • from langchain_core.messages import AIMessage, HumanMessage: These classes facilitate structured message handling within the chatbot application, ensuring clear communication between the AI and users.
  • from langchain_community.llms import HuggingFaceEndpoint: This class integrates with Hugging Face’s models and APIs within the LangChain framework.
  • from langchain_core.output_parsers import StrOutputParser: This component parses and processes textual output from the chatbot’s responses.
  • from langchain_core.prompts import ChatPromptTemplate: Defines templates or formats for prompting the AI model with user queries.

Step2: Setting Up Environment and API Token

  • Process of Accessing the Hugging Face API:
    • Log in to your Hugging Face account.
    • Navigate to your account settings and open the Access Tokens section.
  • Generate API Token: If you haven't already, generate an API token following the steps above. This token authenticates your application when interacting with Hugging Face's APIs.
  • Set Up .env File: Create a .env file in your project directory to securely store sensitive information such as API tokens. Use a text editor to create and edit this file.
# After importing all libraries and setting up the environment, add these lines to app.py.
load_dotenv()  # Load environment variables from the .env file
api_token = os.getenv("HUGGINGFACEHUB_API_TOKEN")  # Token used later by the HuggingFaceEndpoint
  • load_dotenv() : Loads environment variables from a .env file located in the project directory.
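For illustration, a minimal .env file contains a single line mapping the secret name (the same HUGGINGFACEHUB_API_TOKEN name used later when deploying to Hugging Face Spaces) to your token; the value below is a placeholder, not a real token:

```
HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
```

Keep this file out of version control (for example, add .env to .gitignore) so the token is never committed.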

Step3: Configuring Model and Task

# Define the repository ID and task
repo_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
task = "text-generation"
  • In this section, we define the model and task for our chatbot. The repo_id specifies the particular model we are using, in this case “mistralai/Mixtral-8x7B-Instruct-v0.1”.
  • You can customize this to different models that best fit the specific needs of your chatbot application.
  • task defines the specific task the chatbot performs with the model (text-generation for generating text responses).

Step4: Streamlit Configuration

# App config
st.set_page_config(page_title="Yatra Sevak.AI", page_icon="🌍")
st.title("Yatra Sevak.AI ✈️")

Step5: Defining Chatbot Template

  • For optimal results, use the prompt template available in the GitHub repository (linked below under Complete Code Repository) to craft robust prompts for your travel assistant chatbot.
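The exact template lives in the repository; the snippet below is only an illustrative example of the structure such a template takes. ChatPromptTemplate uses the same {placeholder} syntax as Python's str.format, so plain str.format stands in for it here to show how {chat_history} and {user_question} get filled.

```python
# Illustrative template only -- the article's actual template is in the
# GitHub repository. ChatPromptTemplate.from_template(template) would be
# used in the app; str.format demonstrates the same placeholder filling.
template = """You are Yatra Sevak.AI, a helpful travel assistant.
Answer questions about flights, hotels, car rentals, and local attractions.
If a question is not travel-related, politely steer the user back to travel.

Chat history:
{chat_history}

User question:
{user_question}
"""

prompt_text = template.format(
    chat_history="AI: Hello, I am Yatra Sevak.AI. How can I help you?",
    user_question="Suggest a 3-day itinerary for Paris.",
)
print(prompt_text)
```

Giving the model a clear persona and explicit scope in the template is what keeps answers focused on travel-related queries.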

Step6: Implementing Response Handling

prompt = ChatPromptTemplate.from_template(template)

# Function to get a response from the model
def get_response(user_query, chat_history):
    # Initialize the Hugging Face Endpoint
    llm = HuggingFaceEndpoint(
        huggingfacehub_api_token=api_token,
        repo_id=repo_id,
        task=task
    )
    chain = prompt | llm | StrOutputParser()
    response = chain.invoke({
        "chat_history": chat_history,
        "user_question": user_query,
    })
    return response
  • get_response function: It is the core of Yatra Sevak.AI’s response generation process.
  • Initialization: Yatra Sevak.AI connects to Hugging Face’s models using credentials (api_token) and specifies the model details (repo_id and task) for text generation.
  • Interaction Flow: Using LangChain’s tools (ChatPromptTemplate and StrOutputParser), it manages user queries (user_question) and keeps track of conversation history (chat_history).
  • Response Generation: By invoking the model, Yatra Sevak.AI processes user inputs to generate clear and helpful responses, improving interaction for travel-related queries.
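The prompt | llm | StrOutputParser() expression is LangChain's pipe syntax: each stage's output feeds the next. The following is a minimal pure-Python sketch of that composition idea (not LangChain itself), with a stubbed model standing in for the Hugging Face endpoint:

```python
class Stage:
    """Toy runnable: composes with | like LangChain's LCEL chains."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (a | b).invoke(x) == b.fn(a.fn(x))
        return Stage(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Stand-ins for ChatPromptTemplate, the LLM endpoint, and StrOutputParser.
prompt = Stage(lambda d: f"History: {d['chat_history']}\nQ: {d['user_question']}")
llm = Stage(lambda p: "bot response: Day 1: visit the Louvre.")  # stub model
parser = Stage(lambda s: s.replace("bot response:", "").strip())

chain = prompt | llm | parser
answer = chain.invoke({"chat_history": "", "user_question": "Plan a day in Paris."})
print(answer)
```

The real chain works the same way, except the llm stage makes an API call to the Mixtral model instead of returning a canned string.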

Step7: Managing Chat History

# Initialize session state.
if "chat_history" not in st.session_state:
    st.session_state.chat_history = [
        AIMessage(content="Hello, I am Yatra Sevak.AI. How can I help you?"),
    ]
# Display chat history.
for message in st.session_state.chat_history:
    if isinstance(message, AIMessage):
        with st.chat_message("AI"):
            st.write(message.content)
    elif isinstance(message, HumanMessage):
        with st.chat_message("Human"):
            st.write(message.content)
  • Initializes and manages the chat history within Streamlit’s session state, displaying AI and human messages in the user interface.
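Because the whole chat history is passed to the model on every turn, long conversations can eventually exceed the model's context window. One simple mitigation, not part of the article's code but a hedged sketch you could apply before calling get_response, is trimming the history to the most recent messages:

```python
def trim_history(history, max_messages=10):
    """Keep the opening greeting plus the most recent messages so the
    prompt stays within the model's context window."""
    if len(history) <= max_messages:
        return history
    return [history[0]] + history[-(max_messages - 1):]

# Works with any message objects (e.g. AIMessage/HumanMessage);
# shown here with plain strings for clarity.
history = [f"msg{i}" for i in range(15)]
trimmed = trim_history(history, max_messages=5)
print(trimmed)
```

Keeping the first message preserves the assistant's greeting while the sliding tail keeps the recent conversational context.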

Step8: Handling User Input and Displaying Responses

# User input
user_query = st.chat_input("Type your message here...")
if user_query is not None and user_query != "":
    st.session_state.chat_history.append(HumanMessage(content=user_query))

    with st.chat_message("Human"):
        st.markdown(user_query)

    response = get_response(user_query, st.session_state.chat_history)

    # The model sometimes prepends labels such as "AI response:" or
    # "bot response:" to its output; strip them before displaying.
    response = (
        response.replace("AI response:", "")
        .replace("chat response:", "")
        .replace("bot response:", "")
        .strip()
    )

    with st.chat_message("AI"):
        st.write(response)

    st.session_state.chat_history.append(AIMessage(content=response)) 

The Travel Assistant Chatbot application is ready!


Complete Code Repository

Explore Yatra Sevak.AI Application on GitHub here. Using this link, you can access the full code. Feel free to explore and utilize it as needed.

Steps to Deploy Travel Assistant Chatbot Application on Hugging Face Space

  • Step1: Navigate to Hugging Face Spaces Dashboard.
  • Step2: Create a New Space.
  • Step3: Configure Environment Variables
    • Click on Settings.
    • Click on New Secret and add the name HUGGINGFACEHUB_API_TOKEN with your token as the value.
  • Step4: Upload Your Model Repository
    • Upload all the files in the Files section of the Space.
    • Commit Changes to Deploy on HF_SPACE.
  • Step5: The Travel Assistant Chatbot application is deployed on Hugging Face Space successfully!

Conclusion 

In this article, we explored how to build a travel assistant chatbot(Yatra Sevak.AI) using HuggingFace, LangChain, and other advanced technologies. From setting up the environment and integrating Hugging Face models to defining prompts and deploying on Hugging Face Spaces, we covered all the essential steps. With Yatra Sevak.AI, you now have a powerful tool to enhance travel planning through AI-driven assistance.

Key Takeaways

  • Learn to build a powerful language model chatbot using Hugging Face endpoints without relying on costly APIs, empowering cost-effective AI integration.
  • Learn how to integrate Hugging Face endpoints to effortlessly incorporate their diverse range of pre-trained models into your applications.
  • Mastering the art of crafting effective prompts using templates empowers you to build versatile chatbot applications across different domains.


Frequently Asked Questions

Q1. How does integrating Mistral AI’s models with LangChain benefit the performance of a travel assistant chatbot?

A. Integrating Mistral AI’s models with LangChain boosts the chatbot’s performance by utilizing advanced functionalities like extensive context windows and optimized attention mechanisms. This integration accelerates response times and enhances the accuracy of handling intricate travel inquiries, thereby elevating user satisfaction and interaction quality.

Q2. What role does LangChain play in developing a travel assistant chatbot?

A. LangChain provides a framework for building applications with large language models (LLMs). It offers tools like ChatPromptTemplate for crafting prompts and StrOutputParser for processing model outputs. LangChain simplifies the integration of Hugging Face models into your chatbot, enhancing its functionality and performance.

Q3. Why is it beneficial to deploy chatbots on Hugging Face Spaces?

A. Hugging Face Spaces provides a collaborative platform where developers can deploy, share, and iterate on chatbot applications, fostering innovation and community-driven improvements.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
