Chatbots have become an integral part of modern applications, providing users with interactive and engaging experiences. In this guide, we’ll create a chatbot using LangChain, a powerful framework that simplifies the process of working with large language models. Our chatbot will have the following key features:

- Conversational memory with a configurable history window
- Commands to view ('HISTORY') and clear ('CLEAR') the chat history
- Response-time measurement for each reply
- A customizable system prompt and temperature
- A typing effect for a more natural conversational feel
By the end of this article, you’ll have a fully functional chatbot that you can further customize and integrate into your own projects. Whether you’re new to LangChain or looking to expand your AI application development skills, this guide will provide you with a solid foundation for creating intelligent, context-aware chatbots.
Before diving into this article, you should have:

- An intermediate understanding of Python programming
- Familiarity with API concepts
- The ability to set up a Python environment and install packages
- An OpenAI account to obtain an API key
To follow along with this tutorial, you’ll need:

- Python 3.7 or later
- pip (the Python package installer)
- An OpenAI API key
- A text editor or integrated development environment (IDE) of your choice
LangChain is an open-source framework designed to simplify the development of applications using large language models (LLMs). It provides a set of tools and abstractions that make it easier to build complex, context-aware applications powered by AI. Some key features of LangChain include:

- Prompt templates for structuring model inputs
- Memory classes for maintaining conversation context
- Composable chains (runnables) that connect prompts, models, and custom logic
- Integrations with many LLM providers, including OpenAI
Now that we’ve covered what LangChain is, how we’ll use it, and the prerequisites for this tutorial, let’s move on to setting up our development environment.
Before we dive into the code, let’s set up our development environment. We’ll need to install several dependencies to get our chatbot up and running.
First, make sure you have Python 3.7 or later installed on your system. Then, create a new directory for your project and set up a virtual environment:
Terminal:
mkdir langchain-chatbot
cd langchain-chatbot
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
Now, install the required dependencies:
pip install langchain openai python-dotenv colorama
Next, create a .env file in your project directory to store your OpenAI API key:
OPENAI_API_KEY=your_api_key_here
Replace your_api_key_here with your actual OpenAI API key.
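In the project itself you would simply call load_dotenv() from python-dotenv to pull this key into the environment. To make clear what that does under the hood, here is a minimal stdlib-only sketch that parses KEY=value lines into os.environ (the load_env_file helper and the demo value sk-demo-key are illustrative, not part of python-dotenv; note that unlike load_dotenv's default, this version overrides existing values):

```python
import os
import tempfile

def load_env_file(path: str) -> None:
    """Minimal stand-in for python-dotenv's load_dotenv():
    parse KEY=value lines and export them as environment variables."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            os.environ[key.strip()] = value.strip()

# Demonstrate with a throwaway .env file
with tempfile.NamedTemporaryFile('w', suffix='.env', delete=False) as f:
    f.write("OPENAI_API_KEY=sk-demo-key\n")
    env_path = f.name

load_env_file(env_path)
print(os.environ["OPENAI_API_KEY"])  # sk-demo-key
os.unlink(env_path)
```

In practice, prefer the real library: `from dotenv import load_dotenv; load_dotenv()` at the top of your script, and keep .env out of version control.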
Our chatbot implementation consists of several key components:

- Import statements for the required libraries
- Utility functions for formatting messages and retrieving chat history
- The main function, run_chatgpt_chatbot, which sets up and runs the chatbot
- A conditional block that executes the chatbot when the script is run directly
Let’s break down each of these components in detail.
Let’s now walk through the steps to implement the chatbot.
First, let’s import the necessary modules and libraries:
import time
from typing import List, Tuple
import sys
from colorama import Fore, Style, init
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.memory import ConversationBufferWindowMemory
from langchain.schema import SystemMessage, HumanMessage, AIMessage
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from operator import itemgetter
These imports provide us with the tools we need to create our chatbot:

- time and sys power the typing effect and response timing
- colorama enables colored terminal output
- ChatOpenAI connects to OpenAI’s chat models
- ChatPromptTemplate and MessagesPlaceholder structure the prompts
- ConversationBufferWindowMemory stores recent messages
- RunnablePassthrough, RunnableLambda, and itemgetter compose the conversation chain
Next, let’s define some utility functions that will help us manage our chatbot:
def format_message(role: str, content: str) -> str:
    return f"{role.capitalize()}: {content}"

def get_chat_history(memory) -> List[Tuple[str, str]]:
    return [(msg.type, msg.content) for msg in memory.chat_memory.messages]

def print_typing_effect(text: str, delay: float = 0.03):
    for char in text:
        sys.stdout.write(char)
        sys.stdout.flush()
        time.sleep(delay)
    print()
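To sanity-check the first two helpers without a live LangChain memory object, you can exercise them with simple stand-ins (the SimpleNamespace stubs below mimic the .type and .content attributes of LangChain’s message classes and exist only for this illustration):

```python
from types import SimpleNamespace

# Same helpers as above
def format_message(role: str, content: str) -> str:
    return f"{role.capitalize()}: {content}"

def get_chat_history(memory):
    return [(msg.type, msg.content) for msg in memory.chat_memory.messages]

print(format_message("human", "Hello there"))  # Human: Hello there

# Stand-in object mimicking memory.chat_memory.messages
fake_memory = SimpleNamespace(
    chat_memory=SimpleNamespace(messages=[
        SimpleNamespace(type="human", content="Hi"),
        SimpleNamespace(type="ai", content="Hello! How can I help?"),
    ])
)
print(get_chat_history(fake_memory))
# [('human', 'Hi'), ('ai', 'Hello! How can I help?')]
```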
The heart of our chatbot is the run_chatgpt_chatbot function. Let’s break it down into smaller sections:
def run_chatgpt_chatbot(system_prompt='', history_window=30, temperature=0.3):
    # Initialize the ChatOpenAI model
    model = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=temperature)

    # Set the system prompt
    if system_prompt:
        SYS_PROMPT = system_prompt
    else:
        SYS_PROMPT = "Act as a helpful AI Assistant"

    # Create the chat prompt template
    prompt = ChatPromptTemplate.from_messages(
        [
            ('system', SYS_PROMPT),
            MessagesPlaceholder(variable_name='history'),
            ('human', '{input}')
        ]
    )
This function does the following:

- Initializes the ChatOpenAI model with the chosen temperature
- Sets the system prompt, falling back to a sensible default
- Builds a chat prompt template with a placeholder for the conversation history
Finally, we add a conditional block to run the chatbot when the script is executed directly:
if __name__ == "__main__":
run_chatgpt_chatbot()
This allows us to run the chatbot by simply executing the Python script.
Our chatbot implementation includes several advanced features that enhance its functionality and user experience.
Users can view the chat history by typing ‘HISTORY’. This feature leverages the ConversationBufferWindowMemory to store and retrieve past messages:
elif user_input.strip().upper() == 'HISTORY':
chat_history = get_chat_history(memory)
print("\n Chat History ")
for role, content in chat_history:
print(format_message(role, content))
print(" End of History \n")
continue
Users can clear the conversation memory by typing ‘CLEAR’. This resets the context and allows for a fresh start:
elif user_input.strip().upper() == 'CLEAR':
    memory.clear()
    print_typing_effect('ChatGPT: Chat history has been cleared.')
    continue
The chatbot measures and displays the response time for each interaction, giving users an idea of how long it takes to generate a reply:
start_time = time.time()
reply = conversation_chain.invoke(user_inp)
end_time = time.time()
response_time = end_time - start_time
print(f"(Response generated in {response_time:.2f} seconds)")
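time.time() is good enough here, but if you want a measurement that is immune to system clock adjustments, time.perf_counter() is the standard choice for timing short operations. A small sketch (the sleep call is a stand-in for the conversation_chain.invoke call, since the real call needs a live API key):

```python
import time

start = time.perf_counter()
time.sleep(0.05)  # stand-in for the conversation_chain.invoke(...) call
elapsed = time.perf_counter() - start
print(f"(Response generated in {elapsed:.2f} seconds)")
```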
Our chatbot implementation offers several customization options:

- system_prompt: the persona or instructions the assistant follows
- history_window: how many recent messages the memory retains
- temperature: how creative (higher) or deterministic (lower) the responses are
To customize these options, you can modify the function call in the if __name__ == "__main__": block:
if __name__ == "__main__":
run_chatgpt_chatbot(
system_prompt="You are a friendly and knowledgeable AI assistant specializing in technology.",
history_window=50,
temperature=0.7
)
import time
from typing import List, Tuple
import sys
from colorama import Fore, Style, init
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.memory import ConversationBufferWindowMemory
from langchain.schema import SystemMessage, HumanMessage, AIMessage
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from operator import itemgetter
def format_message(role: str, content: str) -> str:
    return f"{role.capitalize()}: {content}"

def get_chat_history(memory) -> List[Tuple[str, str]]:
    return [(msg.type, msg.content) for msg in memory.chat_memory.messages]

def print_typing_effect(text: str, delay: float = 0.03):
    for char in text:
        sys.stdout.write(char)
        sys.stdout.flush()
        time.sleep(delay)
    print()
def run_chatgpt_chatbot(system_prompt='', history_window=30, temperature=0.3):
    model = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=temperature)
    if system_prompt:
        SYS_PROMPT = system_prompt
    else:
        SYS_PROMPT = "Act as a helpful AI Assistant"
    prompt = ChatPromptTemplate.from_messages(
        [
            ('system', SYS_PROMPT),
            MessagesPlaceholder(variable_name='history'),
            ('human', '{input}')
        ]
    )
    memory = ConversationBufferWindowMemory(k=history_window, return_messages=True)
    conversation_chain = (
        RunnablePassthrough.assign(
            history=RunnableLambda(memory.load_memory_variables) | itemgetter('history')
        )
        | prompt
        | model
    )
    print_typing_effect("Hello, I am your friendly chatbot. Let's chat!")
    print("Type 'STOP' to end the conversation, 'HISTORY' to view chat history, or 'CLEAR' to clear the chat history.")
    while True:
        user_input = input('User: ')
        if user_input.strip().upper() == 'STOP':
            print_typing_effect('ChatGPT: Goodbye! It was a pleasure chatting with you.')
            break
        elif user_input.strip().upper() == 'HISTORY':
            chat_history = get_chat_history(memory)
            print("\n--- Chat History ---")
            for role, content in chat_history:
                print(format_message(role, content))
            print("--- End of History ---\n")
            continue
        elif user_input.strip().upper() == 'CLEAR':
            memory.clear()
            print_typing_effect('ChatGPT: Chat history has been cleared.')
            continue
        user_inp = {'input': user_input}
        start_time = time.time()
        reply = conversation_chain.invoke(user_inp)
        end_time = time.time()
        response_time = end_time - start_time
        print(f"(Response generated in {response_time:.2f} seconds)")
        print_typing_effect(f'ChatGPT: {reply.content}')
        memory.save_context(user_inp, {'output': reply.content})

if __name__ == "__main__":
    run_chatgpt_chatbot()
Output: an interactive chat session in the terminal, with the greeting, your prompts, timed replies, and the HISTORY/CLEAR/STOP commands available.
When working with this chatbot implementation, consider the following best practices and tips:

- Keep your API key in the .env file and out of version control
- Start with a low temperature (around 0.3) for factual tasks and raise it for creative ones
- Keep history_window modest to control token usage and API costs
- Consider wrapping the conversation_chain.invoke call in a try/except block to handle API errors gracefully
- Experiment with the system prompt to shape the assistant’s persona
In this comprehensive guide, we’ve built a powerful chatbot using LangChain and OpenAI’s GPT-3.5-turbo model. This chatbot serves as a solid foundation for more complex applications. You can extend its functionality by adding features like:

- Persistent chat history stored in a file or database
- A web interface (for example, with Streamlit or Flask)
- Streaming responses as they are generated
- Retrieval-augmented generation over your own documents
By leveraging the power of LangChain and large language models, you can create sophisticated conversational AI applications that provide value to users across various domains. Remember to always consider ethical implications when deploying AI-powered chatbots, and ensure that your implementation adheres to OpenAI’s usage guidelines and your local regulations regarding AI and data privacy.
With this foundation, you’re well-equipped to explore the exciting world of conversational AI and create innovative applications that push the boundaries of human-computer interaction.
Q. What prior knowledge do I need for this tutorial?
A. You should have an intermediate understanding of Python programming, familiarity with API concepts, the ability to set up a Python environment and install packages, and an OpenAI account to obtain an API key.
Q. What tools and software are required?
A. You need Python 3.7 or later, pip (Python package installer), an OpenAI API key, and a text editor or integrated development environment (IDE) of your choice.
Q. What are the key components of the chatbot implementation?
A. The key components are import statements for required libraries, utility functions for formatting messages and retrieving chat history, the main function (run_chatgpt_chatbot) that sets up and runs the chatbot, and a conditional block to execute the chatbot script directly.