A Complete Guide on Chatbot Development Using Python

Basil Saji Last Updated : 30 May, 2022
6 min read

This article was published as a part of the Data Science Blogathon.

Introduction

Natural language processing (NLP) is one of the advanced fields of artificial intelligence; it enables systems to understand and process human language. Common use cases of NLP include chatbot development, spam classification, and text summarization. In today's article, we're going to discuss the chatbot application of NLP.

What is a Chatbot?

Chatbots are computer programs that can interact with humans. With advancements in machine learning, and in natural language processing in particular, intelligent chatbot systems have become common. You can see different types of chatbots on different websites: chatbots for booking tickets on airline company websites, customer support chatbots in various apps, and so on. Do you want to create one such chatbot? Let's have an amazing session on chatbot development in today's article.

[Image: Chatbot development. Source: Salesforce]

Implementation

Virtual Environment Creation

Before starting the coding part of our chatbot development, let's create a virtual environment for it. The Python library that we are using to create the virtual environment is "virtualenv".

So, first of all, let's install virtualenv (in the command prompt):

pip install virtualenv
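If you prefer not to install anything extra, Python 3 also ships with a built-in venv module that can create the environment in the same way:

python -m venv my_env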

Now we can create our virtual environment, named my_env. Open the terminal in VS Code or any other code editor and run the command below:

virtualenv my_env

Next, we activate the virtual environment.

Activation in Windows PowerShell:

my_env\Scripts\activate.ps1

Activation in the command prompt:

my_env\Scripts\activate.bat
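If you are working on macOS or Linux instead of Windows, the activation command differs; with the same virtualenv layout it is:

source my_env/bin/activate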

The virtual environment is activated.

Installation of Libraries

Now we have to install the libraries required for this project separately in this environment.

pip install keras nltk tensorflow
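One note: in TensorFlow 2.x, Keras ships bundled with TensorFlow, so the separate keras package is usually optional. Assuming a TensorFlow 2.x setup, the following is enough:

pip install nltk tensorflow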

Creating Intents File

First of all, let's look into our intents_file.json file. This intents file contains the different patterns of questions that a user might ask, the possible responses for each question type, and a tag identifying that type of question.

{"intents": [
    {"tag": "greetings",
     "patterns": ["Hello there", "Hey, How are you", "Hey", "Hi", "Hello", "Anybody", "Hey there"],
     "responses": ["Hello, I'm your helping bot", "Hey it's good to see you", "Hi there, how can I help you?"],
     "context": [""]
    },
    {"tag": "thanks",
     "patterns": ["Thanks for your quick response", "Thank you for providing the valuable information", "Awesome, thanks for helping"],
     "responses": ["Happy to help you", "Thanks for reaching out to me", "It's My pleasure to help you"],
     "context": [""]
    },
    {"tag": "no_answer",
     "patterns": [],
     "responses": ["Sorry, Could you repeat again", "provide me more info", "can't understand you"],
     "context": [""]
    },
    {"tag": "support",
     "patterns": ["What help you can do?", "What are the helps you provide?", "How you could help me", "What support is offered by you"],
     "responses": [ "ticket booking for airline", "I can help you to book flight tickets easily"],
     "context": [""]
    },
    {"tag": "goodbye",
        "patterns": ["bye bye", "Nice to chat with you", "Bye", "See you later buddy", "Goodbye"],
        "responses": [ "bye bye, thanks for reaching", "Have a nice day there", "See you later"],
        "context": [""]
    }
]
}

The above is the intents file that we are going to use in our project.

Building the Chatbot Model

Now let's start creating a machine learning model that can respond to user queries based on the intents file.

Importing the required libraries for our project:

import numpy as np
import nltk
import json
import pickle
import re
import random
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.optimizers import SGD
from nltk.stem import WordNetLemmatizer

We have to download some nltk packages for processing the data.

nltk.download('punkt')
nltk.download('wordnet')
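Depending on your NLTK version, a couple of extra resources may be needed: recent releases require omw-1.4 for the WordNet lemmatizer, and newer ones use punkt_tab for tokenization. Treat these as optional, version-dependent downloads:

nltk.download('omw-1.4')
nltk.download('punkt_tab')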

Preprocessing

Loading the JSON file and reading it. We also initialize some lists for storing the data during and after preprocessing.

tokenized_words=[]
classes = []
doc = []
ignoring_words = ['?', '!']
data_file = open('intents_file.json').read()
intents = json.loads(data_file)

We loaded the JSON file. Now we have to perform some preprocessing: we iterate through each of the pattern questions in the intents file and tokenize it. Each tokenized pattern is stored in doc along with its tag, and tokenized_words collects all the distinct words in the intents file, tokenized using nltk.

for intent in intents['intents']:
    for pattern in intent['patterns']:
        w = nltk.word_tokenize(pattern) #tokenizing
        tokenized_words.extend(w)
        doc.append((w, intent['tag']))
        if intent['tag'] not in classes:
            classes.append(intent['tag'])
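To see what the tokenizer produces, here is a quick check on one of the patterns from our intents file:

print(nltk.word_tokenize("Hey, How are you"))
# ['Hey', ',', 'How', 'are', 'you']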

Now we have to lemmatize the words and drop the punctuation marks listed in ignoring_words.

lemmatizer = WordNetLemmatizer()
lemmatized_words = [lemmatizer.lemmatize(word.lower()) for word in tokenized_words if word not in ignoring_words] #lemmatization

Then we sort the unique lemmatized words and classes:

lemmatized_words = sorted(list(set(lemmatized_words))) 
classes = sorted(list(set(classes)))

Now we save the lemmatized words and classes into pickle files:

pickle.dump(lemmatized_words,open('lem_words.pkl','wb'))
pickle.dump(classes,open('classes.pkl','wb'))

As the next step, we need to create our training data. The input feature is the bag-of-words representation of the question the user asks, and the output is the tag or class that the question pattern belongs to.

training_data = []
empty_array = [0] * len(classes)

for d in doc:
    bag_of_words = []
    pattern = d[0]
    pattern = [lemmatizer.lemmatize(word.lower()) for word in pattern]
    for w in lemmatized_words:
        bag_of_words.append(1) if w in pattern else bag_of_words.append(0)
    output_row = list(empty_array)
    output_row[classes.index(d[1])] = 1  # one-hot encode the tag
    training_data.append([bag_of_words, output_row])

random.shuffle(training_data)
training = np.array(training_data, dtype=object)  # dtype=object: bag and label rows have different lengths
x_train = list(training[:,0])
y_train = list(training[:,1])
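To make the encoding concrete, here is a short worked illustration based on the tags in our intents file (the actual indices depend on the sorted vocabulary):

# classes (sorted) -> ['goodbye', 'greetings', 'no_answer', 'support', 'thanks']
# For the pattern "Hey" with tag "greetings":
#   bag_of_words: 1 at the index of 'hey' in lemmatized_words, 0 everywhere else
#   output_row:   [0, 1, 0, 0, 0]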

Model Creation

Now we can create our neural network model with the help of the Keras and TensorFlow libraries. So let's start the implementation.

First of all, we are creating a Sequential model and then adding layers to this sequential model.

bot_model = Sequential()
bot_model.add(Dense(128, input_shape=(len(x_train[0]),), activation='relu'))
bot_model.add(Dropout(0.5))
bot_model.add(Dense(64, activation='relu'))
bot_model.add(Dropout(0.5))
bot_model.add(Dense(len(y_train[0]), activation='softmax'))  # one output neuron per intent class

We've created our model. Next, we compile it with the stochastic gradient descent (SGD) optimizer.

sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)  # 'lr' and 'decay' are deprecated/removed in newer TensorFlow
bot_model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
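As a design note, the Adam optimizer is a common drop-in alternative to SGD for this kind of classifier (an option, not what this article uses):

# bot_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])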

Now let's fit the model:

x_train = np.array(x_train)
y_train = np.array(y_train)
hist = bot_model.fit(x_train, y_train, epochs=200, batch_size=5, verbose=1)

We have created our chatbot model, so we can save it for future use:

bot_model.save('chatbot_model.h5')  # note: fit() already returned hist; it does not belong in save()

Testing the Model

Now let's create another Python file for testing and building our actual chatbot.

Importing the required libraries.

import pickle
import numpy as np
import json
import random
import nltk
from tensorflow.keras.models import load_model
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()

Next, we load the model and the pickle files that we saved during training:

intents = json.loads(open('intents_file.json').read())
lem_words = pickle.load(open('lem_words.pkl','rb'))
classes = pickle.load(open('classes.pkl','rb'))
bot_model = load_model('chatbot_model.h5')

Creating a function that takes the user input as a parameter and performs preprocessing steps, namely tokenization and lemmatization:

def cleaning(text):
    words = nltk.word_tokenize(text)
    words = [lemmatizer.lemmatize(word.lower()) for word in words]
    return words

Our model requires numerical features to predict classes, so we create another function that builds the bag-of-words vector for the preprocessed text:

def bag_ow(text, words, show_details=True):
    sentence_words = cleaning(text)
    bag_of_words = [0]*len(words) 
    for s in sentence_words:
        for i,w in enumerate(words):
            if w == s: 
                bag_of_words[i] = 1
    return (np.array(bag_of_words))
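For instance, calling the function on a short greeting gives a mostly-zero vector (a hedged illustration; the exact positions of the 1s depend on your vocabulary):

print(bag_ow("hello there", lem_words))
# e.g. array([0, ..., 1, ..., 1, ..., 0]) with 1s at the indices of 'hello' and 'there'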

Creating a prediction function for predicting the class or tag of the question asked by the user:

def class_prediction(sentence, model):
    p = bag_ow(sentence, lem_words, show_details=False)
    result = model.predict(np.array([p]))[0]  # use the model passed in
    ER_THRESHOLD = 0.30  # discard predictions below this confidence
    f_results = [[i, r] for i, r in enumerate(result) if r > ER_THRESHOLD]
    f_results.sort(key=lambda x: x[1], reverse=True)
    intent_prob_list = []
    for i in f_results:
        intent_prob_list.append({"intent": classes[i[0]], "probability": str(i[1])})
    return intent_prob_list
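The function returns the candidate intents sorted by confidence. For a greeting, the output might look like this (illustrative values, not real output):

# [{'intent': 'greetings', 'probability': '0.97'}]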

Now we have the predicted classes or tags for the user's inquiry. As you can see in the intents file, there is more than one response for each tag, so we create a function that selects a random response from the predicted tag and returns it as the bot's reply (falling back to the no_answer tag when no intent clears the threshold).

def getbotResponse(ints, intents):
    # fall back to the 'no_answer' intent when no class clears the threshold
    tag = ints[0]['intent'] if ints else 'no_answer'
    intents_list = intents['intents']
    for intent in intents_list:
        if intent['tag'] == tag:
            result = random.choice(intent['responses'])
            break
    return result

def bot_response(text):
    ints = class_prediction(text, bot_model)
    response = getbotResponse(ints, intents)
    return response

Interacting with Chatbot

We have created all the functions needed for the chatbot to work. So let's talk to our chatbot.

for i in range(3):
  text = input("You : ")
  print("Bot : ",bot_response(text))

Output

You : hey
Bot :  Hi there, how can I help you?
You : what help can you do
Bot :  ticket booking for airline
You : bye
Bot :  See you later

As you can see, the chatbot is responding to us very well.

Conclusion

In this article, we've briefly discussed chatbot development from scratch, and you got an idea of the working flow of building a chatbot and predicting its responses. The key insights from this article are:

  • Created a virtual environment for the project
  • Created a JSON file containing possible patterns of questions and possible responses
  • Preprocessed the text data and built a deep learning chatbot model
  • Created several functions for communicating with the chatbot and receiving its responses

These are the primary outcomes of the above article. You can customize the chatbot by editing the intents JSON file; a sample new intent is shown below. Try to create your own chatbot by referring to this article. Hope you all liked it!
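For example, to teach the bot a new topic, you could add an entry like the following to the intents file and retrain the model (a hypothetical intent, shown only for illustration):

{"tag": "baggage",
 "patterns": ["What is the baggage allowance?", "How many bags can I carry?"],
 "responses": ["Each passenger can carry one cabin bag and check in one bag."],
 "context": [""]
}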

Thank you!!

Connect with me on LinkedIn.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Python Developer, ML Enthusiast, Blogger, and an Electronics and Communication Engineering aspirant, determined and motivated to finish tasks with utmost sincerity and dedication. I'm a good learner, ready to accept challenges and bring out my best even in the worst situations. I wish for a world with enough advancements and opportunities for everyone.
