How do I create a chatbot in Python?

In this tutorial, you will learn how to create a chatbot project in Python.

Introduction

Chatbots are very useful for both businesses and customers.

Most people prefer to talk directly through a chat box rather than call a service center.

Facebook has published data proving the value of bots: more than 2 billion messages are exchanged per month between individuals and businesses.

According to HubSpot research, 71% of people want to get customer support through messaging apps, because it is a quick way to solve their problems.

Clearly, chatbots have a great future in business.

Now let's build an exciting chatbot project. We will implement a chatbot from scratch.

This software will understand what the user is talking about and give an appropriate answer.

What you need beforehand

To implement the chatbot, we will use Keras, a deep learning library, and NLTK, a Natural Language Processing (NLP) toolkit, along with a few other helpful libraries. Run the command below to make sure all the libraries are installed:

pip install tensorflow keras nltk

Note that pickle ships with Python's standard library, so it does not need a separate install.
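
NLTK also needs its tokenizer and WordNet data downloaded once before the scripts below will run. Here is a minimal sketch, assuming a standard NLTK installation:

import nltk

# one-time downloads used by nltk.word_tokenize and WordNetLemmatizer
nltk.download('punkt')
nltk.download('wordnet')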

How do Chatbots work?

Chatbots are smart software programs that can interact and communicate with people the way human beings do.

Interesting, right?

Now let’s see how they actually work.

All chatbots rely on NLP (Natural Language Processing) concepts. NLP covers two things:

  • NLU (Natural Language Understanding): the ability of machines to understand human languages such as English, French, etc.
  • NLG (Natural Language Generation): the ability of a machine to generate text similar to sentences written by humans.

Imagine a user asking a question to a chatbot:

"Hey, what's the news from today?"

The chatbot will divide the user's phrase into two things:

  1. Intent and
  2. Entity

The intent of this sentence could be have_news, because it refers to an action the user wants to perform.

The entity gives specific details about the intent; here, "today" will be the entity.

Thus, a machine learning model is used to recognize the intents and entities of a chat message.
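
To make this concrete, here is a hypothetical sketch of the structure a chatbot could produce after parsing that question; the field names are illustrative, not the output of any particular library:

parsed_message = {
    "text": "Hey, what's the news from today?",
    "intent": "have_news",           # the action the user wants to perform
    "entities": {"date": "today"},   # details that qualify the intent
}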

The structure of the project files

Once the project is complete, you will have all these files.

Let’s quickly see each file. This will give you an idea of how the project will be implemented.

  • train_chatbot.py: In this file, we will create and train the deep learning model that classifies and identifies what the user asks the bot.
  • gui_chatbot.py: In this file, we will build a graphical user interface to chat with our trained chatbot.
  • intents.json: The intents file contains all the data that we will use to train the model. It holds a collection of tags with their corresponding patterns and responses (see the sample snippet after this list).
  • chatbot_model.h5: A hierarchical data format file in which we save the weights and architecture of our trained model.
  • classes.pkl: A pickle file used to store all the tag names to classify when we predict a message.
  • words.pkl: A pickle file that contains all the unique words making up the vocabulary of our model.
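
For reference, here is a minimal illustrative intents.json entry in the tag/patterns/responses layout the training code reads; the actual dataset from the download link below has many more tags and phrases:

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": ["Hi", "Hello", "How are you?"],
      "responses": ["Hello!", "Hi there, how can I help you?"]
    }
  ]
}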

Download the source code and dataset: https://drive.google.com/drive/folders/1r6MrrdE8V0bWBxndGfJxJ4Om62dJ2OMP?usp=sharing

How to create your own chatbot?

Here are 5 simple steps to create this chatbot:

Step 1. Import libraries and load data

Create a new Python file and name it train_chatbot.py.

Then we will import all the required modules and read the JSON data file into our Python program.

Python

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.optimizers import SGD
import random
import nltk
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
import json
import pickle

intents_file = open('intents.json').read()
intents = json.loads(intents_file)

Step 2. Pre-process the data

The model cannot take raw data. The data has to go through a lot of preprocessing so that the machine can easily understand it.

For textual data, there are several preprocessing techniques. The first is tokenization, which consists of splitting sentences into words.
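
As a quick illustration of tokenization with NLTK (assuming the punkt data has been downloaded):

import nltk

tokens = nltk.word_tokenize("Hey, what's the news from today?")
print(tokens)
# ['Hey', ',', 'what', "'s", 'the', 'news', 'from', 'today', '?']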

Looking at the intents file, we see that each tag contains a list of patterns and responses.

We tokenize each pattern and add its words to a list.

We also create a list of classes and a list of documents to hold all the intents associated with the patterns.

Python

words = []
classes = []
documents = []
ignore_letters = ['!', '?', ',', '.']

for intent in intents['intents']:
    for pattern in intent['patterns']:
        # tokenize each word
        word = nltk.word_tokenize(pattern)
        words.extend(word)
        # add documents in the corpus
        documents.append((word, intent['tag']))
        # add to our classes list
        if intent['tag'] not in classes:
            classes.append(intent['tag'])

print(documents)

Another technique is lemmatization.

It consists of converting words to their lemma, reducing them to a canonical base form.

For example, the words playing, plays, and played will all be replaced with play.

This reduces the total number of words in our vocabulary. So now we lemmatize each word and remove the duplicates.
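
As a small sketch of what the lemmatizer does (note that our training script calls lemmatize() in its default noun mode; pos='v' is passed here only to illustrate verb lemmas):

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("plays"))             # 'play' (plural noun -> singular)
print(lemmatizer.lemmatize("playing", pos="v"))  # 'play' (verb form -> base verb)
print(lemmatizer.lemmatize("played", pos="v"))   # 'play'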

Python

# lemmatize and lower each word and remove duplicates
words = [lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_letters]
words = sorted(list(set(words)))
# sort classes
classes = sorted(list(set(classes)))
# documents = combination between patterns and intents
print(len(documents), "documents")
# classes = intents
print(len(classes), "classes", classes)
# words = all words, vocabulary
print(len(words), "unique lemmatized words", words)

pickle.dump(words, open('words.pkl', 'wb'))
pickle.dump(classes, open('classes.pkl', 'wb'))

In the end, words contains the vocabulary of our project, while classes contains all the tags to classify.

To save these Python objects to files, we use the pickle.dump() method. These files will be useful once training is complete, when we predict chat messages.

Step 3. Create training and test data

To train the model, we will convert each input pattern into numbers.

First, we will lemmatize every word of the pattern.

To do this, we create a list of zeros with the same length as the total number of words, and set the value to 1 only at the indexes where the word occurs in the pattern. In the same way, we create the output by setting 1 for the class that the input pattern belongs to.
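
Here is a tiny worked example of the bag-of-words idea, using a made-up five-word vocabulary:

vocabulary = ["are", "hello", "how", "news", "you"]
sentence_words = ["how", "are", "you"]

# 1 where the vocabulary word appears in the sentence, 0 elsewhere
bag = [1 if word in sentence_words else 0 for word in vocabulary]
print(bag)  # [1, 0, 1, 0, 1]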

Python

# create the training data
training = []
# create an empty array for the output
output_empty = [0] * len(classes)
# training set: a bag of words for every sentence
for doc in documents:
    # initialize the bag of words
    bag = []
    # list of tokenized words for the pattern
    word_patterns = doc[0]
    # lemmatize each word - create the base word, in an attempt to represent related words
    word_patterns = [lemmatizer.lemmatize(word.lower()) for word in word_patterns]
    # create the bag-of-words array with 1 if the word is found in the current pattern
    for word in words:
        bag.append(1) if word in word_patterns else bag.append(0)
    # output is a '0' for each tag and '1' for the current tag (for each pattern)
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1
    training.append([bag, output_row])

# shuffle the features and convert to a numpy array
random.shuffle(training)
training = np.array(training, dtype=object)
# create training lists: X - patterns, Y - intents
train_x = list(training[:, 0])
train_y = list(training[:, 1])
print("Training data is created")

Step 4. Train the model

The architecture of our model will be a neural network composed of 3 dense layers.

The first layer contains 128 neurons, the second has 64, and the last one has as many neurons as there are classes.

Dropout layers are introduced to reduce overfitting of the model. We use the SGD optimizer and fit the model to the data to start training.

When training for 200 epochs is complete, we save the trained model using the Keras function model.save("chatbot_model.h5").

Python

# deep neural network model
model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation='softmax'))

# Compile the model. SGD with Nesterov accelerated gradient gives good results for this model
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# Train and save the model
hist = model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
model.save('chatbot_model.h5', hist)
print("model is created")

Step 5. Interact with the chatbot

Our model is ready to chat.

Now let’s create a nice GUI for our chatbot in a new file.

You can name the file gui_chatbot.py.

In our GUI file, we will use the Tkinter module to build the structure of the desktop application. We will then capture the user's message and do some preprocessing before feeding the message into our trained model.

The model will then predict the tag of the user's message, and we will randomly select a response from the list of responses in our intents file.

Here is the complete source code of the GUI file.

Python

import nltk
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
import pickle
import numpy as np
from keras.models import load_model
model = load_model('chatbot_model.h5')
import json
import random

intents = json.loads(open('intents.json').read())
words = pickle.load(open('words.pkl', 'rb'))
classes = pickle.load(open('classes.pkl', 'rb'))

def clean_up_sentence(sentence):
    # tokenize the pattern - split words into an array
    sentence_words = nltk.word_tokenize(sentence)
    # lemmatize every word - reduce to base form
    sentence_words = [lemmatizer.lemmatize(word.lower()) for word in sentence_words]
    return sentence_words

# return a bag-of-words array: 0 or 1 for each vocabulary word that exists in the sentence
def bag_of_words(sentence, words, show_details=True):
    # tokenize the pattern
    sentence_words = clean_up_sentence(sentence)
    # bag of words - vocabulary matrix
    bag = [0] * len(words)
    for s in sentence_words:
        for i, word in enumerate(words):
            if word == s:
                # assign 1 if the current word is at this vocabulary position
                bag[i] = 1
                if show_details:
                    print("found in bag: %s" % word)
    return np.array(bag)

def predict_class(sentence):
    # filter out predictions below a threshold
    p = bag_of_words(sentence, words, show_details=False)
    res = model.predict(np.array([p]))[0]
    ERROR_THRESHOLD = 0.25
    results = [[i, r] for i, r in enumerate(res) if r > ERROR_THRESHOLD]
    # sort by probability strength
    results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in results:
        return_list.append({"intent": classes[r[0]], "probability": str(r[1])})
    return return_list

def getResponse(ints, intents_json):
    tag = ints[0]['intent']
    list_of_intents = intents_json['intents']
    for i in list_of_intents:
        if i['tag'] == tag:
            result = random.choice(i['responses'])
            break
    return result

# Creating the tkinter GUI
import tkinter
from tkinter import *

def send():
    msg = EntryBox.get("1.0", 'end-1c').strip()
    EntryBox.delete("1.0", END)
    if msg != '':
        ChatBox.config(state=NORMAL)
        ChatBox.insert(END, "You: " + msg + '\n\n')
        ChatBox.config(foreground="#446665", font=("Verdana", 12))
        ints = predict_class(msg)
        res = getResponse(ints, intents)
        ChatBox.insert(END, "Bot: " + res + '\n\n')
        ChatBox.config(state=DISABLED)
        ChatBox.yview(END)

root = Tk()
root.title("Chatbot")
root.geometry("400x500")
root.resizable(width=FALSE, height=FALSE)

# Create the chat window
ChatBox = Text(root, bd=0, bg="white", height="8", width="50", font="Arial")
ChatBox.config(state=DISABLED)

# Bind a scrollbar to the chat window
scrollbar = Scrollbar(root, command=ChatBox.yview, cursor="heart")
ChatBox['yscrollcommand'] = scrollbar.set

# Create a button to send the message
SendButton = Button(root, font=("Verdana", 12, 'bold'), text="Send", width="12", height=5,
                    bd=0, bg="#f9a602", activebackground="#3c9d9b", fg='#000000', command=send)

# Create the box to enter the message
EntryBox = Text(root, bd=0, bg="white", width="29", height="5", font="Arial")
# EntryBox.bind("<Return>", send)

# Place all components on the screen
scrollbar.place(x=376, y=6, height=386)
ChatBox.place(x=6, y=6, height=386, width=370)
EntryBox.place(x=128, y=401, height=90, width=265)
SendButton.place(x=6, y=401, height=90)

root.mainloop()

Run the chatbot

We now have two separate files. The first is train_chatbot.py, which we run first to train the model:

python train_chatbot.py

Once the model is trained, run the GUI file to start chatting with the bot:

python gui_chatbot.py

Additional resources:

  • 4 professional apps for Natural Language Processing (NLP)
  • Build a chatbot with an expert system
