• DOMAIN: Customer support
• CONTEXT: Great Learning has an academic support department that receives numerous support requests every day, throughout the year. Teams are spread across geographies and provide support round the year. At times, heavy workload delays the resolution of certain requests, which impacts the company's business. Many requests are generic, where simply presenting the proper resolution procedure to the user solves the problem. The company wants to design an automation that can interact with the user, understand the problem, and display the resolution procedure [ if found to be a generic request ] or redirect the request to an actual human support executive if the request is complex or not in its database.
#importing the libraries
import tensorflow as tf
import numpy as np
import pandas as pd
import json
import nltk
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.layers import Input,Embedding, LSTM, Dense, GlobalMaxPooling1D, Flatten
import matplotlib.pyplot as plt
%%writefile content.json
{"intents": [
{"tag": "Intro",
"patterns": ["hi",
"how are you",
"is anyone there",
"hello",
"whats up",
"hey",
"yo",
"listen",
"please help me",
"i am learner from",
"i belong to",
"aiml batch",
"aifl batch",
"i am from",
"my pm is",
"blended",
"online",
"i am from",
"hey ya",
"talking to you for first time"],
"responses": ["Hello! how can i help you ?"],
"context_set": ""
},
{"tag": "Exit",
"patterns": ["thank you",
"thanks",
"cya",
"Bye",
"see you",
"later",
"see you later",
"goodbye",
"i am leaving",
"have a Good day",
"you helped me",
"thanks a lot",
"thanks a ton",
"you are the best",
"great help",
"too good",
"you are a good learning buddy"],
"responses": ["I hope I was able to assist you, Good Bye"],
"context_set": ""
},
{"tag": "Olympus",
"patterns": ["olympus",
"explain me how olympus works",
"I am not able to understand olympus",
"olympus window not working",
"no access to olympus",
"unable to see link in olympus",
"no link visible on olympus",
"whom to contact for olympus",
"lot of problem with olympus",
"olypus is not a good tool",
"lot of problems with olympus",
"how to use olympus",
"teach me olympus"],
"responses": ["Link: Olympus wiki"],
"context_set": ""
},
{"tag": "SL",
"patterns": ["i am not able to understand svm",
"explain me how machine learning works",
"i am not able to understand naive bayes",
"i am not able to understand logistic regression",
"i am not able to understand ensemble techb=niques",
"i am not able to understand knn",
"i am not able to understand knn imputer",
"i am not able to understand cross validation",
"i am not able to understand boosting",
"i am not able to understand random forest",
"i am not able to understand ada boosting",
"i am not able to understand gradient boosting",
"machine learning",
"ML",
"SL",
"supervised learning",
"knn",
"logistic regression",
"regression",
"classification",
"naive bayes",
"nb",
"ensemble techniques",
"bagging",
"boosting",
"ada boosting",
"ada",
"gradient boosting",
"hyper parameters"],
"responses": ["Link: Machine Learning wiki "],
"context_set": ""
},
{"tag": "NN",
"patterns": ["what is deep learning",
"unable to understand deep learning",
"explain me how deep learning works",
"i am not able to understand deep learning",
"not able to understand neural nets",
"very diffult to understand neural nets",
"unable to understand neural nets",
"ann",
"artificial intelligence",
"artificial neural networks",
"weights",
"activation function",
"hidden layers",
"softmax",
"sigmoid",
"relu",
"otimizer",
"forward propagation",
"backward propagation",
"epochs",
"epoch",
"what is an epoch",
"adam",
"sgd"],
"responses": ["Link: Neural Nets wiki"],
"context_set": ""
},
{"tag": "Bot",
"patterns": ["what is your name",
"who are you",
"name please",
"when are your hours of opertions",
"what are your working hours",
"hours of operation",
"working hours",
"hours"],
"responses": ["I am your virtual learning assistant"],
"context_set": ""
},
{"tag": "Profane",
"patterns": ["what the hell",
"bloody stupid bot",
"do you think you are very smart",
"screw you",
"i hate you",
"you are stupid",
"jerk",
"you are a joke",
"useless piece of shit"],
"responses": ["Please use respectful words"],
"context_set": ""
},
{"tag": "Ticket",
"patterns": ["my problem is not solved",
"you did not help me",
"not a good solution",
"bad solution",
"not good solution",
"no help",
"wasted my time",
"useless bot",
"create a ticket"],
"responses": ["Tarnsferring the request to your PM"],
"context_set": ""
}
]
}
Writing content.json
#importing the dataset
with open('content.json') as content:
    Data_Collection = json.load(content)
#Printing the dataset
Data_Collection
{'intents': [{'tag': 'Intro', 'patterns': ['hi', 'how are you', 'is anyone there', 'hello', 'whats up', 'hey', 'yo', 'listen', 'please help me', 'i am learner from', 'i belong to', 'aiml batch', 'aifl batch', 'i am from', 'my pm is', 'blended', 'online', 'i am from', 'hey ya', 'talking to you for first time'], 'responses': ['Hello! how can i help you ?'], 'context_set': ''}, {'tag': 'Exit', 'patterns': ['thank you', 'thanks', 'cya', 'Bye', 'see you', 'later', 'see you later', 'goodbye', 'i am leaving', 'have a Good day', 'you helped me', 'thanks a lot', 'thanks a ton', 'you are the best', 'great help', 'too good', 'you are a good learning buddy'], 'responses': ['I hope I was able to assist you, Good Bye'], 'context_set': ''}, {'tag': 'Olympus', 'patterns': ['olympus', 'explain me how olympus works', 'I am not able to understand olympus', 'olympus window not working', 'no access to olympus', 'unable to see link in olympus', 'no link visible on olympus', 'whom to contact for olympus', 'lot of problem with olympus', 'olypus is not a good tool', 'lot of problems with olympus', 'how to use olympus', 'teach me olympus'], 'responses': ['Link: Olympus wiki'], 'context_set': ''}, {'tag': 'SL', 'patterns': ['i am not able to understand svm', 'explain me how machine learning works', 'i am not able to understand naive bayes', 'i am not able to understand logistic regression', 'i am not able to understand ensemble techb=niques', 'i am not able to understand knn', 'i am not able to understand knn imputer', 'i am not able to understand cross validation', 'i am not able to understand boosting', 'i am not able to understand random forest', 'i am not able to understand ada boosting', 'i am not able to understand gradient boosting', 'machine learning', 'ML', 'SL', 'supervised learning', 'knn', 'logistic regression', 'regression', 'classification', 'naive bayes', 'nb', 'ensemble techniques', 'bagging', 'boosting', 'ada boosting', 'ada', 'gradient boosting', 'hyper parameters'], 'responses': ['Link: Machine Learning wiki '], 'context_set': ''}, {'tag': 'NN', 'patterns': ['what is deep learning', 'unable to understand deep learning', 'explain me how deep learning works', 'i am not able to understand deep learning', 'not able to understand neural nets', 'very diffult to understand neural nets', 'unable to understand neural nets', 'ann', 'artificial intelligence', 'artificial neural networks', 'weights', 'activation function', 'hidden layers', 'softmax', 'sigmoid', 'relu', 'otimizer', 'forward propagation', 'backward propagation', 'epochs', 'epoch', 'what is an epoch', 'adam', 'sgd'], 'responses': ['Link: Neural Nets wiki'], 'context_set': ''}, {'tag': 'Bot', 'patterns': ['what is your name', 'who are you', 'name please', 'when are your hours of opertions', 'what are your working hours', 'hours of operation', 'working hours', 'hours'], 'responses': ['I am your virtual learning assistant'], 'context_set': ''}, {'tag': 'Profane', 'patterns': ['what the hell', 'bloody stupid bot', 'do you think you are very smart', 'screw you', 'i hate you', 'you are stupid', 'jerk', 'you are a joke', 'useless piece of shit'], 'responses': ['Please use respectful words'], 'context_set': ''}, {'tag': 'Ticket', 'patterns': ['my problem is not solved', 'you did not help me', 'not a good solution', 'bad solution', 'not good solution', 'no help', 'wasted my time', 'useless bot', 'create a ticket'], 'responses': ['Tarnsferring the request to your PM'], 'context_set': ''}]}
#getting all the data to lists
tags = []
inputs = []
responses = {}
for intent in Data_Collection['intents']:
    tag = intent['tag']
    responses[tag] = intent['responses']
    for line in intent['patterns']:
        inputs.append(line)
        tags.append(tag)
#converting to dataframe
Chatbot_Data = pd.DataFrame({"inputs":inputs,
"tags":tags})
#printing data
Chatbot_Data
| | inputs | tags |
|---|---|---|
| 0 | hi | Intro |
| 1 | how are you | Intro |
| 2 | is anyone there | Intro |
| 3 | hello | Intro |
| 4 | whats up | Intro |
| ... | ... | ... |
| 124 | not good solution | Ticket |
| 125 | no help | Ticket |
| 126 | wasted my time | Ticket |
| 127 | useless bot | Ticket |
| 128 | create a ticket | Ticket |

129 rows × 2 columns
Chatbot_Data.shape
(129, 2)
Chatbot_Data = Chatbot_Data.sample(frac=1)
#Setting frac=1 means you are sampling the entire DataFrame, effectively shuffling all its rows. This is a common technique to shuffle the order of rows in a DataFrame.
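Note that sample(frac=1) keeps the original row labels, which is why a shuffled index (87, 89, 13, ...) appears in the preview further below. If a clean 0..n-1 index is preferred, reset_index can be chained on; the line below is an optional sketch, not part of the original run.
#optional: shuffle and renumber the rows in one step
Chatbot_Data = Chatbot_Data.sample(frac=1).reset_index(drop=True)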
Pre-processing is a crucial step that involves tasks such as eliminating punctuation, converting text to lowercase, and encoding textual data into numerical representations.
#Removing punctuation and lowercasing the inputs
import string
Chatbot_Data['inputs'] = Chatbot_Data['inputs'].apply(lambda wrd: [ltrs.lower() for ltrs in wrd if ltrs not in string.punctuation])
Chatbot_Data['inputs'] = Chatbot_Data['inputs'].apply(lambda wrd: ''.join(wrd))
Chatbot_Data
| | inputs | tags |
|---|---|---|
| 87 | artificial intelligence | NN |
| 89 | weights | NN |
| 13 | i am from | Intro |
| 28 | i am leaving | Exit |
| 64 | SL | SL |
| ... | ... | ... |
| 39 | I am not able to understand olympus | Olympus |
| 12 | aifl batch | Intro |
| 30 | you helped me | Exit |
| 127 | useless bot | Ticket |
| 95 | otimizer | NN |

129 rows × 2 columns
#tokenize the data
from tensorflow.keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words = 2000)
tokenizer.fit_on_texts(Chatbot_Data['inputs'])
train = tokenizer.texts_to_sequences(Chatbot_Data['inputs'])
#apply padding
from tensorflow.keras.preprocessing.sequence import pad_sequences
x_train = pad_sequences(train)
#encoding the outputs
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y_train = le.fit_transform(Chatbot_Data['tags'])
TensorFlow's Tokenizer converts the text into sequences of integers, a key preprocessing step for natural language processing. pad_sequences then standardizes the sequence lengths, so every example fed to the neural network has the same input dimension.
To complement the text preprocessing, scikit-learn's LabelEncoder encodes the output tags, transforming the textual labels into numerical values that the model can train on and predict.
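As a quick standalone illustration of what these steps produce (the toy sentences and tags below are made up for demonstration and are not part of the project data):
#toy example: tokenizing, padding, and label-encoding a few made-up inputs
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.preprocessing import LabelEncoder
toy_inputs = ["hello there", "explain deep learning", "create a ticket"]
toy_tags = ["Intro", "NN", "Ticket"]
toy_tokenizer = Tokenizer(num_words=2000)
toy_tokenizer.fit_on_texts(toy_inputs)
print(toy_tokenizer.texts_to_sequences(toy_inputs))  #lists of word indices, e.g. [[1, 2], [3, 4, 5], [6, 7, 8]]
print(pad_sequences(toy_tokenizer.texts_to_sequences(toy_inputs)))  #zero-padded (at the front, by default) to the longest sequence
toy_le = LabelEncoder()
print(toy_le.fit_transform(toy_tags))  #integer labels, e.g. [0 1 2]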
input_shape = x_train.shape[1]
print(input_shape)
#So, input_shape will be equal to the length of the longest sequence in your x_train
9
#define vocabulary
vocabulary = len(tokenizer.word_index)
print("total no. of unique words: ", vocabulary)
output_length = le.classes_.shape[0]
print("output length: ", output_length)
total no. of unique words:  162
output length:  8
Neural Network
from tensorflow.keras.layers import Input, Embedding, LSTM, Flatten, Dense
from tensorflow.keras.models import Model
#creating the model
i = Input(shape=(input_shape,))
#vocabulary + 1 because Keras word indices start at 1 and index 0 is reserved for padding
x = Embedding(vocabulary + 1, 10)(i)
x = LSTM(10, return_sequences=True)(x)
x = Flatten()(x)
x = Dense(output_length, activation="softmax")(x)
model = Model(i, x)
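Since the output labels produced by LabelEncoder are plain integers rather than one-hot vectors, the model is compiled below with sparse_categorical_crossentropy. Before compiling, the layer output shapes and parameter counts can be checked with a quick summary:
#optional: inspect the output shape and parameter count of each layer
model.summary()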
#compiling the model
model.compile(loss="sparse_categorical_crossentropy", optimizer='adam',metrics=['accuracy'])
#training the model (keeping the returned History object for plotting)
history = model.fit(x_train, y_train, epochs=200)
Epoch 1/200
5/5 [==============================] - 4s 11ms/step - loss: 2.0796 - accuracy: 0.0853
Epoch 2/200
5/5 [==============================] - 0s 6ms/step - loss: 2.0739 - accuracy: 0.2326
Epoch 3/200
5/5 [==============================] - 0s 8ms/step - loss: 2.0692 - accuracy: 0.2403
...
Epoch 198/200
5/5 [==============================] - 0s 8ms/step - loss: 0.3742 - accuracy: 0.9690
Epoch 199/200
5/5 [==============================] - 0s 7ms/step - loss: 0.3785 - accuracy: 0.9690
Epoch 200/200
5/5 [==============================] - 0s 7ms/step - loss: 0.3762 - accuracy: 0.9690
Model Analysis
#plotting model accuracy and loss
plt.plot(history.history['accuracy'], label='training set accuracy')
plt.plot(history.history['loss'], label='training set loss')
plt.legend()
[Figure: training accuracy and loss curves over the 200 epochs]
#Chatting
import random
while True:
    texts_p = []
    prediction_input = input('You : ')
    #removing punctuation and converting to lowercase
    prediction_input = [letters.lower() for letters in prediction_input if letters not in string.punctuation]
    prediction_input = ''.join(prediction_input)
    texts_p.append(prediction_input)
    #tokenizing and padding
    prediction_input = tokenizer.texts_to_sequences(texts_p)
    prediction_input = np.array(prediction_input).reshape(-1)
    prediction_input = pad_sequences([prediction_input], input_shape)
    #getting output from model
    output = model.predict(prediction_input)
    output = output.argmax()
    #finding the right tag and predicting
    response_tag = le.inverse_transform([output])[0]
    print("GL Bot : ", random.choice(responses[response_tag]))
    if response_tag == "Exit":
        break
You : hey
1/1 [==============================] - 1s 638ms/step
GL Bot :  Hello! how can i help you ?
You : what is your name
1/1 [==============================] - 0s 22ms/step
GL Bot :  I am your virtual learning assistant
You : what is deep learning
1/1 [==============================] - 0s 37ms/step
GL Bot :  Link: Neural Nets wiki
You : useless bot
1/1 [==============================] - 0s 22ms/step
GL Bot :  Tarnsferring the request to your PM
You : what the hell
1/1 [==============================] - 0s 33ms/step
GL Bot :  Please use respectful words
You : bye
1/1 [==============================] - 0s 22ms/step
GL Bot :  I hope I was able to assist you, Good Bye
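The loop above always answers with the highest-scoring intent, even when the model is unsure. To honour the requirement of redirecting unfamiliar or complex requests to a human executive, the prediction confidence could be checked against a threshold before responding. The snippet below is a hypothetical extension of the loop body (the 0.5 cutoff and the fallback message are assumptions, not part of the original notebook):
#hypothetical extension: route low-confidence queries to a human executive
CONFIDENCE_THRESHOLD = 0.5  #assumed cutoff; tune on held-out queries
probs = model.predict(prediction_input)[0]  #softmax scores for each tag
best = probs.argmax()
if probs[best] < CONFIDENCE_THRESHOLD:
    print("GL Bot :  I am not sure I understood that. Transferring the request to your PM.")
else:
    response_tag = le.inverse_transform([best])[0]
    print("GL Bot : ", random.choice(responses[response_tag]))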