Please note that 'Variables' are now called 'Fields' in Landbot's platform.
An ever-present problem for any customer-centric company is gathering user feedback and drawing insights from it. However, most feedback comes in the form of unstructured text, which makes it more difficult to run analytics. If you don’t want to waste the effort put into collecting the data, it is essential to transform unstructured text feedback into structured categories for further analysis.
But how do you process unstructured data efficiently?
This article showcases how the Landbot data team approaches the challenge and organizes the data in a way that benefits the rest of the company.
Our Strategy
Natural language processing (NLP) is a field of computer science and linguistics that deals with the analysis of human language. It typically relies on machine learning (ML) algorithms to learn the patterns in text data in order to identify certain features.
These models can be used to extract insights from customer feedback, such as what users are unhappy with, what they like, and where they think the company can improve.
At Landbot, we have implemented a semi-automated feedback categorization system to understand our customer needs and pain points better.
The Challenges of Unstructured Data Processing
In our case, however, there are several elements that make this task particularly complex:
- Multilingual data - We have customers on every continent (except for Antarctica). For example: 🇪🇸 “Queria saber si se pueden eliminar bricks” (“I wanted to know whether bricks can be deleted”), 🇮🇹 “Utilizzo Landbot per alcuni miei clienti con successo e da loro ho avuto feedback postivi sul risultato finale” (“I use Landbot successfully for some of my clients, and I’ve had positive feedback from them on the final result”), or 🇳🇱 “Landbot heeft een erg gebruiksvriendelijke no code / low code interface. Je bent in staat om zonder technische kennis een chatbot live te brengen. Kom je er niet uit? Dan reageert het supportteam altijd heel snel” (“Landbot has a very user-friendly no code / low code interface. You are able to take a chatbot live without technical knowledge. Stuck? Then the support team always responds very quickly”)
- The language is domain-specific - The vocabulary used may not exist outside the domain or may have a different meaning. For example, when a user writes, “It would be nice that human operators can download iOS and Android apps where they can chat with the users”, the words “human operators” have a very specific meaning that doesn’t apply outside Landbot.
- Often there’s a lack of context about the origin of the feedback - We may not know the background of the person providing the feedback or their issues with the platform. For example, when a user writes, “Does Landbot WhatsApp channel support send a list function?”, we need to figure out that the user is talking about our WhatsApp Campaign product, which is not clear from the text alone.
You will likely be facing one or more of these challenges yourself.
Building our Training Dataset
Computers don’t know how to read, so we need to transform text data into something the computer can work with. To that end, we use a pre-trained multilingual model to encode the text into a numerical representation.
This numerical representation is called a vector, and it is a sequence of numbers that represent the features of the text.
So, for example, the vector for the text “I love this product” might look something like this: [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0] (in practice, the vectors are much longer and contain decimal numbers). NLP models have the useful property that similar texts are mapped to similar vectors, while unrelated texts result in vectors pointing in different directions. The computer can use these vectors to identify related texts and to learn how to group them.
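To make this step concrete, here is a minimal sketch using the open-source sentence-transformers library. The model name below is one publicly available multilingual encoder chosen for illustration, not necessarily the one behind our system:

```python
# A sketch of the encoding step: texts in any language become vectors.
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

feedback = [
    "I love this product",
    "Pretty easy to use",
    "Queria saber si se pueden eliminar bricks",
]

# Each text becomes a dense vector; this model produces 384 dimensions.
vectors = embedder.encode(feedback)
print(vectors.shape)  # (3, 384)
```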
The next step is to identify the main categories that we would like the system to recognize. To do so, we apply what is known as unsupervised learning: a type of ML algorithm used to find patterns in data when it is not labeled.
One of the most commonly used unsupervised learning algorithms for grouping text is the K-means algorithm, which splits all the records into K different groups based on the similarity of their vectors. After this step, our feedback is clustered into 23 different groups.
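Continuing the sketch above, the clustering step could look like this with scikit-learn’s KMeans (`vectors` is the embedding array from the previous snippet; the library choice is illustrative):

```python
# Cluster the feedback embeddings into K groups with scikit-learn.
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=23, random_state=42, n_init=10)
cluster_ids = kmeans.fit_predict(vectors)  # one cluster id per feedback text
```

In practice, the value of K is usually chosen by trying several values and inspecting the resulting clusters, for example with the elbow method or silhouette scores.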
For example, for the question “What’s the primary benefit you receive from Landbot?”, the answers “Pretty easy to use” and “It is easy to set up” are grouped together, while answers such as “Too early to say” and “I am trying to discover that” fall into another category. The last step is to assign a label to each group that describes the entire set. With this approach, instead of having to annotate the entire dataset (5k records), we only have to annotate each of the 23 clusters.
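The annotation shortcut then amounts to naming the clusters rather than labeling individual records. In this hypothetical continuation of the sketch, the label names are placeholders:

```python
# Hypothetical label names; in reality each one is written by a human
# after reading a sample of texts from the corresponding cluster.
cluster_to_label = {i: f"cluster_{i}" for i in range(23)}
cluster_to_label[0] = "ease_of_use"       # e.g. "Pretty easy to use", "It is easy to set up"
cluster_to_label[1] = "too_early_to_say"  # e.g. "Too early to say"

# One label per record: 23 annotations instead of 5,000.
labels = [cluster_to_label[c] for c in cluster_ids]
```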
Training and Evaluating our Model
Once we have a sample of the feedback encoded and manually labeled, we can start training our AI model.
The model we decided to use is an artificial neural network (ANN). An ANN is made up of three types of layers:
- Input layer – The input layer is where the vector of numbers representing the text is fed in.
- Set of hidden layers – The hidden layers are where the magic happens! These layers are composed of a number of “neurons” and learn how to recognize patterns in the vectors encoding the original text.
- Output layer – The output layer produces the predicted label for the feedback.
We will not go into too much detail about how an ANN works; suffice it to say that the neurons in the hidden layers adjust their parameters until the network can correctly predict the desired outputs for the training data. The network is typically exposed to the training dataset multiple times, adjusting its parameters slightly on each pass, in order to find the best configuration. After many iterations (and a lot of computing power!), the ANN is able to output the feedback label with a certain degree of accuracy.
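As an illustration, here is what such a network could look like in Keras, continuing from the `vectors` and `labels` of the previous snippets. The number and size of the hidden layers and the training settings are assumptions for the sketch, not our production configuration:

```python
# A minimal sketch of the feedback classifier as a small neural network.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow import keras

# Encode the string labels as integers and hold out a test set.
label_encoder = LabelEncoder()
y = label_encoder.fit_transform(labels)
X_train, X_test, y_train, y_test = train_test_split(
    vectors, y, test_size=0.2, random_state=42
)

classifier = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),      # input layer: the text vector
    keras.layers.Dense(128, activation="relu"),  # hidden layers learn the patterns
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(len(label_encoder.classes_), activation="softmax"),  # output: one probability per label
])
classifier.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
classifier.fit(X_train, y_train, epochs=20, validation_split=0.1)
```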
After the model is trained, we test it on a new dataset composed of text that was not used during training. To evaluate the performance of the classifier, we use precision and recall metrics.
The results table below shows that model performance is good for the labels that are well represented in the sample: precision and recall reach 0.90 for some categories, which is pretty good! However, performance is not so great for the labels that are rare in the sample: precision and recall fall below 0.50 for some categories, which means that most of the time the model is unable to identify those labels.
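These per-label metrics can be computed with scikit-learn’s classification_report, continuing the sketch above:

```python
# Per-label precision and recall on the held-out test set.
from sklearn.metrics import classification_report

y_pred = classifier.predict(X_test).argmax(axis=1)  # most probable label per text
print(classification_report(y_test, y_pred, target_names=label_encoder.classes_))
```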
Deploying our Feedback Categorization Model
To use our model with new feedback entries, we need to “deploy” it so that we can run it whenever we need to.
For this project, we decided to deploy the trained model as a Google Cloud Function.
This is an ideal way to serve the model because it can be called by other applications or devices. For example, we can call the Google Cloud Function exposing the model from a Landbot bot using our Webhooks feature!
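As a sketch of what such a deployment could look like, here is a minimal HTTP Cloud Function using the functions-framework library. The saved-model file name, the label list, and the request format are hypothetical; only the decorator and the request handling reflect the Cloud Functions Python API:

```python
# Hypothetical HTTP Cloud Function that categorizes one feedback text per request.
import functions_framework
from sentence_transformers import SentenceTransformer
from tensorflow import keras

# Loaded once per instance (at cold start), so each request only runs inference.
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
classifier = keras.models.load_model("feedback_classifier.keras")  # hypothetical file
LABELS = ["ease_of_use", "integrations", "pricing"]  # hypothetical subset of the 23 labels

@functions_framework.http
def categorize_feedback(request):
    payload = request.get_json(silent=True) or {}
    text = payload.get("feedback", "")
    vector = embedder.encode([text])
    label_id = int(classifier.predict(vector).argmax(axis=1)[0])
    return {"label": LABELS[label_id]}
```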
Finally, we used the categorization model to build a structured feedback repository, where we gather all the labeled feedback from different sources and enable everybody at Landbot to get insights into the feedback our customers send us.
For example, our Product team can easily see how many customers are requesting new integrations and filter feedback for only that topic.
Summing Up
One of our main goals at Landbot is to understand our customer needs and pain points better. In this article, we showcased how we implemented a semi-automated feedback categorization system to classify all the text our customers send us into distinct groups. The system relies on a natural language processing model, which classifies the feedback into a set of labels so that we can analyze it further.
Moreover, creating a centralized labeled feedback repository will help us track customer requirements and prioritize our development roadmap. From now on, as soon as a customer provides us with text feedback, it will be categorized and added to the repository. We will be able to prioritize our work based on the improvements that our customers suggest.
Last but not least, if you're interested in using AI in your Landbots, please don't hesitate to contact us. We would be happy to discuss how we can help you use this exciting technology to improve your business!