...

Application & Perception of Chatbots: Then and Now

Illustrator: Ana Galvañ
(Illustration: chatbot usage then and now)

Please note that 'Variables' are now called 'Fields' in Landbot's platform.

If there's one thing that evolved even faster than the chatbot technology or the providers' landscape in the past few years, it's the use cases companies come up with to delight users. 

We all remember the times - not so long ago - when a bot would only collect an email address. 

Today, things are getting more than serious.


The Early Days...

Yes, yes, you might be thinking: "Wait a minute! LUIS from Microsoft and Watson from IBM have been around for a while, deploying gigantic and successful projects for Fortune 500 companies." And I agree that KLM's first chatbot venture was pretty sophisticated and surprisingly functional, true. But let's face it: five years ago, end-users were not even ready for what was coming.

A lustrum was enough for us at Landbot to see a significant shift in what users think, expect, and experience when stumbling upon a chatbot, and that has a lot to do with the change in how companies use them.

I'm happy to share that (likely biased) evolution with you. An evolution that went from chatbots trying to fool everyone into believing they were human (and failing miserably) to users genuinely enjoying finding out what type of coffee lover they are with an honest virtual assistant.

The Expectations Game

In the last century, when chatbots were born, they were used mainly as experimental tools whose main aim was to imitate a human.

The Turing Test (1950) is clear evidence of this. It consisted of a sort of exam a machine would only pass if the human having a conversation with it couldn't tell there was a machine on the other side. And this was probably the origin of the expectations dilemma - first experienced by the iconic ELIZA.

Many years passed before the big chatbot revolution took place.

In 2016, Mark Zuckerberg announced that Facebook would launch the Messenger platform with chatbots. This meant that millions of developers worldwide would now have access to a hub where they could create conversational experiences for the channel. 

The hype begins!

Overnight, thousands of companies started creating chatbots. It felt like the Social Media revolution, when not having a Facebook page was like shooting yourself in the foot. 

And what happens when you put a new technology whose main challenge is expectations management in front of millions of hungry end-users?

They'll get it wrong and will eventually try to break it.

I remember the very first chatbot we put on our website, back in 2016. 50% of the visitors would try to fool the bot - I guess to prove to themselves that they were better than a machine 😂 - while most of the remaining 50% didn't really know what to do with it. Some didn't even know whether they were speaking to a human or a machine.

Ever since, we've encouraged our customers to have their bots clarify for the users that it's a virtual assistant handling the conversation. In other words, setting expectations at the earliest possible touchpoint.

Moreover, using the first couple of messages to explain what the chatbot is capable of is often a must. If the user is looking for customer support and the bot won't provide it, let her/him know. Otherwise, they'll ask for it and will end up leaving you a negative review on G2.

Back to the 2016 craze... A quick search in Google Trends will show you just how big the chatbot thing was becoming.

In 2017, in the middle of the perfect storm, we launched Landbot, achieving huge success among companies and individuals alike:

'Hyper-customizable website chatbots? Shut up and take my money!'


Dumb Chatbots to the Rescue!

Before continuing, it's important to share the two main branches of our conversational friends out there: rule-based and AI-based. 

While AI bots are what first comes to mind when thinking of virtual assistants, the truth is they were not the ones to “break the ice” with the audience.

Why?

Precisely because they were trying too hard to be human. 

Indeed, AI-based bots give users freedom by processing what they say in real time and providing NLP-based answers.

However, as cool as it sounds, the problem with freedom is that it often comes hand in hand with chaos. 

You even run the risk of becoming a meme!

(PayPal's convo screenshot of a scam victim being greeted by their bot 😄🤦‍♂️)


On the other hand, a rule-based bot usually relies on a CUI - Conversational User Interface - where users choose from a predefined set of options or submit pre-formatted answers.

This version makes it easier to control the user's expectations since they already know which options they can choose. 

We used to kindly name them 'dumb bots'.

(Screenshot of a Landbot block with Button options - builder and front-end)
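To make the "dumb bot" idea concrete, here's a minimal sketch of a rule-based flow - a hypothetical Python example, not Landbot's actual data model - where every step offers predefined buttons and the clicked button decides the next step:

```python
# Minimal rule-based ("dumb") chatbot sketch. Each step shows a message plus
# predefined buttons; the chosen button decides the next step, so there is
# no free-form input to misinterpret. Steps and labels are made up.
FLOW = {
    "start": {
        "message": "Hi there! What brings you here today?",
        "buttons": {"Pricing": "pricing", "Support": "support"},
    },
    "pricing": {
        "message": "We have Starter and Pro plans. Want a quote by email?",
        "buttons": {"Yes, please": "collect_email", "No, thanks": "bye"},
    },
    "support": {
        "message": "Our help center covers most questions: https://help.example.com",
        "buttons": {"Back to start": "start"},
    },
    "collect_email": {"message": "Great - what's your email?", "buttons": {}},
    "bye": {"message": "No problem, have a great day!", "buttons": {}},
}

def next_step(step_id: str, clicked: str) -> str:
    """Return the next step for a button click; stay put if the click is unknown."""
    return FLOW[step_id]["buttons"].get(clicked, step_id)

# Example run: the visitor clicks "Pricing", then "Yes, please".
current = "start"
for click in ["Pricing", "Yes, please"]:
    print("Bot:", FLOW[current]["message"])
    current = next_step(current, click)
print("Bot:", FLOW[current]["message"])
```

Nothing clever, no NLP - and that's precisely why it's so hard for the user to break.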

Interestingly enough, at Landbot we originally used to combine buttons and open-text answers within the same question, in case the user wanted to pick an unlisted option. We thought the experience would be richer... It wasn't!

Users would type in things the chatbot was not ready to process, triggering a generic "error" message since none of the pre-set keywords matched. Yep, we decided to remove the open-text option and stick to buttons.

The learning? 

People don't care if the bot can pass for a human, as long as it does its intended job.

And so, the dumb bots did something their AI counterparts had failed to achieve in decades - persuade the general public that they can, indeed, be useful.

But don't misunderstand me, it's not all black and white. AI bots are impressive. Today, well-trained and cleverly designed AI-based chatbots can deliver unique experiences. It's just that not every business can afford a five-figure deal to put one in place.

Luckily, services like Google's Dialogflow can help you build human-like conversations at scale without breaking the bank. What's more, no-code platforms like Landbot enable you to enrich those natural conversations with clever (or dumb?) UI elements to streamline the interactions.
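For reference, this is roughly what the standard detect-intent call looks like with the google-cloud-dialogflow Python client - the project ID, session ID, and sample question are placeholders for your own agent:

```python
# Send a visitor's message to a Dialogflow agent and read back the matched
# intent and its canned reply. Requires the google-cloud-dialogflow package
# and Google Cloud credentials; "my-project" and "visitor-123" are placeholders.
from google.cloud import dialogflow

def detect_intent(project_id: str, session_id: str, text: str, language: str = "en"):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language)
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    return result.intent.display_name, result.fulfillment_text

intent, reply = detect_intent("my-project", "visitor-123", "How much is the Pro plan?")
print(intent, "->", reply)
```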

TOP Secret Tip 🤫: Sure, even when working low-code/no-code, you still need to put quite a few resources into bot development when using NLP, but it can be worth it. And we've built a whole course to teach you how!

The point is, dumb bots and their creative applications in marketing, support, and even sales paved the way to acceptance for all types of assistants.

Here’s what happened...

From Email Collectors to Conversational Apps

In the early days of Landbot, companies would create very simple chatbots, often mimicking our own. "Hi there, what's your name?" usually followed by "In case we get disconnected, can I get your email?" - what a classic! 

They wouldn't even connect it to anything but rather download the data captured straight from the app. And that was it. 

Months and months of development and polishing, and Landbot was just a basic form with eyes - that, well, did convert way more visitors into leads. 😎

As we delved into the Lead Generation space, we launched new features, question types, customization options, and, more importantly, integrations. We wanted chatbots to be at the core of your marketing stack, guaranteeing that the information flows seamlessly across your apps.

With that arsenal of cool stuff ready to impress users, we started creating templates. Lots of them! Simple ones, like the example above, but also complex, full-of-conditions ones. And, we also focused on user education through webinars, the Academy, knowledge base, and more. 

After that, we started to see things getting serious.

I very well remember a Success call with one of our early customers, a law firm, back in 2019... I had spoken with them a few months before. They had replaced their website form with a landbot to collect contact information from visitors interested in their services, but told me they were building something else.

During the call, they shared their screen with me because something wasn't working in the builder, and what I saw completely changed the way I looked at chatbots. They had tens of webhooks and tons of conditions powering the conversation with the user: the flow checked external databases and ran calculations to provide quotes in real time and send proposals via email, while everything was stored in their CRM. "That's some powerful use case!" I thought.
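I obviously can't share their exact build, but the pattern is easy to sketch: the bot POSTs the fields it has collected to a webhook, the webhook looks things up and calculates a price, and the response comes back as values the conversation can store and reuse. A hypothetical endpoint - Flask here, with made-up field names and rates - could look like this:

```python
# Hypothetical quote webhook: the bot POSTs collected fields as JSON, the
# endpoint calculates a price and returns JSON the bot can map back into
# its own fields. Service names, rates, and the 21% tax are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

HOURLY_RATE = {"contract_review": 120, "incorporation": 150, "litigation": 200}

@app.post("/quote")
def quote():
    data = request.get_json(force=True)
    service = data.get("service", "contract_review")
    hours = float(data.get("estimated_hours", 1))

    subtotal = HOURLY_RATE.get(service, 120) * hours
    total = round(subtotal * 1.21, 2)  # made-up tax rate

    return jsonify({"service": service, "hours": hours, "quote_total": total})

if __name__ == "__main__":
    app.run(port=5000)
```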

And that is, if you ask me, the beauty of bots in general and Landbot in particular. 

A user might receive a newsletter inviting her/him to a webinar and register with a single click on a bot's button. Simple, huh? Only on the surface. Down the rabbit hole, bricks with hidden fields plus webhooks and Slack notifications enable the company to provide such a frictionless customer experience.

And that's what this is all about!

Just because a bot doesn’t rely on natural language processing, it doesn’t mean it’s dumb. Behind the scenes, it’s probably hiding a treasure chest of integrations, careful connections, and complex calculations. 
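Take the webinar example above: once a webhook has the hidden fields, telling the team about the registration is a single call to a Slack incoming webhook. A tiny illustrative snippet - the webhook URL and field names are placeholders:

```python
# Post a webinar registration to Slack via an incoming webhook. The URL and
# field names are placeholders; Slack incoming webhooks accept a simple JSON
# payload with a "text" key.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify_registration(fields: dict) -> None:
    message = (
        f"New webinar registration: {fields.get('name', 'unknown')} "
        f"<{fields.get('email', 'no email')}> via {fields.get('utm_source', 'newsletter')}"
    )
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
    response.raise_for_status()

notify_registration({"name": "Ada", "email": "ada@example.com", "utm_source": "newsletter"})
```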

While advancements in conversational tech are undeniably a key part of their growing popularity, it’s not what triggered the true change. 

The true change happened when we stopped building chatbots to be human and started to build them to be functional. 

The Future of Bots

There's more to come. 

Way more.

As we keep adding more and more functionality to the mix - formulas, API and channel integrations, lead scoring blocks, collaborative features - chatbots will get more sophisticated and powerful.

Modest, standalone experiences are giving way to Communication Automation at its best.

Companies no longer see chatbots as a conversational way to collect essential info, but as a critical piece in their workflows that lets data flow seamlessly, back and forth, making business communications more efficient - and customers happier.