Experimentation Excellence: AI Product Development with Isa Garate

  • How AI can help personalize and optimize the customer onboarding process
  • The importance of consistent measurement in developing and refining AI technologies
  • Critical metrics to measure across AI implementation and iteration

It’s no secret that sales are all about people and problems. The more you understand the people, the problem, and the product, the better the sale goes. Solving the underlying problem is what makes a sale work in the long term, rather than providing a temporary fix.

AI is an impactful ingredient for improving revenue pipelines because it can unearth information on leads and compile that data without burdening or slowing down potential customers.

Isa Garate, VP of Product at Landbot, has harnessed her years of experience in product management and strategy to drive experimentation and growth at Landbot through AI integration.

In her conversation with Rachel Ann Kreis, VP of Marketing at Landbot, and Jiaqi Pan, CEO & Co-Founder at Landbot, Isa shares her view on what it takes to successfully apply AI to the sales process and product development.

How AI Helps Personalize and Optimize the Customer Onboarding Process

Early in Landbot’s history, Isa noticed users were running into issues while using the tool and weren’t sure what they were looking for. There was a disconnect between the product’s capabilities and users’ comprehension.

“We didn't know what to do. So we put an option in [the sign-up process] that said ‘Build it for me,’ and it connected to a product person,” Isa says. “We would try to build something for them. It was our way of understanding what they were seeking.”

After the explosion of ChatGPT, that option evolved into AI building a solution for users based on the context they provide.

“We learned over time that the more focus you give to the AI, the better results it will give you. If you provide a huge context, it will probably hallucinate because it doesn't know everything,” Isa says. “We understood the more context you give and the smaller the scope base, the better it works.” 

Fast forward to today, and that's the goal: personalizing the user experience in the product to deliver the best solution possible.

AI thrives when it has the answers to questions such as: 

  • What is the end goal of this bot?
  • What are the long-term goals?

With specific data and parameters, AI can build the solution users are looking for and help them onboard without the stress and overwhelm that sometimes accompany new tech adoption.
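
To make that concrete, here is a minimal, hypothetical sketch of the idea in Isa’s quote: keep the context specific and the scope small. The field names and the generate_bot() placeholder are illustrative assumptions, not Landbot’s actual implementation.

    # Illustrative sketch only: turn a few focused onboarding answers into a
    # narrowly scoped prompt instead of dumping broad, unfocused context.
    # Field names and generate_bot() are hypothetical, not Landbot's API.

    def build_scoped_prompt(end_goal: str, long_term_goals: str, channel: str) -> str:
        """Combine a small, specific set of answers into a tightly scoped prompt."""
        return (
            "You are building a single chatbot flow.\n"
            f"End goal of this bot: {end_goal}\n"
            f"Long-term goals: {long_term_goals}\n"
            f"Channel: {channel}\n"
            "Stay within this scope; do not invent requirements beyond it."
        )

    def generate_bot(prompt: str) -> str:
        """Placeholder for whichever model call actually generates the flow."""
        raise NotImplementedError("Swap in your LLM provider of choice.")

    prompt = build_scoped_prompt(
        end_goal="Qualify inbound leads and book a demo",
        long_term_goals="Shorten the sales cycle",
        channel="Website widget",
    )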

Consistent Measurement in Developing and Refining AI Technologies

People make mistakes. Learning through failure is intrinsic to forming and refining hypotheses through experimentation, including accounting for bias along the way, and the path to better AI implementation is no different.

“Every person has a different context and environment,” Isa says. “That means not everyone is the same, and we need to test our thoughts and hypotheses.”

This iteration process starts with a few simple questions: 

  • When does the user have the problem?
  • How often do they have the problem?
  • How urgent or important is the problem?
  • How many users are affected by the problem?

Answering those questions helps uncover where the priority should lie and what direction the solution should take.
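
As a rough illustration only, here is one way those four questions could be turned into a comparable score. The 1-to-5 scales and the multiplicative formula are assumptions for the sketch, not a framework Isa prescribes.

    # Illustrative only: score problems by how often they occur, how urgent they
    # are, and how many users they affect, then rank them. Scales and formula
    # are assumptions, not a prescribed prioritization framework.

    from dataclasses import dataclass

    @dataclass
    class ProblemCandidate:
        name: str
        frequency: int   # how often users hit the problem, 1 (rarely) to 5 (constantly)
        urgency: int     # how urgent or important it is, 1 to 5
        reach: int       # how many users are affected, 1 (few) to 5 (most)

        def priority_score(self) -> int:
            return self.frequency * self.urgency * self.reach

    candidates = [
        ProblemCandidate("Users stall during bot setup", frequency=4, urgency=5, reach=4),
        ProblemCandidate("Exported reports are hard to read", frequency=2, urgency=2, reach=3),
    ]

    for c in sorted(candidates, key=lambda c: c.priority_score(), reverse=True):
        print(f"{c.name}: score {c.priority_score()}")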

“We'll measure whether it's working or not, and we're trying to give it a timeline,” Isa says. “So [for example] we will measure within two weeks, and we need to achieve a 5-10% improvement, and then also try to emphasize the cost of the experiment.”

Once you determine that the cost justifies the outcome, you must continue to measure and track improvement against the percentages you initially set to decide whether the solution is viable long term.
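
Here is a minimal sketch of that kind of check, assuming the two-week timebox and the lower bound of the 5-10% target Isa mentions; the cost and budget figures are placeholder values, not real data.

    # Illustrative only: decide whether a timeboxed experiment hit its target
    # (e.g. a 5-10% lift within two weeks) at an acceptable cost.

    def evaluate_experiment(
        baseline: float,
        measured: float,
        days_elapsed: int,
        timebox_days: int = 14,        # "we will measure within two weeks"
        min_improvement: float = 0.05, # lower bound of the 5-10% target
        cost: float = 0.0,
        budget: float = 0.0,
    ) -> str:
        improvement = (measured - baseline) / baseline
        if days_elapsed < timebox_days:
            return f"Keep measuring: {improvement:.1%} lift so far, timebox not reached."
        if improvement >= min_improvement and cost <= budget:
            return f"Ship it: {improvement:.1%} lift within budget."
        return f"Stop and rethink: {improvement:.1%} lift, cost {cost} vs budget {budget}."

    # Placeholder numbers: a conversion rate moving from 20% to 22% in two weeks.
    print(evaluate_experiment(baseline=0.20, measured=0.22, days_elapsed=14,
                              cost=800, budget=1000))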

“We need to measure because if you say, ‘Okay, if I give it one more week, then it will work,’ it's kind of — you fall in love with the solution, and then you will think that's the only one, and there are no other solutions that could work,” Isa says.

Falling in love with a solution to the extent of blinding yourself to alternative methods can result in a slowed-down process and unnecessary expenses. Keeping a close eye on iterations and outcomes helps mitigate that risk.

What to Measure Across AI Implementation and Iteration

There are two main types of metrics for determining whether you’re going in the right direction.

Quantitative metrics: Measuring whether users are achieving their objectives. For example, if the tool's objective is to schedule appointments, a successful integration would allow users to actually schedule them. Costs are also a critical aspect of quantitative metrics.

Qualitative metrics: Determining how users feel while using the tool, their mood, and their satisfaction with the service. Because chatbots tend to have a negative connotation, it’s crucial to measure what emotions the program elicits.
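
As a hedged illustration, here is how those two kinds of metrics might be computed from a hypothetical log of chatbot sessions. The field names and the post-chat satisfaction rating are assumptions for the sketch, not Landbot’s actual instrumentation.

    # Illustrative only: quantitative metrics (objective completion, cost) and a
    # qualitative proxy (satisfaction rating) over a hypothetical session log.

    sessions = [
        {"appointment_booked": True,  "llm_cost_usd": 0.04, "satisfaction": 5},
        {"appointment_booked": True,  "llm_cost_usd": 0.06, "satisfaction": 4},
        {"appointment_booked": False, "llm_cost_usd": 0.03, "satisfaction": 2},
    ]

    # Quantitative: did users achieve the objective, and at what cost?
    completion_rate = sum(s["appointment_booked"] for s in sessions) / len(sessions)
    cost_per_session = sum(s["llm_cost_usd"] for s in sessions) / len(sessions)

    # Qualitative proxy: how users rate the experience, e.g. via a post-chat survey.
    avg_satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

    print(f"Objective completion rate: {completion_rate:.0%}")
    print(f"Average cost per session:  ${cost_per_session:.3f}")
    print(f"Average satisfaction:      {avg_satisfaction:.1f} / 5")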

“We need to understand how technology is performing so we can integrate different technologies if something is not working. Cost is also a very important thing in AI,” Isa says. “Nobody knows how it will evolve, where it will evolve, or what new trends we will have, so you need to understand what is cost-effective for users and for you as a company.”

As AI use cases continue to evolve and unfold, consistent and intentional measurement is a must for successful implementation and iteration.

Interested in learning more? Listen to our full conversation with Isa on the latest episode of Ungated Conversations, where we dive into leveraging AI in the onboarding process, effective measurement and hypothesis testing, the importance of qualitative and quantitative metrics in AI technologies, and more.

Listen on Apple Podcasts, Spotify, or your favorite podcast player.