Buying decisions keep changing, and the last couple of years, in particular, have seen a big wave of transformation. One thing that’s remained constant, though, is that most buying decisions don’t happen on that first website visit.
For marketers, it’s vital to understand users and how they behave at each stage of their buying journey, and to tailor messaging to where users stand in that timeline. This is where a lifecycle marketing strategy steps in — to serve as a customer barometer and help send out the right messages at the right time.
But even then, how is one to know what will resonate better with customers?
Phil Gamache is no stranger to that question. Experimentation has played a significant role throughout his career in marketing, including his most recent position as Director of Growth Marketing for Lifecycle at WordPress.com. It was even the subject of a three-part series of his podcast, Humans of Martech.
In the latest episode of Ungated Marketing, Phil Gamache shares his approach to marketing experimentation, with a particular focus on Growth and Lifecycle Marketing.
Lifecycle Marketing: a Customer Barometer
Lifecycle marketing definitions and strategies can vary from company to company. When he talks about lifecycle marketing, Phil Gamache uses an analogy based on today’s customers’ buying decisions. Most of them don’t happen after one single website visit, whether we’re talking about B2B or B2C customers, and there are a lot of different touch points where businesses can step in to influence them.
“Customers aren’t sure if you’re going to meet their needs; they need to read a couple more things, or ask for advice on a Slack channel or read your Twitter thread. It takes time to win customers, and there are multiple touchpoints that are going to happen. Trust builds over time, so I would consider lifecycle [as] a marketing strategy that includes all of those touch points.” Phil adds that these touch points can and should be mapped along the customer journey to identify customers’ pain points at specific stages of the funnel.
To put it simply, it is about understanding users and focusing on improving their experience, providing them value, and resolving those pain points. Phil calls it a “customer empathy barometer,” the primary goal of which is to “send the right message to the right person at the right time.”
Easier said than done.
The big question is, how do you turn that premise into something actionable?
In terms of execution, Phil explains, there are a lot of Martech tools available that help marketers implement the type of communication or message they want to send out. However, the key lies in understanding customers and talking to them based on “where they are in the timeline, as opposed to where the companies are.” As Phil further puts it, “none of [the Martech tools are] really useful if you don’t have a lifecycle strategy, if you don’t understand your users, if you don’t know that, at the earliest stage of [their] journey, users in this segment are trying to do this job versus users that are in the second stage of the funnel [and] are trying to do something completely different.”
A Cross-Channel Endeavor
One part of achieving this is letting go of the idea that lifecycle marketing and marketing automation are all about email.
Phil explains that, even though email is where both concepts initially put down roots, nowadays they are more of a cross-channel endeavor, since there are many other channels that customers, depending on where they are in the journey, may prefer over email: SMS messages, mobile push notifications, in-app notifications, advertising, and social media, to name a few.
That’s not to say email has become obsolete. Phil mentions it as a tool “to get [users] back into the product and activate them. But once they’re in there, you don’t need to use email.” If we’re talking about sending out the right message to the right person at the right time, we need to consider all the different channels that work best to achieve that goal.
All this raises a second question: even after you’ve decided on your lifecycle marketing strategy, how are you supposed to know which messages to send out through which channel to meet your objective?
Experiments in Marketing
Experimentation is a key element in marketing. But what exactly does it mean in that context?
For Phil, marketing experimentation “is less about trying to find that big win or that big idea. Usually, that’s what you think of when you think of experiments, ‘Oh, what’s that big idea that we’re launching?’ Major breakthroughs actually come in very rare forms when you’re talking about huge wins and big projects. Typically, experimentation is a game of continuous small wins that add up over time incrementally and boost revenue metrics.”
Here is how he usually approaches the process of setting up a new experiment. First, there is the ideation stage, where he sits down with his team to come up with ideas. Then comes the prioritization of those ideas based on the effort needed to apply them and their likelihood of impacting key conversion metrics. The experiments are then run and, finally, their results are analyzed.
In Phil’s team, there is an extra fifth step to each marketing experiment. He explains: “a very basic definition of a good experiment is when you're able to learn something from it. And if you're not like sharing and documenting the insights that you get from the results of your experiments, are you really learning from it? In my past, I've always had an extra step in that process where we build out this kind of internal knowledge base of results from past experiments. And they're shared on a monthly basis [with] different folks in the company.” This knowledge base also serves as a way for his team to keep track of the experiments and not waste any resources on something that has already been tested and for which they already have results.
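The prioritization step can be sketched in code. The following is a minimal, hypothetical example using a simple ICE-style score (Impact, Confidence, Ease) — a common prioritization heuristic, not necessarily the exact framework Phil’s team uses; the idea names and scores are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str
    impact: int      # expected effect on key conversion metrics, 1-10
    confidence: int  # how sure we are the effect will materialize, 1-10
    ease: int        # inverse of the effort needed to apply it, 1-10

    @property
    def score(self) -> float:
        # Average the three dimensions into one ranking score
        return (self.impact + self.confidence + self.ease) / 3

ideas = [
    ExperimentIdea("Segmented welcome email", impact=8, confidence=6, ease=7),
    ExperimentIdea("SMS re-engagement nudge", impact=5, confidence=4, ease=3),
    ExperimentIdea("New onboarding checklist", impact=7, confidence=5, ease=4),
]

# Highest-scoring ideas get run first
for idea in sorted(ideas, key=lambda i: i.score, reverse=True):
    print(f"{idea.score:.1f}  {idea.name}")
```

Whatever scoring model a team picks, the point is the same: effort and likely impact are made explicit before anything is built.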
Startups vs. Bigger Companies
Thinking back on his experience at Klipfolio, he recalls how different it was to run marketing experiments there than presently at WordPress.com. “There was very little scientific rigor in our experimentation because we were trying to move really, really fast; we were like a two- or three-person marketing team. So on top of email and lifecycle and automation, we were focused on ads. And we were focused on product marketing, and we [owned] the website; there was a ton of stuff to do. There wasn't enough time to spend two or three weeks designing a rigorous A/B test where we're only changing one or two things and trying to optimize something that's existing.”
More often than not, in startups or other smaller organizations, what’s being launched is completely new. Phil mentions that A/B testing and experimentation aren’t really set up for net new ideas; they’re made for optimizing things that already exist.
What he did at Klipfolio and, later, at Close, was take a more baseline approach to experimentation. He clarifies: “If we’re using a free trial email onboarding, for example, and we have nothing there right now, we basically have a baseline already of how many of those free trials convert to paying customers. And if we were to launch new emails to those users, we can just launch them to 100% of free users. And then, we compare the one month that we launched this versus the month that we didn't launch it. What were the differences in conversion rate? And then we make the best-guess approach, like ‘we lifted it by X percent,’ it must be the right decision.”
In startups, that’s usually a good enough approach. Phil further argues that not everything needs to be a full-blown experiment. There are certain changes or new marketing tactics, like launching a new onboarding sequence in your product, that don’t require an A/B test. For him, comparing the results of something you’ve recently introduced to a baseline is much more valuable than doing an A/B test.
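The baseline approach Phil describes boils down to simple arithmetic: compare the free-trial conversion rate before and after launching the new emails to everyone. The sketch below illustrates that calculation; the numbers are invented for the example.

```python
def conversion_rate(conversions: int, trials: int) -> float:
    """Share of free trials that converted to paying customers."""
    return conversions / trials

# Baseline month: no onboarding emails in place yet (illustrative numbers)
baseline = conversion_rate(conversions=42, trials=600)

# Launch month: new emails sent to 100% of free-trial users
launch = conversion_rate(conversions=57, trials=610)

# Relative lift over the baseline — the "we lifted it by X percent" figure
lift = (launch - baseline) / baseline
print(f"Baseline: {baseline:.1%}, after launch: {launch:.1%}, lift: {lift:+.0%}")
```

As Phil notes, this is a best-guess approach: the month-over-month difference could also reflect seasonality or other changes, which is exactly the rigor trade-off a small team accepts to move fast.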
In bigger companies, however, the scenario looks a bit different.
Since joining WordPress.com, Phil says his world has been “rocked a little bit in experimentation” because it was his first time working with a team of data scientists and people responsible for all marketing data. In the past, in smaller teams, that part of the process fell on his plate as well.
And the experimentation process itself is different, too. Referring to the same email example, Phil says: “In a bigger company, when you're optimizing things that are already existing, [for example,] there will already be a free trial setup, right? So it's not about launching a completely overhauled new experience of that. It's trying to find ways where you can optimize and change a few things, to maybe improve email deliverability. Or, maybe, we have new insights that tell us that free users are converting to paid users, if they do X and Y, [then] maybe our emails are going to start focusing on doing X and Y in the first couple of lines of our CTAs.”
Broadly speaking, at bigger companies, when it comes to the prioritization part of the process, there’s more of a balancing act between what you want to experiment with and what you can track results of. The feasibility of result tracking is one of the main ranking factors of the experiments that come up during the ideation phase.
Regardless of business size, experimentation is part of the culture that grows in your company and that teams should align on. As Phil puts it, “the marketing team and the growth team might have a very scientific approach to A/B testing and optimizing on stuff. But if your product team is moving really fast and making big changes, and not necessarily reporting and setting up experiments, that’s something that the culture needs to align on” in order for experiments to be successful.
How to Run a Marketing Experiment vs. How Not to Do It
Now, when it comes to the practical side of things, we know the outcome of an experiment can go either way: it can work, or it can fail.
Phil shares two examples of experiments he ran that show both sides.
As a successful example, he recalls an experiment he ran at Klipfolio, a dashboard tool that offers specific dashboards to different verticals like finance, sales, or marketing. However, the business was positioning itself as a dashboard for all shapes and sizes, and Phil mentions some difficulty they were having in doing specific positioning for each vertical. They ran the following experiment to address that: “We experimented with doing a welcome email. We had a free trial for Klipfolio [...], and when folks started a free trial, [they] got a welcome email, a very templated approach. But what we wanted to experiment with was, how can we segment that welcome email and send it and use the same language that someone in finance would, and send that to finance verticals, and send a different message to sales [people].”
As Phil continues to explain, customers in distinct verticals used different Klipfolio dashboards and cared about different metrics, so he saw a need to personalize the communication with each of them. “What we did was a really cool experiment. We started with three or four segments of verticals. We started asking on our free trial form, ‘what is the closest resemblance of your role right now?’ So we had marketing, sales, dev, and I think data analysts was the fourth choice. And then we had the welcome email rewritten by someone internally at Klipfolio that was in one of those job roles as well. So I sat with our Head of Sales, and we rewrote the welcome email, and it came from the Head of Sales. All of our welcome emails were welcoming users, and [they were] coming from someone that worked in the same world that they did and was using the same language and sharing dashboards that they used internally at Klipfolio.”
Before, Klipfolio’s welcome emails included a generic list of five dashboards that were users’ most popular choices but weren’t necessarily what each individual was interested in. But then, users started receiving emails from someone who worked in the same area as them and getting area-specific information they could make the best use of. As a result, “depending on the vertical you looked at, we had a 30 to 40 plus percent lift in click-through rates, just by sending a more personalized email from someone who lives in the same world.”
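The segmentation logic behind that experiment can be pictured as a simple lookup from the role collected on the free-trial form to a role-specific sender and dashboard list, with the old generic email as the fallback. This is a hypothetical sketch — the role names, senders, and dashboard titles below are illustrative, not Klipfolio’s actual templates.

```python
# Hypothetical role -> template mapping, keyed by the free-trial form answer
WELCOME_TEMPLATES = {
    "marketing": {"sender": "Head of Marketing", "dashboards": ["Campaign ROI", "Funnel Overview"]},
    "sales":     {"sender": "Head of Sales",     "dashboards": ["Pipeline", "Quota Attainment"]},
    "dev":       {"sender": "Engineering Lead",  "dashboards": ["Deploy Frequency", "Uptime"]},
    "data":      {"sender": "Lead Data Analyst", "dashboards": ["KPI Overview", "Cohort Analysis"]},
}

# The pre-experiment control: one generic email with the most popular dashboards
GENERIC_TEMPLATE = {"sender": "The Team", "dashboards": ["Top 5 Most Popular Dashboards"]}

def pick_welcome_email(role: str) -> dict:
    """Return the vertical-specific template, or the generic one for unknown roles."""
    return WELCOME_TEMPLATES.get(role, GENERIC_TEMPLATE)

print(pick_welcome_email("sales")["sender"])
```

The design choice is the interesting part: rather than writing four variants of the same copy, each email was rewritten by (and sent from) someone who actually held that role, so the language and example dashboards matched the recipient’s world.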
Another experiment Phil ran at Klipfolio wasn’t as successful.
To get users re-engaged and convert them into paying customers, the team, since Klipfolio was so vertically driven, sent very specific emails to people with high buying intent. He describes: “[When] expired trials that came back on our website and looked at one of our key integration pages, that would fire off an email a day later talking about that specific integration, and pushing to support if they needed help like integrating it.”
Initially, it seemed like it was working. Looking at the reports, it seemed like everyone who got one of Klipfolio’s new emails was converting well. “But when we did a bit more analysis on it, and we were trying to expand [the experiment] and we did a holdout test on it, [...] we looked at this cohort of people that are high intent and coming back and looking at these product pages [and thought] ‘let's actually not send emails to like 10% of them.’ And there was a very small difference between the holdout group and the group that we were lifting.”
In the end, Phil’s team realized that those high intent users were coming back to the product anyway and that the emails had very little impact.
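The holdout test that exposed this works by withholding the email from a random slice of the cohort (Phil’s team used about 10%) and then comparing conversion rates between the two groups. Here is a minimal sketch of that comparison; the group sizes and conversion counts are invented to mirror the outcome described above.

```python
def conversion_rate(group: list[dict]) -> float:
    """Fraction of users in the group who converted."""
    return sum(u["converted"] for u in group) / len(group)

# Hypothetical high-intent cohort revisiting a key integration page:
# 90% received the follow-up email, 10% were randomly held out
treated = [{"converted": 1}] * 46 + [{"converted": 0}] * 54  # got the email
holdout = [{"converted": 1}] * 44 + [{"converted": 0}] * 56  # did not

diff = conversion_rate(treated) - conversion_rate(holdout)
print(f"Treated: {conversion_rate(treated):.0%}, holdout: {conversion_rate(holdout):.0%}, "
      f"difference: {diff:+.0%}")
```

A near-zero gap between the groups is the tell: the high-intent users were converting on their own, and the email itself was adding almost nothing — which naive reporting on “everyone who got the email converted well” completely masks.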
The team may have invested time and effort into an experiment that didn’t meet initial expectations, but that only shows failing is part of the process. Without experimenting with different tactics, you won’t be able to figure out what works and what doesn’t.
Balancing Data and Creativity
A lot of marketing experimentation has to do with data, but there is a heavy creative component to it as well, especially during the first phase of coming up with ideas.
This means there might be a need to balance the more creative parts of an experiment against data that doesn’t always point in the direction one wants to take.
Phil explains how he does it.
“Everything that has to do with marketing automation and lifecycle can be lumped into two buckets. You have this cycle. At the top of the cycle, there's one bucket, and you have [another] bucket at the bottom of the cycle. At the top is your creativity: you're coming up with campaigns and ideas and copywriting. I like to think of the ideation phase of experimentation as totally a part of the creative process. [...] A big part of that creativity comes from coming up with ideas and focusing at the top of that marketing automation or lifecycle cycle, but it doesn't stop [there].”
In addition to the creative side of experimentation, there’s looking at the results and what the data tells you.
“You do need to report on things, and your automation tool is only as powerful as how well trained your team is on it and how well integrated it is with the rest of your stack. And what you're doing is actually lifting key metrics. Are you doing experimentation on all that happens at the bottom of the cycle? It [feeds] into each other: coming up with ideas, and then you report on them, and you have learnings, and then you go back to the top, and you have new ideas, and you'll optimize on those. So that's how I balance it.”
For Phil, though, this isn’t always a mindful process during which he blocks time for creativity and then for data analysis. He mentions it all as part of the same flow. Particularly in growth-driven companies that focus a lot on experimentation, as is the case now at WordPress.com, he explains that “you don’t necessarily have to dedicate time for one or the other, it just kind of comes naturally in the flow of projects.”
Understanding customers and meeting their needs wherever they are in their journey will always be a vital part of marketers’ jobs. And to truly understand which message will resonate better with them at each stage, there is no better way than to experiment with different tactics and see which ones deliver the best results.