5 Tips for having a BLAST with A/B testing


As a marketer, you often need to optimize between two or more options: which subject line leads to a higher open rate for your email campaigns, which landing page drives higher conversion, or which homepage content drives better SEO results.

But sometimes you want to answer a slightly different question: which marketing actions drive incremental purchase behavior, compared to doing... nothing?

Let me explain using an email marketing example. Imagine you want to launch a 10% discount email campaign to re-engage lost customers. A reasonable question to ask is: how many customers will use this discount to make a purchase they wouldn't otherwise have made? If your campaign converts 50 customers, but all of them would have made purchases on their own without the discount, then you haven't created value for your business. You've just eaten into your margins by spending marketing dollars on an unnecessary promotion. From a revenue perspective, if the campaign generated $100K in revenue, but $60K would have been generated anyway regardless of the campaign, then the incremental revenue of the campaign is $40K ($100K - $60K).
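The math above can be sketched in a few lines of Python. The figures are the hypothetical ones from the example, not real campaign data:

```python
# Incremental revenue: what the campaign added on top of the baseline.
campaign_revenue = 100_000   # revenue from customers who received the campaign
baseline_revenue = 60_000    # estimate of what they would have spent anyway

incremental_revenue = campaign_revenue - baseline_revenue
print(incremental_revenue)   # 40000
```

The hard part, of course, is estimating that baseline figure – which is exactly what the control group is for.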

Of course, we can't know for sure what somebody would have done if they hadn't received that discount email. But it turns out we can estimate it with remarkable accuracy by staging an experiment with a control group, also called a holdout sample. This means identifying a random subset of your population that receives nothing – and then comparing their behavior to that of the group that did receive the campaign.

It can be tricky to successfully launch a controlled experiment, but the dividends can be huge. A control group offers you an effective baseline, enabling you to get a quick read on what's working and what's not – and to quantify the exact ROI that your strategy is driving.

So now that you know why a control group matters, how do you actually go about setting one up successfully? Just make sure to have a BLAST:

B: Big

Marketing teams often treat controlled experiments like a visit to the dentist's office: unpleasant but necessary. They'll try to shrink the control group down to near-nothingness to maximize the earning potential of the test. The problem is that without a good-sized control group, it can be difficult or impossible to get a read on the baseline – that is, what your customers would have done without getting your email. Rule of thumb: start with a 50% control group, then whittle it down as you figure out which ideas work best.

L: Lookalike

The control group should be identical – absolutely identical – to the test group. If you're testing a 5% offer on customers from California, the control group should also consist of customers from California. The only way to ensure a true lookalike control group is to define the segment first, then randomly allocate customers between the test and control buckets. If your email platform does not provide a built-in control group feature, you can use Excel or a similar tool to create a random control group yourself and upload the two groups to your email platform.
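If you'd rather script the split than do it in Excel, a random allocation is only a few lines. This is a minimal sketch – the customer IDs and 50/50 split are illustrative, not tied to any particular email platform:

```python
import random

def split_test_control(customers, control_fraction=0.5, seed=42):
    """Randomly allocate a customer segment between test and control buckets.

    Shuffling the whole segment first, then cutting it, guarantees the
    two buckets are drawn from the same population (a true lookalike).
    """
    shuffled = customers[:]                     # copy; leave the input untouched
    random.Random(seed).shuffle(shuffled)       # fixed seed so the split is reproducible
    cut = int(len(shuffled) * control_fraction)
    control, test = shuffled[:cut], shuffled[cut:]
    return test, control

test, control = split_test_control([f"cust_{i}" for i in range(1000)])
print(len(test), len(control))  # 500 500
```

Export each list to CSV and upload them to your email platform as two separate segments.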

A: Apples to apples

The idea with a controlled test is to figure out exactly how much incremental difference your idea – and your idea alone – is driving. So apart from that one difference, the treatment and control groups should be treated exactly the same. If the treatment group is suppressed from your weekly newsletter, the control group should be as well. If the control group gets a Hulk Hogan ice sculpture in the mail, the treatment group should as well.

S: Same time period

The world changes. Your competitors launch promotions, your position in search rankings changes, and your strategy shifts over time. For this reason, it's vital to compare your test group to a control group tracked over the same period. In other words, last year's holdout group doesn't count, nor does last week's.

T: Trackable

Because the whole point of a controlled experiment is to measure the test group relative to the baseline, you need some ability to track the behavior of your control group over time. This is why email service providers (ESPs) can't offer true holdout testing: it's impossible to track the open and click-through rates of customers who didn't receive a particular email. Make sure you have a way of following those control group users so you can compare them to the group that got your campaign in terms of revenue, site visitation, and conversion during the test period.
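Once you can track both groups, the comparison itself is simple. Here is a hedged sketch of the revenue comparison – the function name and all figures are hypothetical, and real analyses should also check statistical significance:

```python
def incremental_revenue(treatment_revenue, treatment_size,
                        control_revenue, control_size):
    """Compare revenue per customer across the two groups and scale up the lift.

    The control group's revenue per customer is the baseline; anything the
    treatment group earns above that baseline is incremental.
    """
    rev_per_treated = treatment_revenue / treatment_size
    rev_per_control = control_revenue / control_size      # the baseline
    lift_per_customer = rev_per_treated - rev_per_control
    # Scale the per-customer lift up to the full treatment group:
    return lift_per_customer * treatment_size

# Hypothetical test period: 10,000 treated customers, 5,000 held out.
inc = incremental_revenue(100_000, 10_000, 45_000, 5_000)
print(inc)  # 10000.0
```

Here the treatment group earned $10 per customer against a $9 baseline, so the campaign drove roughly $10K of truly incremental revenue – not the $100K a naive read would suggest.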

To recap, the tl;dr version: to have a BLAST with your next marketing campaign and measure its true incremental impact, keep a holdout group that will not receive the campaign.

>> If you found this post helpful, check out Custora U for quick courses about Customer Lifetime Value and Segmentation. 
