You’re applying factors such as segmentation, personalization, lifecycle management, triggers and interactivity to your e-mail campaign. Great. But how do you really know whether a factor is performing as intended? You don’t, unless you test it.
Systematic testing is essential to determining how well e-mail is resonating with recipients and to gathering more data to help craft campaigns and fine-tune messages.
Test who, test what?
There are essentially two primary types of testing: control group and variable testing. Control group testing works by holding out a small portion of your database (somewhere between 3% and 10%) as a control that does nothing new, just proceeding with business as usual, while the rest of your audience receives a new treatment (e.g. a welcome message or new navigation) and is monitored over an extended period of time. Comparing the two groups lets you measure the incremental impact of various programs, mailings and messages.
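As a rough sketch of the mechanics described above (in Python, with a hypothetical audience size, a 5% holdout within the suggested 3%–10% range, and made-up response rates), the control-group split and lift calculation might look like this:

```python
import random

def split_control_group(audience, control_fraction=0.05, seed=42):
    """Hold out a small control group that proceeds with business as
    usual; everyone else receives the new treatment."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(audience)
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * control_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]  # (control, treatment)

def incremental_lift(treatment_rate, control_rate):
    """Incremental impact of the treatment relative to the control baseline."""
    return (treatment_rate - control_rate) / control_rate

control, treatment = split_control_group(range(10_000))
print(len(control), len(treatment))    # 500 9500
print(incremental_lift(0.045, 0.040))  # 0.125, i.e. a 12.5% lift
```

The fixed random seed is a deliberate choice: it keeps the control membership stable across reruns, which matters when you monitor the same groups "for an extended period of time."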
Through variable and multivariate testing, you measure and test individual elements (e.g. audience, subject line, offer, content, frequency and creative design) to determine what’s working. However, don’t test too many elements at once: results can quickly become difficult to “untangle,” and audiences may become too small to produce statistically significant results.
The fact is that neither control group nor variable testing should be conducted in isolation. You can even use control groups within variable testing to measure the lift resulting from a specific variable change. Additionally, with all testing, proper planning is as important as execution. Often, people don’t think about how they’re going to analyze test data, yet that analysis plan should shape how the test is performed. For starters, make sure you have a broad enough test bed, and ensure what you’re testing aligns with the key metrics you use to measure overall marketing campaign performance.
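One common way to check whether a test bed was broad enough is a standard two-proportion z-test on the results. The sketch below uses only the Python standard library; the conversion counts and list sizes are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference in conversion rates
    between variant A and variant B statistically significant?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: variant A converted 220 of 5,000; variant B 180 of 5,000.
z = two_proportion_z(220, 5000, 180, 5000)
print(abs(z) > 1.96)  # True: significant at the 95% confidence level
```

If the audiences were much smaller (say, a few hundred each), the same observed difference would fall below the 1.96 threshold, which is exactly the "audiences may become too small" trap mentioned above.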
How are your relevance factors performing?
Are your current segmentations effective and reflective of what interests each segment most? One simple evaluation is to test your subject lines across the various audience segments. You may find, for instance, that bargain shoppers respond better to a “sale” message in the subject line, while designer name brands in a subject line resonate with the fashion-forward segment. (The e-mail channel also provides an easy way to test price points. For example, you can deploy two messages with different price points to evaluate whether a 10% reduction in price will drive more purchases and greater overall ROI.)
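The price-point comparison reduces to simple expected-revenue arithmetic: does the lift in conversions at the lower price more than offset the smaller margin? A minimal sketch, with hypothetical prices and conversion rates:

```python
def campaign_revenue(price, recipients, conversion_rate):
    """Expected revenue from one price-point variant of the mailing."""
    return price * recipients * conversion_rate

# Hypothetical: full price converts at 2.0%, the 10%-off price at 2.4%.
full_price = campaign_revenue(50.0, 10_000, 0.020)
discounted = campaign_revenue(45.0, 10_000, 0.024)
print(discounted > full_price)  # True here: the lift outweighs the discount
```

With a lift of only 2.2% instead of 2.4%, the discounted variant would lose, which is why the test, not intuition, should decide.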
Personalization is one of the easiest factors to test. By using customer names in e-mail sent to one group and not including them in messages to your control group, you can determine whether using a name drives an incremental lift.
Additionally, we often think of preference centers when we think of personalization programs. You can test which single preference a customer responds to best in order to drive greater relevance in future communications. For example, ask customers when and where they are planning their next vacation, then split the audience and send one e-mail with “when” as the primary message and another with “where.” Comparing the responses tells you whether “when” or “where” is the principal motivator.
With lifecycle management, the objective is to determine which message not only resonates best with your audience, but also drives them to “convert” to the next stage of their lifecycle. By evaluating the responsiveness of a customer over time, you can clearly see whether the relationship is evolving, and use that information to plan future campaign content and timing.
Triggers are all about timing, so test your e-mails to determine the optimal time for a recipient to receive a triggered message (e.g. immediately versus 24 hours, three days or five days after the triggering event). You should also test a single message against a sequence to evaluate how many e-mails, and how often, make the biggest impact on your audience, and, overall, to determine how triggers affect your baseline strategy.
Abandoned shopping cart programs are a great example. By their nature they generate plenty of testing variables: how many days after abandonment to send a reminder, whether to present a discount or offer to drive recipients to purchase the item(s) in their carts, or whether to do nothing at all. Some companies have found only a very small incremental difference in the revenue generated by a plain reminder versus a special discount.
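A sketch of how such a three-way comparison might be tallied, with entirely hypothetical revenue figures, assuming equal-sized variant groups:

```python
def revenue_per_recipient(total_revenue, recipients):
    """Normalize revenue so variants of different sizes are comparable."""
    return total_revenue / recipients

# Hypothetical results, 1,000 recipients per variant
no_followup = revenue_per_recipient(4200.0, 1000)
reminder    = revenue_per_recipient(5100.0, 1000)
discount    = revenue_per_recipient(5250.0, 1000)  # revenue net of the discount

lift_of_reminder = reminder - no_followup  # what any follow-up adds
lift_of_discount = discount - reminder     # what the discount adds on top
print(lift_of_reminder, lift_of_discount)
```

In this made-up scenario, most of the lift comes from sending any reminder at all; the discount adds comparatively little, which mirrors the finding some companies report above.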
Is interactivity taking away from your call to action or enhancing the user experience? Test whether incorporating elements such as video links and user-generated content (e.g. survey results, product review ratings) into your e-mail messages makes a customer more likely to purchase. Even testing creative layouts, such as a short “postcard” style versus lengthy copy, can fine-tune your e-mail program tremendously.
Don’t waste time, don’t lose money—test everything
Unless you’re systematically testing your e-mail programs, you could be squandering time and money, either approaching your audiences with messages irrelevant to them or making unnecessary offers. Consider the opportunity cost of sending useless e-mail and the fallout of creating e-mail fatigue, which burns through the limited number of non-relevant “touches” a customer will accept.
Only through testing can you ensure that the other factors of relevance are positively influencing your e-mail marketing initiatives. Testing is also the most effective vehicle for gathering information that can help you to design and/or adjust campaigns to improve future e-mail performance, and thus, your company’s bottom line.
Ben Ardito is vice president of professional services with e-Dialog.