Many students approach tests with dread. So do many e-mail marketers. But they shouldn’t. Conducting e-mail tests may seem overwhelming: Which variables should I test? How can I be sure I’m doing it correctly? Approached methodically, however, testing need not disrupt your overall routine.
In fact, many marketing professionals believe testing should be part of your routine. Marketers should test within every message sent, says Kristen Gregory, e-mail marketing strategist at Bronto Software. “If nothing else, do a simple A/B split on the subject line each time and optimize results.”
SUBJECTS AND OBJECTIVES
What you should test depends largely on your goals. If you’re like most other marketers, your ultimate goal is to increase sales. That means determining what elements of your campaign are most likely to generate a lift in sales.
If your e-mail open rates are below average or have been on the decline, you might want to home in on subject lines first. If you’re happy with your open rates but disappointed in your click-through rates, perhaps you should make testing offers or calls to action a priority.
And if it’s your e-mail conversion rates that seem most in need of improvement, test your landing pages. Marco Marini, president/CEO of ClickMail Marketing, believes that landing pages are one of the most important, and most overlooked, elements when it comes to e-mail tests.
If in doubt — “I want to test everything!” — you’re probably best off beginning with subject-line tests. After all, they’re relatively simple.
Subject lines “don’t necessarily involve as much work as, say, a split on offer type, where you’d need two different pieces of creative, possibly approval by certain individuals to do so, set-up for different promotion codes, potential training of customer service, etc.,” Gregory says.
What’s more, subject-line tests can provide the greatest impact. Just about everything, including click-throughs and conversions, depends on recipients opening their e-mails, and subject lines are a key factor in open rates. “In the end, subject lines matter to everyone,” notes Adam Q. Holden-Bache, CEO/managing director of Internet marketing agency Mass Transmit.
Once you’ve determined which component of your e-mail to begin testing, you need to make sure that the test is a meaningful one. That means making certain there’s a clear difference between the elements being tested, says Stephanie Miller, vice president of market development at Return Path.
Tests conducted by outdoor apparel merchant Icebreaker during its holiday 2009 e-mail campaign offer several examples. In one test, part of its file received copy spotlighting the high average customer rating on Buzzillions.com; the other segment received copy highlighting a stellar magazine review.
According to Dylan Boyd, vice president of sales and strategy for eROI, which worked with Icebreaker on the campaign, the messages with the consumer reviews generated more sales than those with the magazine review. In another Icebreaker test, one segment received a promotion code for a free beanie with the purchase of any item from the company’s GT apparel line; the other segment was sent a code for 20% off any GT purchase. Although the read and click-through rates for both segments were similar, the free beanie resulted in more conversions.
These tests exemplify another recommended testing practice: Unless you’re an expert in testing, or have access to experts, test just one element at a time. “When you test more than one element, results get fuzzy,” Gregory says, “and which element really made an impact becomes more difficult to determine.”
Singling out which element of an e-mail to test isn’t the last of your decisions. Multiple components often make up each element.
When testing subject lines, for instance, you may opt to compare a short line vs. a long one, or a line that mentions brand names vs. a more generic one. You might test whether to repeat the company name in the subject line when it already appears in the from line, or even whether to use exclamation points.
To help you winnow your options, Miller suggests looking at past customer behavior and other data for clues as to the type of changes most likely to produce a change in performance.
Let’s say that a review of your open and click-through rates for the past year shows a significant spike in response for three particular e-mails. Looking at those three messages might reveal that they had more-promotional subject lines than most of the others, or that they included a different style of body layout. You might then consider testing hard-sell vs. soft-sell subject lines or two styles of creative templates.
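As a rough illustration of that kind of review, the sketch below flags past sends whose response ran well above the list's norm. The record layout, the metric names and the two-standard-deviation cutoff are illustrative assumptions, not anything the sources quoted here prescribe.

```python
def response_spikes(history, metric="open_rate", z_cut=2.0):
    """Flag past sends whose response spiked well above the norm.

    `history` is a list of dicts, e.g.
    {"subject": "...", "open_rate": 0.21, "click_rate": 0.034}.
    """
    rates = [msg[metric] for msg in history]
    mean = sum(rates) / len(rates)
    sd = (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5
    # Keep any message whose rate sits z_cut standard deviations above the mean.
    return [m for m in history if sd and (m[metric] - mean) / sd >= z_cut]
```

Whatever those outliers have in common, such as a hard-sell subject line or a particular template, becomes your test hypothesis.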
KNUCKLING DOWN
Now that you’ve settled on what to test, you need to settle on how to test. If you’re going for a simple A/B split, be sure that A or B is your control. In other words, don’t test two new elements against each other; if your subject lines tend to be about six to eight words long, don’t test a 20-word line against a two-word line.
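Mechanically, the split itself is simple. Below is a minimal sketch, assuming a plain list of recipient addresses; the function name, the fixed seed and the default test fraction are illustrative assumptions rather than anything the experts here prescribe.

```python
import random

def carve_test_segment(mailing_list, test_fraction=0.10, seed=42):
    """Randomly carve a test segment off the file.

    The test segment gets the one changed element (the challenger);
    the remainder keeps the usual treatment and serves as the control.
    """
    pool = list(mailing_list)
    random.Random(seed).shuffle(pool)  # randomize so the segments are comparable
    cut = int(len(pool) * test_fraction)
    return pool[:cut], pool[cut:]      # (test segment, control segment)
```

The fixed seed keeps the split reproducible, so you can always reconstruct who received which version.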
“I feel the value of a control group is frequently overlooked by marketers,” says Holden-Bache. “Make sure you’re able to determine if your test results provide an increase or decrease” against your usual practice.
And that requires having a large enough sample. How large is large enough, however, is open to interpretation.
A good rule of thumb, Miller says, is to run the test on at least 10% of your file, which means if you’re doing a straight A/B split, the remaining 90% is your control group. But if your entire list is relatively small — in the thousands rather than tens of thousands — 10% may not give you a large enough test segment to be statistically valid, Gregory adds: “As a rule, I say go larger with test segments if the list is on the smaller side.”
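Whether a segment is large enough to call a winner can be checked rather than guessed at. Here's a minimal sketch of a standard two-proportion z-test using only the Python standard library; the 5% significance bar and the sample figures are conventional statistical assumptions, not thresholds supplied by the article's sources.

```python
from math import erf, sqrt

def split_p_value(responders_a, sent_a, responders_b, sent_b):
    """Two-sided p-value for an A/B split, via the normal approximation."""
    p_a, p_b = responders_a / sent_a, responders_b / sent_b
    pooled = (responders_a + responders_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = abs(p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # Phi(z) computed via erf

# 5,000 sends per side, 900 vs. 1,000 opens -> p of roughly 0.011:
# under the usual 0.05 bar, so the gap is unlikely to be noise.
print(split_p_value(900, 5000, 1000, 5000))
```

With only a few hundred sends per side, the same two-point gap would not clear the bar, which is exactly why smaller lists need larger test segments.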
And don’t implement sweeping changes based on just one test, especially if your sample size is small. “If you have only, say, 10,000 names, do the same type of test over and over until you see a trend and then feel comfortable enough to move on,” Marini says.
Once you have run several similar tests and feel confident in declaring one of the tactics a winner, be sure to put that knowledge to good use. As our teachers used to tell us in school, the test itself is less important than what you learn from it.
Trial and errors
Don’t let these common mistakes result in a failing grade for your e-mail tests:
Not waiting for the test to finish. You sent out an A/B split to half of your file at 9 a.m.; now it’s 11 a.m., and your test version is outpulling your control significantly. Should you go ahead and send the remainder of your file the test version?
Not necessarily. Unless you reap most responses to your e-mails within two hours of deployment, don’t assume that how these early-bird responders react will indicate how your file as a whole will react.
“You need to be sure that subscribers have had enough time to see, open and interact with your messages,” says Kristen Gregory, e-mail marketing strategist at Bronto Software. A test segment that looked like it was “losing” when you declared a winner can pull ahead in the end, she says. “Always take a second look when more time has passed.”
Overlooking the importance of timing. Send your tests at roughly the same time of day and day of the week as you plan on sending the campaign once you roll it out.
Some marketers test subject lines once, see small lifts, send campaigns with a similar subject line at a later date “and then experience results lower than pretesting,” says Adam Q. Holden-Bache, CEO/managing director of Mass Transmit. “What they ended up realizing was that the time of day their campaigns were delivered influenced the results more than the subject line.”
Failing to repeat tests. Run similar tests several times so that you can “trend the findings,” as Stephanie Miller, vice president of market development at Return Path, puts it. “If you run a test three times and have consistent results, then you have a defensible position.”
But don’t assume that the winner of a series of tests done six months ago will remain the winner. Fashions and trends change quickly in e-mail. But for too many companies, Miller says, “something that was tested six months or six years ago sometimes becomes part of the lore of the organization, and they never retest it.”
Assuming that general best practices are the best practices for your business. Conventional wisdom states that subject lines written in all caps are less effective than those that use initial caps or sentence-like capitalization.
But at least one well-known online merchant continues to use all caps in its subject lines, because tests among its customers show that they pull better. “Don’t assume the results that someone else is experiencing are going to be identical to yours,” says Marco Marini, president/CEO of ClickMail Marketing.
Using only one or two metrics to determine a winner. Subject line A may generate a higher open rate than subject line B. But if subject line B generates higher sales per e-mail sent, then subject line A may not be the ultimate winner.
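A back-of-the-envelope scorecard makes the point. In the hypothetical sketch below, version A wins on opens while version B wins on the metric that maps to the actual goal; all the figures are invented for illustration.

```python
def variant_scorecard(sent, opens, orders, revenue):
    """Score a variant on several metrics, not just opens."""
    return {
        "open rate": opens / sent,
        "conversion rate": orders / sent,
        "revenue per e-mail sent": revenue / sent,
    }

a = variant_scorecard(sent=10_000, opens=2_200, orders=80, revenue=6_400.0)
b = variant_scorecard(sent=10_000, opens=1_900, orders=110, revenue=9_350.0)
# A opens better (22% vs. 19%), but B earns $0.94 per e-mail to A's $0.64.
```

— SC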