Tips for E-mail Tests

Testing e-mail prospecting lists is in many ways the same as testing postal lists, says Don Buck, president of Milwaukee-based consultancy Buck Marketing. But e-mail testing has its own distinct challenges as well. Here, Buck offers a few suggestions for making your tests more effective.

* The first thing to do is decide what to test. Just because something is measurable doesn’t mean it’s worth measuring. For instance, perhaps you could segment your e-mail prospect list by zip code. But if each segment contains only a few names, doing so wouldn’t yield any statistically significant results.

* When setting up an e-mail split test, just as with a postal mail test, you should have some idea of the responses you expect. Click-through rates can then be measured for each variation. The opt-in e-mail copy for both segments of the test could be identical, for instance, with each segment getting a different subject line. And of course, including a link in each version is essential for comparing them.

* If you don’t want to put all your faith in the service bureau doing the opt-in e-mail prospect promotion, you could design a landing page that is identical in each version of the prospect promotion except for the name (e.g., “pageA.htm” and “pageB.htm”). This way you can use your own server logs to measure results. Don’t be surprised if your results differ from the service bureau’s reports.
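As a rough illustration of that approach, the sketch below counts requests for each version’s landing page in Apache-style access-log lines. The sample log lines and page names are hypothetical; in practice you would read your actual log file.

```python
import re
from collections import Counter

# Hypothetical sample of access-log lines; a real run would read these
# from the server's log file. The page names match the two versions
# of the promotion ("pageA.htm" and "pageB.htm").
LOG_LINES = [
    '10.0.0.1 - - [12/Mar/2001:10:01:02 -0500] "GET /pageA.htm HTTP/1.0" 200 512',
    '10.0.0.2 - - [12/Mar/2001:10:01:05 -0500] "GET /pageB.htm HTTP/1.0" 200 512',
    '10.0.0.3 - - [12/Mar/2001:10:02:11 -0500] "GET /pageA.htm HTTP/1.0" 200 512',
    '10.0.0.4 - - [12/Mar/2001:10:03:40 -0500] "GET /index.htm HTTP/1.0" 200 512',
]

def count_landing_hits(lines, pages=("pageA.htm", "pageB.htm")):
    """Count GET requests for each split's landing page."""
    pattern = re.compile(
        r'"GET /(%s) ' % "|".join(re.escape(p) for p in pages)
    )
    hits = Counter()
    for line in lines:
        match = pattern.search(line)
        if match:
            hits[match.group(1)] += 1
    return hits

hits = count_landing_hits(LOG_LINES)
print(hits["pageA.htm"], hits["pageB.htm"])  # prints: 2 1
```

A count of raw page requests is only a starting point; you may also want to de-duplicate by visitor address so repeat views don’t inflate one version’s numbers.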

* Most list managers will include one split for free, but check with your broker to be on the safe side. The cost after any free splits is generally $100 per split; depending on the number of lists, your total prospecting opt-in e-mail circulation, and the number of splits, this could grow to a large percentage of the total cost of the mailing. So make sure each split is worthwhile.

* Use the reports from the service bureau to find out how many good addresses were sent to. You will need this figure whether you use the service bureau’s click-through data or your own logs’. The number of good addresses enables you to calculate whether your results are statistically significant.
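One common way to make that significance calculation (the article does not specify a method) is a two-proportion z-test comparing the click-through rates of the two splits, using the good-address counts as the denominators. The numbers below are purely illustrative.

```python
import math

def z_test_two_proportions(clicks_a, sent_a, clicks_b, sent_b):
    """Two-proportion z-test on click-through rates.

    The 'sent' figures are the good-address counts reported by the
    service bureau for each split.
    """
    rate_a = clicks_a / sent_a
    rate_b = clicks_b / sent_b
    # Pooled rate under the null hypothesis that both splits perform equally.
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return (rate_a - rate_b) / std_err

# Illustrative figures, not from the article: split A drew 120 clicks
# from 5,000 good addresses, split B drew 90 clicks from 5,000.
z = z_test_two_proportions(120, 5000, 90, 5000)
# |z| > 1.96 means the difference is significant at the 95% level.
print(abs(z) > 1.96)  # prints: True
```

If the splits are small, as in the ZIP-code caution above, the standard error term grows and even a sizable rate difference may fail this test.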

* Keep in mind that the timing of a rollout can skew the results. If the total universe is quite large and the control has been successful in the past, you may choose to retest or to back-test (use the control in a test quantity). When rolling out, be aware of other factors that could affect results, such as a national event, a postal mailing to the same universe at about the same time, and any applicable seasonality.