COMMON MISTAKES IN LIST SELECTION

The art of list selection requires an exhaustive knowledge of your industry, your audience, and your program strategy, as well as fairly sophisticated analytical skills. It also requires teamwork: your catalog marketing and analytics experts must work together to develop the list strategy, the testing strategy, and the list analysis. Then, once you have developed this comprehensive strategy, you use a list broker’s expertise to execute the plan.

But many catalogers – even seasoned marketers – tend to make the same mistakes when selecting mailing lists for testing and rollouts. While there’s no substitute for years of hands-on experience with the specific marketing vehicles and programs you’re testing, understanding the potential mistakes in list buying can help you avoid them. Below, some of the most common errors in buying lists.

– Failure to establish both response and learning goals

It is critical to understand upfront whether the primary purpose of your mailing is short-term response or long-term learning – you really can’t do both well. Let’s say you need to generate the maximum return in short-term sales. You would limit the list selection to key transactional, demographic, lifestyle, or geographic variables that you think will result in the highest response. Many catalogers do this because they want the response to justify the cost of the test. The caveat here, though, is that short-sighted tests of specific selects won’t help you determine whether broader selects will work for you.

On the other hand, if your goal is to maximize long-term results, you would make much broader list selections and use variables that support subsequent analysis. For instance, a cataloger/retailer might use a geographic select of prospects within a 10-mile radius of a store location. In a broader test, you might include prospects from within a 20-mile range, and you might find that you can target customers from a 15-mile range and still do well. Once the results are in, let the data tell you (through response models) which selections are the most effective. You can then roll out with a much higher expected response.
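To make the geographic example concrete, here is a minimal sketch of a radius-based select in Python. It assumes each prospect record carries a latitude/longitude pair; the field names, coordinates, and 20-mile cutoff are illustrative assumptions, not any particular vendor’s file layout.

```python
# Minimal sketch of a radius-based geographic select.
# Field names ("lat", "lon") and coordinates are hypothetical.
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in miles between two points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))  # Earth's radius is roughly 3,956 miles

def select_within(prospects, store_lat, store_lon, radius_miles):
    """Keep only prospects whose home location falls inside the radius."""
    return [p for p in prospects
            if miles_between(p["lat"], p["lon"], store_lat, store_lon) <= radius_miles]

# Test broad (20 miles), then let response models show whether a tighter
# 10- or 15-mile select is where the response actually comes from.
prospects = [
    {"id": 1, "lat": 41.88, "lon": -87.63},
    {"id": 2, "lat": 42.05, "lon": -87.68},
]
broad_test = select_within(prospects, store_lat=41.90, store_lon=-87.65, radius_miles=20)
```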

– Not understanding a list’s future potential

Believe it or not, many list selections don’t fully take into account the future potential of a list in terms of rollout universe or frequency of update. When selecting a list for test, you first need to understand how many names are available to you when you roll out, and determine if that number is adequate to justify testing. I often hear, “Well, it’s a small but valuable universe.” This isn’t enough of an evaluation; you really need to run full economics to determine the cost to acquire, test, clean, dedupe, and customize communications for the list and then compare that to expected results.
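Running the full economics need not be elaborate. The back-of-the-envelope sketch below shows the shape of the calculation; every number in it is an illustrative assumption supplied for the example, not a benchmark.

```python
# Back-of-the-envelope rollout economics for a candidate list.
# Every figure here is an illustrative assumption.
rollout_universe  = 40_000    # deduped names actually available at rollout
rental_cpm        = 120.00    # rental cost per thousand names
hygiene_cpm       = 15.00     # cleaning, dedupe, merge-purge per thousand
in_mail_cpm       = 650.00    # printing, customization, postage per thousand
expected_response = 0.012     # 1.2% response assumed from the test read
avg_order_value   = 85.00
gross_margin      = 0.45      # margin available to cover marketing cost

cost = rollout_universe / 1000 * (rental_cpm + hygiene_cpm + in_mail_cpm)
orders = rollout_universe * expected_response
contribution = orders * avg_order_value * gross_margin

print(f"Total rollout cost:   ${cost:,.0f}")
print(f"Expected orders:      {orders:,.0f}")
print(f"Margin contribution:  ${contribution:,.0f}")
print(f"Short-term net:       ${contribution - cost:,.0f}")
```

Under these assumptions, the “small but valuable universe” loses money on a short-term basis – exactly the kind of answer you want before committing to a test.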

Also, you have to consider how frequently the list is updated. Some marketers test lists and forget to check how often new names are added to the file. They see a large universe and assume it is updated regularly, not realizing that after they have used those names once, new names are added only infrequently.

– No knowledge of data sources

Many lists draw upon the same data sources, though this is not immediately clear when you are renting them, and list owners certainly do not promote the fact. I’ve seen tests of two lists with virtually the same names on them; you don’t notice this in testing, because there’s little chance of overlap at small quantities, but in rollout you begin to see much greater duplication between the lists.

Always request specific documentation on how the list is compiled. For example, if it is via survey, find out if it is a proprietary or syndicated survey, and if syndicated, which survey was used. For any specialty lists, find out if the owner sells the list to other media companies or brokers to be included on lifestyle lists.
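Once you have files (or match counts from your merge-purge vendor) in hand, it is also worth simply measuring the overlap between two candidate lists. The toy sketch below assumes each record has already been reduced to a match key – a hypothetical hashed name-and-address, for example.

```python
# Toy overlap check between two rented lists. Assumes records have been
# reduced to match keys (e.g., hashed name + address); keys are made up.
def overlap_rate(list_a, list_b):
    """Share of the smaller list that also appears on the other list."""
    a, b = set(list_a), set(list_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

list_a = {"key001", "key002", "key003", "key004"}
list_b = {"key003", "key004", "key005"}
print(f"Overlap: {overlap_rate(list_a, list_b):.0%}")  # 67% of the smaller list
```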

– Not sizing test lists correctly

This is probably the most common error in selecting lists. You cannot size a test list until you have completed a full analytic plan; in other words, you must know what data you plan to analyze from the test. Steer clear of anyone who tells you to size the test on a standard number (“I always use 20,000”) or on a percentage of people contacted (“always use 20% of the contact base”).

To get a statistically valid read, you should always size a list based on expected response. To do this, you should know exactly what response you are looking for and how you want to cut the data for analysis. For example, if you are only looking to compare prospects to catalog requesters, that is one number and will require a control group of a certain size. But if you want to look at requesters who converted to buyers, that will require a bigger test quantity, and if you want to look at converted buyers who make more than one purchase, that means a bigger test still.

You may also want to evaluate subsets within prospects: How important are geography, age, or an interest in gardening? For each of these questions, you need a progressively larger test group. Again, you should use a statistician to determine the numbers you need, but in general you need a minimum of 35-50 respondents for each cell.
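The arithmetic behind that rule of thumb is simple once you fix the deepest cut you plan to read: required names per cell equal the target respondent count divided by the expected rate at that depth. The sketch below uses illustrative response, conversion, and repeat-purchase rates – confirm actual target counts with your statistician.

```python
# Names per cell = target respondents / expected rate at the deepest cut
# you plan to read. All rates below are illustrative assumptions.
TARGET_RESPONDENTS = 50       # upper end of the 35-50 per-cell rule of thumb

response_rate   = 0.015       # 1.5% of prospects request a catalog
conversion_rate = 0.25        # 25% of requesters convert to buyers
repeat_rate     = 0.30        # 30% of buyers make a second purchase

def names_needed(*rates):
    """Names to mail so the deepest cut still yields the target count."""
    expected = 1.0
    for rate in rates:
        expected *= rate
    return round(TARGET_RESPONDENTS / expected)

print(names_needed(response_rate))                                # requesters:    ~3,333 names
print(names_needed(response_rate, conversion_rate))               # converters:    ~13,333 names
print(names_needed(response_rate, conversion_rate, repeat_rate))  # repeat buyers: ~44,444 names
```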

Of course, you can have the opposite problem: oversizing a group to be safe. There is a huge cost to this as well, since oversizing a test list will limit the learning or results you could have obtained with the same dollars.

– Not understanding variables

It is important to understand not only what data variables are available for list selection and analysis, but also what percentage of customers actually have these data elements overlaid on the file. Beyond income and age, most data elements fall in the 20%-50% range for overlays, and with lifestyle data such as hobbies, mailers are lucky to get 20% of the file overlaid with such information.

So when comparing lists, focus not on who has the most data elements but on who has the highest percentage overlaid for the elements that are important to you. This will allow you to produce the strongest response models for your business with reasonably sized test groups. A statistician can determine the test quantities needed for each specific list to produce a viable model based on the variables important to you.
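Framed as a calculation, the comparison reduces to coverage of the variables you actually plan to model on. The sketch below is purely illustrative – the lists, element counts, and overlay percentages are hypothetical.

```python
# Compare lists by overlay coverage of the variables you plan to model on,
# not by how many elements they advertise. All figures are hypothetical.
key_variables = ["income", "age", "gardening_interest"]

lists = {
    "List A (40 elements)": {"income": 0.70, "age": 0.65, "gardening_interest": 0.10},
    "List B (12 elements)": {"income": 0.60, "age": 0.60, "gardening_interest": 0.45},
}

for name, coverage in lists.items():
    # Average coverage across only the variables that matter to your model.
    avg = sum(coverage.get(v, 0.0) for v in key_variables) / len(key_variables)
    print(f"{name}: {avg:.0%} average coverage of key variables")
```

In this made-up example, the smaller file wins on the variables that matter, even though the larger file advertises more than three times as many elements.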

– Underestimating compiled/lifestyle lists

Many marketers go straight to high-priced specialty mailing lists without testing compiled and lifestyle lists. Because data variables have grown richer and more accurate as databases have become more sophisticated, a compiled list used in conjunction with response models often produces results as good as or better than a specialty list, at lower cost.

– Underestimating the long-term costs of lists

Negotiate rollout costs at test! You might get a great test price, test the list, and find that it works well for you, only to discover that the rollout list price is higher – and nonnegotiable. Get rollout costs for at least the next year in writing to properly determine whether you should include a list in a test.

– Using history the same way for online and offline lists

History, testing, and learning may be the best factors to base decisions on for conventional mailing lists, but not for online names. The online audience is changing daily. More and more average consumers are going online, which means new additions to previously tested online lists may not perform the same way the initial test list did.

– Ignoring operational needs

Be sure to factor operational limitations such as call center staffing and fulfillment ability into your list buying decisions. (See case study, page 73.) If these systems cannot be adjusted in testing and rollout to accommodate the types or numbers of customers expected from a particular list, you may want to adjust your strategy.