In order to prospect effectively, you typically need to decide not only who to mail, but also who not to mail. By bringing in only your top prospecting sources on the front end and then applying several suppression techniques to individual names, you can greatly increase the performance of your prospecting. The end result is that your prospecting metrics will naturally improve while you reduce your overall costs by suppressing the non-performing names: a win-win situation in my book.
This idea of finding names to omit from a mailing is something that we at Lenser rarely see when visiting catalog companies. There are many techniques that can be employed to isolate names that should be suppressed, including creating multi permutations within the merge, optimizing rental singles through a co-op database, and identifying low-yield addresses through advanced address hygiene services.
We recently started working with a large multichannel company that needed to employ this type of processing to help increase the performance of its prospecting. The company had seen a steady decline in response from its prospecting sources over the past five years and needed to figure out how to stop the bleeding. The answer, we discovered, was not who we should select to mail, but rather who we should select to not mail. We identified several techniques, beyond those already mentioned, that would help in finding those names to suppress.
Note that in the previous statement we say "finding those names" and not "those lists or those sources." The traditional strategy for a catalog company is to "rest" a list when performance starts to drop, yet this is a mistake. When prospecting with a catalog you are marketing to a person, not to a list. When you rest a list, you are throwing the baby out with the bath water, because you are suppressing the good names along with the bad. The key to successful prospecting is to find the good names to mail and the bad names to throw away. For this client, we identified a couple of immediate suppression opportunities.
The first discovery was that a significant portion of the prospecting was being mailed into zip codes that had never yielded an order in the history of the company. Zip models are a tried-and-true method for identifying higher-yielding names within a prospecting source by targeting "like families" within a community. In this case, we reversed that strategy and used the zip model to suppress names within unproductive zip codes. In reviewing the 2006 mail files, we determined that almost 20% of the prospecting catalogs mailed went to zip codes that did not yield a single order and had never yielded one in the past. This pointed to a significant opportunity to suppress certain zip codes in our future prospecting efforts.
We had a basic zip model built by our client's internal IT/analytics team, which we utilized when ordering select outside lists. Since a zip file can dramatically decrease the available universe of a given source, it was imperative that we apply the zip file suppression only to those sources that had a large enough universe to begin with. We were able to apply the model to approximately 50% of the outside list names going into the merge. We did not apply the same zip file to our co-op database sources, assuming that their models would automatically use geography as one of the top variables. When we analyzed the mail files after implementing the zip file, we determined that only 0.08% of our prospecting was now going to non-productive zip codes. With a simple zip file applied to select outside list sources, we were able to drop our unproductive prospect mailings to under 1%. This low figure also supports our assumption that the databases were already heavily weighting this variable in their models.
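To make the mechanics concrete, here is a rough sketch of the zip suppression logic in Python. This is illustrative only, not the client's actual system; the field names and the "at least one historical order" rule are assumptions for the example.

```python
# Hypothetical sketch: suppress prospect names in zip codes that have
# never produced an order. Field names and sample data are illustrative.

def productive_zips(order_history):
    """Return the set of zip codes with at least one historical order."""
    return {order["zip"] for order in order_history}

def apply_zip_suppression(prospects, order_history):
    """Keep only prospects whose zip code has yielded an order before."""
    keep = productive_zips(order_history)
    return [p for p in prospects if p["zip"] in keep]

orders = [{"zip": "94901"}, {"zip": "10001"}]
prospects = [
    {"name": "A", "zip": "94901"},   # productive zip -> mail
    {"name": "B", "zip": "73301"},   # never yielded an order -> suppress
]
mailable = apply_zip_suppression(prospects, orders)
```

In practice this filter would be applied only to sources with a large enough universe, as described above, since it can dramatically shrink the available names.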
Due to the unique product offering of this client, it was imperative, as it is for many mailers, to mail to homeowners and to suppress renters. Some of our prospecting sources automatically lend themselves to serving up homeowners because of their underlying data. However, the large majority of the prospecting sources at our disposal do not, so we needed a strategy to identify and suppress the renters.
The quickest and easiest strategy at our disposal for identifying and suppressing renters is to use USPS data to determine what type of building an address is likely to be. The majority of renters reside in apartment buildings, which are identified as Multi Family Dwelling Units (MFDUs). There will be instances where an MFDU is not an apartment but rather a condominium or town home, which is harder to isolate. By suppressing the MFDU addresses, we are being a bit hypocritical ourselves, throwing out the baby with the bath water by suppressing the good names (condos and town homes) along with the bad names (apartments). As seen in the chart below, however, the net results justify this practice and give us something to work toward in the future: identifying those pockets of gold within the MFDUs we are suppressing. Up to 20% of the prospecting circulation is identified as MFDU, and those names perform anywhere from 20% to 50% below average. By suppressing these low-performing names, our net performance automatically increases by a significant margin.
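A minimal sketch of the renter suppression, assuming each address record already carries a dwelling-type flag from an earlier address-hygiene pass; the `dwelling_type` field and the `"MFDU"`/`"SFDU"` codes are stand-ins for the example, not an actual USPS file layout:

```python
# Illustrative only: filter out addresses flagged as Multi Family
# Dwelling Units (MFDUs), accepting that some good names (condos,
# town homes) are lost along with the likely renters.

def suppress_mfdu(records):
    """Split records into (mailed, suppressed) by dwelling type."""
    mailed, suppressed = [], []
    for rec in records:
        if rec.get("dwelling_type") == "MFDU":
            suppressed.append(rec)   # likely renter -> suppress
        else:
            mailed.append(rec)       # likely homeowner -> mail
    return mailed, suppressed

records = [
    {"name": "A", "dwelling_type": "SFDU"},  # single family dwelling
    {"name": "B", "dwelling_type": "MFDU"},  # apartment/condo building
]
mailed, suppressed = suppress_mfdu(records)
```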
Once we achieved our goal of suppressing the known bad zip codes and the undesirable renters, we realized there was still one more step we could take to help weed out the less desirable names. The answer was to run an optimization on the potential prospect names, but not in the typical fashion where only the rental singles are optimized. Again, we run into the same situation in that there are good names and there are bad names in every source, and prospect multis are not immune to this phenomenon. Therefore in the optimization we ran, everyone (multi or single) was fair game. The caveat to this strategy is that the optimization was not your run-of-the-mill RFM optimization, but rather a custom-built model by an outside vendor. This model takes into account many variables above and beyond typical RFM and also includes house file data, which, taken all together, supplies us with a robust optimization tool to identify names that should not be mailed. This model is run post-merge, which enables us to look at it on an individual name basis, rather than at the list level. Again, we are not marketing to a list; we are marketing to a person.
The optimization model was built to segment our merge output into deciles so we could easily shave off the bottom 10% of our prospecting file. Some may argue that this tactic is wasteful and not cost effective, since you have already paid for that name and all of its processing. It's true that you are throwing away some money, especially if the name was actually paid for rather than coming from an exchange. Yet it is much more cost effective to spend $0.20 on a name that will be suppressed, since it won't ever generate an order, than to spend $0.65 mailing a catalog and achieve the same effect: no order. So by suppressing that name to begin with, you are saving yourself $0.45 at the end of the day.
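The decile shave and the cost arithmetic above can be sketched together. The $0.20 name cost and $0.65 mail cost come from the figures above, while the score field is a stand-in for the vendor model's output, which we do not have access to here:

```python
# Illustrative sketch: rank scored names, drop the bottom decile,
# and compute the per-name savings from suppressing rather than mailing.

def shave_bottom_decile(scored_names):
    """scored_names: list of (name, score); higher score = better prospect.
    Returns (keep, drop) where drop is the bottom 10% by score."""
    ranked = sorted(scored_names, key=lambda t: t[1], reverse=True)
    cutoff = len(ranked) * 9 // 10          # keep the top 90%
    return ranked[:cutoff], ranked[cutoff:]

NAME_COST = 0.20   # cost of the name and all of its processing
MAIL_COST = 0.65   # cost of printing and mailing the catalog

# Suppressing a no-order name saves the mail cost minus the sunk name cost.
savings_per_suppressed_name = MAIL_COST - NAME_COST
```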
This all sounds fine and dandy on paper, but the proof is in the pudding: the model has to prove itself in the real world in order to justify the process. Fortunately it did just that, as you can see in the chart below. Each of the lower deciles was re-keyed and mailed under its own source to track the validity of the suppression model. The model performed as expected and identified the 10% of our prospecting file that would yield minimal response.
Travis Seaton is director of circulation, specialty groups, for San Rafael, CA-based consultancy Lenser.