Beyond the basic model

May 01, 2006 9:30 PM

Modeling has come a long way beyond the good old recency/frequency/monetary value (RFM). Thanks in no small part to the rise of cooperative databases, the types of models and the variables available have become more numerous and sophisticated.

But like the practice of medicine, database modeling is as much an art as a science. The practitioners continue to fine-tune the data and the interactions among them to create more-effective means of boosting response.

Above all, that requires never losing sight of the purpose of the model. “The whole point of modeling is to mirror the client’s customer. That aspect helps create a manageable universe by pulling the best of the best for prospecting,” says Lori Collins, corporate vice president of Hackensack, NJ-based list firm Focus USA. “Without the statistics behind it, there’s a question of picking the right people. The higher the score, the higher the chance they will respond or be converted.”

For his part, Chris Montana, senior vice president with Hackensack, NJ-based list marketing services firm Mokrynskidirect, says his company does not see as many clients requesting modeling to find niche customers. “Models aren’t being used to cherry-pick from a smaller universe,” Montana says, “but to expand beyond what the direct marketer already has in its house files.”

Catalogers are using models more to predict and sharpen the performance of marginal lists and second-tier selects, he says. Direct marketers know what works with the first-tier customers, but while growing their circulations, they are willing to venture into the gray area of their models for incremental growth.

Zip it up

Focus USA’s Collins is a proponent of going beyond the zip code level to zip+4 credit data. The information is a mirror image of sorts of the 300-plus credit variables associated with every consumer that are housed at the big-three credit bureaus, Equifax, Experian, and TransUnion, but is not the actual credit data. Rather, the information is aggregated at the zip+4 level.

Because the data points are aggregated, the file is not subject to the Fair Credit Reporting Act (FCRA), the federal law enforced by the Federal Trade Commission. Concerns about the FCRA and potential security breaches, says Focus USA’s Collins, had led many services providers to err on the side of caution and stop offering zip+4 modeling for a few years. But during the past year and a half, zip+4 credit information has slowly made its way back to the market, thanks to the certainty that real credit data are not being used.

Collins considers zip+4 data to be the “perfect surrogate to true credit data.” Because each nine-digit zip+4 code represents no more than 10 households, “the data level is small enough where it’s still viable but doesn’t look at a person at a personal level and can still help a marketer determine the best people for the offer,” she says.

Direct marketers use zip+4 models to eliminate certain streets within a zip code that may not be as socioeconomically desirable as others.

“You can look out your front door and see your neighbors, and you will know that they are most likely in the same socioeconomic state as you are,” Collins says. “But if you’re making the same offer to people in the next town over, or even the next block over, you can muddy the water, and your deal could be irrelevant.”

Making the most of what you’ve got

Jim Coogan, president of Santa Fe, NM-based consultancy Catalog Marketing Economics, says that one major modeling focus now is to find customers in the older segments of buyer files that could be reactivated.

“Let’s say you have 200,000 names in your buyer file that you’ve stopped mailing to because they haven’t made a purchase from your catalog in three years,” says Coogan. “You can optimize that with a co-op list, find out which customers are shopping with the competition, and identify 10%-15% that could be reactivated.” The co-op file would be overlaid to the inactive customer list to help identify those who had made recent purchases in the same product categories, the number of recent purchases, and the dollar value they represent.
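The overlay Coogan describes can be sketched in a few lines of Python. This is an illustrative mock-up, not any co-op's actual process; all field names, records, and thresholds here are hypothetical:

```python
# Illustrative sketch: overlay co-op transaction data onto an inactive
# house file to flag reactivation candidates. Every field name and
# record below is invented for the example.
from datetime import date

def reactivation_candidates(inactive_buyers, coop_activity, categories, since):
    """Return inactive customers with recent co-op purchases in
    matching product categories, with counts and dollar value."""
    candidates = []
    for cust in inactive_buyers:
        activity = coop_activity.get(cust["id"], [])
        recent = [a for a in activity
                  if a["date"] >= since and a["category"] in categories]
        if recent:
            candidates.append({
                "id": cust["id"],
                "recent_orders": len(recent),
                "recent_dollars": sum(a["amount"] for a in recent),
            })
    return candidates

# Three lapsed buyers; the co-op shows recent activity for only one.
inactive = [{"id": 1}, {"id": 2}, {"id": 3}]
coop = {
    1: [{"date": date(2006, 3, 1), "category": "apparel", "amount": 80.0}],
    2: [{"date": date(2004, 1, 5), "category": "apparel", "amount": 40.0}],
}
picks = reactivation_candidates(inactive, coop, {"apparel"}, date(2005, 6, 1))
# Only customer 1 qualifies: a recent apparel purchase worth $80.
```

In practice the match would run on name/address keys at the co-op rather than in-house, but the logic — recency cutoff, category filter, and dollar roll-up — is the same.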

Another hot modeling strategy, according to Coogan, is to take your list of Web-only buyers who have never received a catalog, model those names at the co-ops, see which ones are active mail order purchasers, and determine if you can gain additional purchases and incremental sales by mailing them your catalog — or if you can cut back your mailings without reducing sales.

“It’s a good method to make sure the catalog is driving your e-commerce,” Coogan says. “And if catalogers find these consumers are ordering only from the Web, they don’t have to send as many catalogs. They know they can cut from 10 to two catalogs a year to those customers because they know they are Web-driven.”

Using the co-ops for these sorts of models typically costs $40/M-$70/M. “The economics are pretty compelling,” Coogan says. “If you can find good names, it will be less than half the cost to prospect for them with a list rental. And if you’re suppressing names as a result, it’s even more cost-effective, because the alternative is spending 75 cents to mail the catalog. The cost of picking the right names is so much more efficient than spending money on printing, paper, and postage.”
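Coogan's figures can be sanity-checked with quick arithmetic. In this sketch, only the $40/M-$70/M scoring rate and the 75-cent mail cost come from the article; the file size and suppression rate are invented for illustration:

```python
# Back-of-the-envelope economics of co-op suppression modeling.
# The per-thousand rate and per-piece cost are from the article;
# the 100,000-name file and 20% suppression rate are assumptions.
names = 100_000                 # names scored at the co-op (hypothetical)
model_cost_per_m = 70.0         # high end of the $40/M-$70/M range
mail_cost_per_piece = 0.75      # printing, paper, and postage

scoring_cost = names / 1000 * model_cost_per_m      # $7,000 to score the file

suppressed = int(names * 0.20)                      # assume 20% flagged as unlikely buyers
postage_saved = suppressed * mail_cost_per_piece    # $15,000 in mailings avoided

net_savings = postage_saved - scoring_cost          # $8,000 ahead before any response lift
```

Even at the top of the rate card, suppressing a modest share of a marginal file more than pays for the scoring, which is the point Coogan is making.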

Maximizing new media

In this increasingly multichannel world, additional channels such as TV advertising need to be incorporated into models as well, says Steve Briley, vice president of analytical services for Denver-based Merkle, a database marketing company. To that end, Merkle has created a modeling technique called Media Mix, designed to help marketers understand the impact of direct marketing if other media, such as television and radio, were removed from the mix.

“Once they see how media affects sales, marketers can see what happens if they increase one media by a percent and decrease another,” Briley says. Many multichannel merchants will launch a campaign and, using marketing deciles, measure the cost per sale, then attribute all the revenue to direct mail, he explains, even if infomercials and print ads are driving much of the contact center volume and Website traffic.

Media Mix models measure the complex statistical relationship between overall results for a brand or multiple brands and multiple media and market activities over an extended period of time. “Which model to use depends on how many brands are being modeled, what correlation structure between products and predictive variables and lag data structures exist — that is, how long does a point-in-time event or promotional spend impact current and future sales or responses — and for how long,” Briley says.
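One standard way to represent the lag structures Briley mentions is a geometric-decay "adstock" transform, in which a period's media spend carries over, diminished, into later periods. This is a generic illustration of that idea, not Merkle's actual Media Mix methodology; the spend series and decay rate are invented:

```python
# Generic adstock transform: effect[t] = spend[t] + carryover * effect[t-1].
# Captures how a point-in-time promotion keeps influencing future periods.

def adstock(spend, carryover):
    """Return the effective media pressure per period under geometric decay."""
    effect, prev = [], 0.0
    for s in spend:
        prev = s + carryover * prev
        effect.append(prev)
    return effect

tv_spend = [100.0, 0.0, 0.0, 50.0]          # hypothetical spend by period
pressure = adstock(tv_spend, carryover=0.5)  # half of each period's effect persists
# A single burst of TV spend keeps working in later periods:
# pressure == [100.0, 50.0, 25.0, 62.5]
```

A media-mix model would then regress sales or responses on these transformed series (plus the competitive and environmental factors listed below), with the carryover rate chosen per medium — which is what Briley means by asking how long a promotional spend impacts future sales.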

The data used may include target brand metrics, such as responses, market share, and revenue; predictor information, which reflects brand promotions across multiple media, including television/cable, radio, direct, interactive, events, print, and inserts; competitor data, which includes other merchants, if available, and other factors that vary by industry, such as promotional spend and pricing; company-specific data, such as sales force size and product launches; and environmental data, which are national factors measured by the consumer price index or unemployment rates, along with regional events.

“This is a lot of information, but ignoring overarching environmental factors like declining housing or employment dynamics causes measurements of effects due to promotions or events to be inaccurate and possibly distort what the true optimal media mix solution is,” explains Briley. “Using external data like this is a best practice that is not always used by practitioners in this space.”

Briley says these models indirectly lead to improved sales and responses by providing improved marketing budget allocations for future years. “In an optimized media mix resource allocation scenario, revising future budgeting to, say, decrease television/cable spend by 10% and increase direct marketing by 30% could lead to better sales performance,” Briley says.

A typical timeframe to build a Media Mix model is 6-10 weeks, depending on the industry, the availability of data, and the complexity of the media mix solution. The cost is built into Merkle’s consulting fees.

Lower modeling costs

One thing that had kept some marketers from using models was the set-up cost, says Susan Darling, a vice president with Mokrynskidirect. While response models were ideal for direct mailers who mail to a million or more prospects, the set-up costs, which could run to $20,000, meant they weren’t cost-effective for a merchant mailing to, say, 100,000 names. Montana adds, though, that users can now request a no-cost model to save on front-end costs, but they will end up “paying anywhere from $15/M to $20/M on the back-end costs” of selecting names and running the model.

So far, Darling says, “about a dozen or so of our clients have been testing the new response modeling tactic, and they say it has been working very well.” Their response rates have increased by as much as 25%.

“It’s really just another tool for them to find profitable names,” she adds. “Lists have been shrinking over the past few years, response rates are falling, and it’s tough for catalogers to find names that work. In the past, only a very large mailer could afford response model lists, but this opens a whole new door.”

Quick tip

A relatively easy way to hone your basic cooperative database model, according to Jim Coogan, president of Santa Fe, NM-based consultancy Catalog Marketing Economics, is to narrow down the list of catalogs whose names you want included. Often models grab many names from catalogs whose buyers are not going to be compatible with your prospects, even though those buyers have shown a tendency to be catalog or multichannel buyers.

“Build your own list of catalogs that you should be prospecting to, and ask the co-op to give you just these catalogs,” Coogan says. “Telling the co-ops which catalogs are most like your catalog really helps them build models that focus on the most likely prospects.”