As marketers, we may be tempted to think that technology can solve all of our marketing problems — practically to the point of doing our thinking for us. And considering how sophisticated today’s predictive and descriptive models are, it’s easy to view them as a panacea that automatically guarantees the results we are looking for.

But a model alone is not foolproof. Indeed, if you don’t fully understand how and in which cases these models work, you may use them incorrectly and get disappointing results.

The fact is, although the evolution of these sophisticated models has enabled marketers to make strategic — and profitable — decisions, mailing entire lists based on blind faith in the model alone can be counterproductive. To model successfully, you must also use the fine art of human analysis. What’s more, models are not always necessary for successful marketing (and keep in mind, this last statement is coming from someone whose livelihood depends on developing successful models!).

That said, let’s move on to the facts of modeling — or more specifically, what to bear in mind to achieve modeling success.
- You may not need a model.
Before plunging ahead, consider some basic criteria to justify your use of a model. Contrary to popular belief, simply having large amounts of data to interpret is not a sufficient reason to build and use a model.
Many marketers jump to the conclusion that they need a model for at least two reasons. One, modeling is fashionable, and no “happening” marketer wants to be without one. Two, marketers often think of a model as the fastest and easiest way to weight and sort through data — to look at detailed customer or prospect information such as the time since last purchase, how much the customer spent, and how many times the customer shopped in the past twelve months.
Before giving in to either of those temptations, consider your goals and how, specifically, you plan to use a model to achieve them. You should determine that the use of a model is applicable and cost-effective. Sometimes the potential revenue increase to be gained by using a model doesn’t justify the cost.
For example, most models cost in the neighborhood of $30,000-$60,000, so it may not be sensible to use a model when the campaign you are planning is supposed to make only $20,000. Indeed, if you are not going to make more than $100,000 on the campaign, you should probably think twice about using a model. By choosing another method, you may ultimately decrease your revenue results by, let’s say, 20%, but end up being more profitable.
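To make that trade-off concrete, here is a minimal back-of-the-envelope sketch. All of the figures are hypothetical and simply illustrate why a cheaper, less targeted method can still be the more profitable choice:

```python
# Hypothetical comparison: is a model worth its cost for this campaign?
model_cost = 40_000              # assumed one-time cost of building the model
revenue_with_model = 100_000     # assumed revenue from a model-targeted mailing
revenue_without_model = 80_000   # roughly 20% less revenue using judgmental criteria

profit_with_model = revenue_with_model - model_cost   # 60,000
profit_without_model = revenue_without_model          # 80,000

print(profit_with_model, profit_without_model)
```

In this example, sacrificing 20% of the revenue still leaves you $20,000 ahead, because the model never earns back its cost.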
If you don’t stand to benefit from modeling, it may be more cost-effective to sacrifice targeted results and instead use judgmental criteria — a mix of industry knowledge and analysis for determining and matching prospects and offers. Say you are doing a promotion on Italian gloves. You could use judgmental criteria to send the offer to customers who live in a certain region, have bought Italian handbags from a general catalog, and have spent more than $100 from a catalog in the past six months. And although a model could better weigh the relative importance of a variety of factors, the judgmental criteria alone can be an effective means to select targets.
What’s more, if you plan to use a model just once, it may not be worth the expense and effort. Let’s say a music video cataloger uses the same models over and over again to promote mini-catalogs. Then it has a one-time-only creative opportunity: to present a mini-catalog promotion based on a tie-in with an Elton John/Billy Joel tour. Since this event is unlikely to recur, a model designed for one-time use would be costly and inefficient. Instead, the cataloger should tap the expertise and knowledge its modelers have acquired in building the other mini-catalog promotions to make an educated selection of the best population to send this catalog to — without using a model.
By the same token, you shouldn’t build a model based on a strictly seasonal promotion, such as a holiday campaign. Since economic factors affect the outcome of every holiday season, the model wouldn’t be applicable for more than one year, and it would therefore be hard to justify the cost.
- Constantly update your model.
Unfortunately, many catalogers wait so long to build a new model that the competitive landscape has changed completely by the time the model is finished, rendering it ineffective. So take a proactive approach and update models frequently to keep up with such changes. (And while it seems like a no-brainer, many a cataloger has switched from mailing one type of promotion to another without updating or building a relevant model.)
You should include a small control group with each mailing to help determine when it’s time to update your model. The minimum number to include in the control group depends on the response rate you are looking for. If you typically get a 10% response rate and want at least 100 responders, you would have to mail at least 1,000. Generally, though, the control group should include a minimum of a few hundred names with each mailing.
Let’s say you rank the consumers most likely to respond with three-digit scores, establishing 400 as your break-even point. You could then test-mail consumers with scores above and below this break-even point. If you find that those consumers the model predicted to have high responses aren’t responding and vice versa, then the model isn’t working.
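For those who want to see both checks side by side, here is a minimal sketch. The response rate, break-even score, and control-group results are all hypothetical:

```python
import math

# How many control-group pieces to mail for a given number of responders?
def minimum_mail_quantity(target_responders, expected_response_rate):
    return math.ceil(target_responders / expected_response_rate)

print(minimum_mail_quantity(100, 0.10))  # 1000 pieces for ~100 responders at 10%

# Compare actual response above and below an assumed break-even score of 400.
# Each pair is (model score, 1 if the consumer responded, else 0).
control_results = [(520, 1), (610, 0), (450, 1), (410, 1), (380, 0), (350, 1), (300, 0)]

def response_rate(results, low, high):
    group = [responded for score, responded in results if low <= score < high]
    return sum(group) / len(group) if group else 0.0

above = response_rate(control_results, 400, 1000)
below = response_rate(control_results, 0, 400)

# If the below-cutoff group keeps matching or beating the above-cutoff group,
# the model is no longer working and should be reweighted or rebuilt.
print(above, below)
```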
Depending on your marketplace, data can change rapidly; an approach that is right today might not apply tomorrow. New trends can pop up overnight. If your model is based on historic data that will no longer predict future behavior, it’s useless. Major economic upheaval and shifts in the competitive landscape are two key reasons a model becomes obsolete.
Say you’re mailing a promotional coupon offer and getting great response rates. Suddenly your competitor introduces a similar promotion. Your model is no longer relevant, because those consumers the model has identified as high responders may now be swayed by the competition. In this case, you could reweight the coefficients, or relative values of the characteristics. Or you could rebuild the model from scratch to include the characteristics that are now predictive, taking the copycat promo into account.
But while many of these variables — products, prices, competition — are constantly changing, don’t let this dissuade you from modeling altogether. Modeling makes sense, since some of these traits remain fairly stable — at least for a given period of time. With the help of a good statistician, you can figure out which variables are likely to remain predictive and which are likely to change.
For example, bankruptcies will change the economic landscape and hence your model. But by the time you notice bankruptcies as a trend, it may be too late to change the model. So modelers instead look for signs that predict bankruptcies, such as a decline in average income. New approaches to modeling also enable rapid updating of the weights of your characteristics to keep up with your changing world.
- Use experience to customize models.
Rather than pulling modeling software off the shelf or simply using a model comparable to those of your competitors, you could gain a competitive advantage by working as an artist, applying your unique business knowledge to customize the model and enhance the technology.
A successful modeler not only understands the data but also is able to use his or her industry knowledge to interpret it. After all, data can sometimes be misleading. Indeed, software models often pick up a variable that doesn’t make sense or is simply an anomaly. You need a human modeler to screen it out.
Say you have a catalog that specializes in toys for children ages 5-12. You rely on a model to target your mailings, since the catalog is expensive to produce and reaches a niche audience. Traditionally, the model has worked and has accurately shown that Illinois is your biggest market. Then suddenly, Zany Brainy decides to concentrate on building market share in Chicago. The company not only opens new stores but also runs massive ads and offers discount coupons.
If you were simply to look at your mailing results in comparison to your model, without having this industry knowledge, you might falsely conclude that the model is a failure and decide to rebuild it. But what you should do is use your industry experience to weigh the long-term effect Zany Brainy’s marketing blitz will have on your model. You may opt to build a special model for Chicago, or to wait out the marketing frenzy for now and increase mailings outside of Chicago in the meantime. Indeed, once the marketing campaign settles down, you may find that it has no impact at all; without a coupon incentive, your buyers may prefer to shop from you. But without using your judgment, you may have gone ahead and wasted money rebuilding a model.
- Look past the obvious.
In developing a model, many catalogers make the common mistake of focusing on an easily measured business indicator, such as response. This strategy is commonly called “body count” marketing, as it refers to the number of bodies responding to an offer.
Instead, look at response and performance — how much these respondents spend — together. To do so, you and a modeler would want to develop response models (identifying likelihood to respond to an offer) and behavior models (identifying, for example, likelihood to spend a given amount) to be used together.
A marketer who worked only with a model designed to target responders might enjoy some measure of success in terms of revenue generation. But without a more robust model that also measured prospects’ likelihood to spend, the marketer would likely suffer several drawbacks.
First, the marketer would have to incur the additional cost of mailing to a larger number of customers than necessary to achieve the desired results. In addition, the offer itself would be less targeted and therefore less effective with those responding customers who are likely to spend. And in some cases, a less targeted offer might even alienate and deter those best prospects from making future purchases.
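One common way to use the two models together is to rank customers by expected value rather than by response alone. The sketch below is purely illustrative; the probabilities, spend estimates, and mailing cost are hypothetical placeholders for scores that would come from fitted response and behavior models:

```python
# A minimal sketch of using a response model and a behavior (spending) model
# together, ranking customers by expected value rather than response alone.
customers = [
    {"id": "A", "p_respond": 0.12, "expected_spend": 40.0},
    {"id": "B", "p_respond": 0.03, "expected_spend": 400.0},
    {"id": "C", "p_respond": 0.08, "expected_spend": 60.0},
]

mail_cost = 1.50  # assumed cost per catalog mailed

for customer in customers:
    customer["expected_value"] = (
        customer["p_respond"] * customer["expected_spend"] - mail_cost
    )

# Ranking by expected value keeps low-spending "bodies" from crowding out
# a smaller pool of high-value responders.
ranked = sorted(customers, key=lambda c: c["expected_value"], reverse=True)
for customer in ranked:
    print(customer["id"], round(customer["expected_value"], 2))
```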
- Be specific about your objectives.
Suppose that in addition to looking for the responders who have the highest average orders, you want to measure the success of your campaign by profitability as opposed to total sales per catalog. Now we are talking about a different model, one that must also analyze data characteristics that would predict overall profit. It is critical that you communicate with a modeler all the specific factors that affect your profitability — for example, likelihood of cancellations, returned items, and bad payments. You may target revenue generation for individual products, since this is an obvious indicator of success. But it may turn out that the top-selling products are also highly correlated with the highest returns. Once all factors are incorporated in the model, you may find that other products are actually more profitable in the scheme of things.
Regarding amount of purchase, you should be aware that in addition to odds-ratio models — those that predict the likelihood of an outcome such as response — you can take advantage of continuous outcome modeling: This uses models developed to predict relative numerical measures from a population — in other words, to tell you how much a customer is likely to spend in relation to other customers.
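As an illustrative sketch of the two kinds of model just described, an odds-ratio model corresponds to a classification model and a continuous-outcome model to a regression model, and the regression target can be net profit after returns, cancellations, and bad payments. The features, toy data, and use of scikit-learn below are assumptions made purely for the example:

```python
# Illustrative only: an odds-ratio model as classification, a continuous-outcome
# model as regression. All data here is hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical features: months since last purchase, orders in last 12 months,
# average order value in dollars.
X = np.array([
    [2, 5, 120],
    [14, 1, 35],
    [6, 3, 80],
    [24, 1, 20],
    [1, 8, 200],
])
responded = np.array([1, 0, 1, 0, 1])                  # did the customer respond?
net_profit = np.array([55.0, -4.0, 20.0, -6.0, 90.0])  # profit after returns, cancellations, bad payments

response_model = LogisticRegression(max_iter=1000).fit(X, responded)  # likelihood of an outcome
profit_model = LinearRegression().fit(X, net_profit)                  # relative numerical measure

new_customer = np.array([[3, 4, 150]])
print(response_model.predict_proba(new_customer)[0, 1])  # probability of responding
print(profit_model.predict(new_customer)[0])             # predicted net profit in dollars
```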
- Evaluate the model’s performance.
You should formally review your models every six months. In addition, as mentioned earlier, you should assess your models’ performance on an ongoing basis, with each mailing.
Keep in mind that some time can elapse between the time a model is designed and launched and the time it is evaluated. Given such potential factors as personnel changes and shifting corporate objectives, the original goal and the actual performance evaluation may get out of sync. Evaluating a model’s performance against inappropriate objectives can lead to wasted effort on targeting processes that never improve, or to the additional cost of unnecessary model rebuilds.
Often models are wrongly blamed for fallout from an ineffective or poorly executed marketing strategy. When a program isn’t working, it may be that the model isn’t accurately predicting the defined behavior for the designated customers, but it could also be that the marketing program wrongly targeted these customers to begin with or that the wrong strategy/offer was employed.
Ann Abrahamson is director, solutions management and marketing, for San Rafael, CA-based Fair, Isaac’s Global Retail Market.