Database models tend to be remarkably resistant to nondramatic changes in creative and pricing, says Jim Wheaton, a principal in Chapel Hill, NC-based Wheaton Consulting Group. Therefore, as long as the fundamentals of your business remain reasonably stable and the structure of the source data does not change, models are likely to retain their potency for years.
Should changes occur externally, however (shifts in the underlying structure of the source data, for instance), or within your business (if you’ve changed your merchandise mix, say), you need to alter the model. “Models extrapolate from the past to the future, based on an assumption of environmental constancy,” Wheaton explains. “When there is a disruption in constancy, extrapolations become problematic.”
Fortunately, he says, there is a way to determine the likelihood that model performance will deteriorate: “Every time a model is scored in a production environment, profiles should be run on each segment. These profiles should include averages and, optionally, distributions for every one of the model’s predictor variables. They should also include whatever RFM or demographic elements are helpful for painting a picture of the best customers vs. the worst, as well as those in between.”
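The profiling Wheaton describes can be sketched in code. This is a minimal, hypothetical illustration, not his firm's actual process: it assumes each scored record is a dict of predictor values tagged with a model segment label, and it averages each predictor within each segment.

```python
# Hypothetical sketch of per-segment profiling at scoring time.
# Assumes each scored record carries a "segment" label plus predictor values.
from collections import defaultdict
from statistics import mean

def segment_profiles(records, predictors, segment_key="segment"):
    """Average each predictor variable within each scoring segment."""
    by_segment = defaultdict(list)
    for rec in records:
        by_segment[rec[segment_key]].append(rec)
    return {
        seg: {p: mean(r[p] for r in recs) for p in predictors}
        for seg, recs in by_segment.items()
    }

# Illustrative data: RFM-style predictors for best vs. worst segments
records = [
    {"segment": "best", "recency_days": 30, "orders_12m": 4},
    {"segment": "best", "recency_days": 50, "orders_12m": 6},
    {"segment": "worst", "recency_days": 300, "orders_12m": 1},
    {"segment": "worst", "recency_days": 400, "orders_12m": 1},
]
profiles = segment_profiles(records, ["recency_days", "orders_12m"])
```

In practice the profile would also carry distributions and whatever RFM or demographic elements help paint the picture of best vs. worst customers, as Wheaton notes.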
These profiles should not diverge significantly from those run off previous successful mailings, nor from profiles run off the original data set used to validate the model. “The extent to which divergence has occurred is the extent to which model deterioration is likely to be encountered,” Wheaton says. “Sudden, dramatic divergence generally is the result of a change in the structure of the source data. Gradual divergence often is symptomatic of a change in the dynamics of the business.”
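The divergence check itself can be as simple as comparing each predictor's current segment average against the one from the original validation data set. The sketch below is an assumption-laden illustration: the 20% relative-change threshold is invented for the example, and real programs often formalize this with measures such as the population stability index.

```python
# Hypothetical divergence check between a baseline profile (e.g., from the
# model's original validation data set) and the current scoring run.
# The 0.20 threshold is an illustrative assumption, not an industry standard.
def diverging_predictors(baseline, current, threshold=0.20):
    """Return predictors whose average shifted more than `threshold`
    relative to the baseline, suggesting likely model deterioration."""
    flagged = {}
    for predictor, base_avg in baseline.items():
        cur_avg = current.get(predictor)
        if cur_avg is None or base_avg == 0:
            continue  # can't compute a relative shift
        shift = abs(cur_avg - base_avg) / abs(base_avg)
        if shift > threshold:
            flagged[predictor] = round(shift, 2)
    return flagged

baseline = {"recency_days": 40.0, "orders_12m": 5.0}
current = {"recency_days": 75.0, "orders_12m": 4.8}
flags = diverging_predictors(baseline, current)
```

A sudden, large shift across many predictors would point to a change in the structure of the source data; a gradual drift on a few would suggest changing business dynamics, per Wheaton's distinction.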