Why you should measure incremental customer performance

Testing is at the heart of direct marketing — multichannel merchants test e-mail subject lines, catalog cover versions, Website landing pages and search terms. They then analyze results, draw conclusions and adjust plans accordingly.

Typically, testing focuses on improving an existing campaign or channel by measuring sales tied directly to a campaign through attribution or source coding.

But how many companies test the performance of marketing efforts at the customer level?

Looking only at results directly linked to a marketing effort can distort the understanding of a program’s overall impact on company performance. Analyzing the incremental impact of marketing efforts at a customer level can provide more accurate insight into their overall impact on key metrics — orders, revenue, margin and profitability.

Incrementality is a relatively simple process to understand, but it can be challenging to execute.

You start by identifying a group of customers and randomly allocating them to two or more groups, depending on the number of strategies being tested. Random allocation is critical to ensure that other marketing programs or customer attributes don’t affect results.
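The allocation step can be sketched in a few lines of Python. This is an illustrative sketch, not the author's actual process; the function name and the assumption that customers are held in a plain list of IDs are mine. Shuffling before splitting keeps group sizes balanced and avoids any ordering bias already present in the customer file.

```python
import random

def allocate(customers, n_groups, seed=42):
    """Randomly assign each customer to one of n_groups test cells.

    A fixed seed makes the split reproducible for auditing; remove it
    for a fresh random draw each time."""
    rng = random.Random(seed)
    shuffled = customers[:]          # copy so the source list is untouched
    rng.shuffle(shuffled)
    groups = [[] for _ in range(n_groups)]
    for i, customer in enumerate(shuffled):
        groups[i % n_groups].append(customer)
    return groups
```

Round-robin assignment after the shuffle guarantees the cells differ in size by at most one customer, which matters for small test panels.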

Each group receives a different contact strategy over a set timeframe. Final measurement should include a pro forma analysis of each customer group that includes margin, marketing costs and profitability.
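A per-group pro forma of the kind described above might look like the following sketch. The inputs and the flat margin rate are hypothetical simplifications; a real analysis would pull actuals by group from the order file.

```python
def group_pro_forma(revenue, margin_rate, marketing_cost):
    """Build a simple pro forma for one test cell: product margin on
    the group's revenue, less the cost of the contact strategy that
    cell received."""
    margin = revenue * margin_rate
    profit = margin - marketing_cost
    return {"revenue": revenue, "margin": margin,
            "marketing_cost": marketing_cost, "profit": profit}
```

Running this for each cell side by side is what turns raw revenue differences into the margin and profitability comparison the article calls for.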

Revenue curves can vary significantly across test cells, so weekly analysis can be misleading. I recommend reviewing results monthly instead of weekly.

Here’s one example of the insight that incrementality gave into overall customer performance. I had inherited an existing communication strategy that called for sending 18 catalogs to the best customers each year. All of these catalogs were profitable, but over time the productivity of these catalogs had been declining.

By measuring the incremental impact of the catalogs, we discovered that only 9% of the revenue from the last catalog was incremental. So out of every $1 in revenue tied to that catalog, 91 cents would have come in without the catalog.

This had a significant impact on the ROI calculation for that catalog, dropping it from +20% to -676%! As a result of this analysis, we repurposed a significant portion of best-customer catalog spend to more marginal, but more incremental, customer segments and marketing efforts.
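The arithmetic behind that kind of revision can be sketched as follows. The figures here are hypothetical, not the article's actual numbers, and the flat margin rate is an assumption; the point is simply that ROI on incremental margin can be sharply negative even when ROI on attributed margin looks healthy.

```python
# Hypothetical inputs for illustration only.
attributed_revenue = 100_000.0   # revenue source-coded to the catalog
incremental_pct = 0.09           # share of that revenue that was truly incremental
margin_rate = 0.40               # product margin as a share of revenue
catalog_cost = 25_000.0          # printing, paper and postage

# ROI on attributed margin: all source-coded revenue credited to the catalog.
attributed_roi = (attributed_revenue * margin_rate - catalog_cost) / catalog_cost

# ROI on incremental margin: only the revenue the catalog actually caused.
incremental_margin = attributed_revenue * incremental_pct * margin_rate
incremental_roi = (incremental_margin - catalog_cost) / catalog_cost
```

With these inputs the attributed view shows a +60% ROI while the incremental view shows roughly -86%, the same directional reversal the article describes.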

Incremental measurement has its strengths and limitations. You have to understand both to determine when and how to use this methodology. Here’s a quick look at some of the pros and cons.

Pros:

  • Clearly identifies the financial impact of a marketing effort. Focusing on customer performance eliminates the risk of campaigns merely shifting revenue from one source to another. This allows for accurate measurement of the value added to the company by the campaign.
  • Does not require managing other contact strategies. If the different contact groups are truly random, the impact of other communications should offset each other.
  • Direct measurement (code capture) is not required. Capturing source codes can be challenging even under the best of circumstances.
  • In certain channels, such as retail, code capture isn’t a viable tracking option. Incremental measurement tracks the customer, not the campaign, so only capturing customer information is needed to measure results.

Cons:

  • Cannot identify an individual customer’s behavior, making modeling difficult. Incrementality measures the differences in performance between two or more groups. Because the differences are what’s being measured, it is not possible to identify the specific customers whose purchases were impacted by the campaign.
  • It’s more difficult to use with acquisition efforts. You must have accurate customer information to ensure that purchases can be matched to the customer file and contact strategy, which is hard when prospects aren’t yet on the file. A comprehensive prospect database can help address this issue.

So how can companies use incrementality to measure the impact of their strategies? Here are a few examples.

  • Contribution of e-mail to overall financial performance, especially in retail environments.
  • Impact of discount coupons on customer behavior and company performance, especially on margin and profitability.
  • Influence of branding or non-transactional marketing campaigns.
  • Measurement of a series of communications on customer performance.
  • How a contact strategy’s impact varies across customer segments.

Incrementality can be a powerful tool, but if results don’t make sense, dig deeper to find out why. In one situation, I was perplexed when control groups were driving more revenue than those mailed additional marketing efforts. These efforts were, in effect, suppressing demand.

It turned out that the customer selection and allocation process was not truly random and had skewed results toward the control group. When the test was run again, performance made more sense. The efforts drove incremental revenue, but not incremental profitability, and were discontinued.
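A simple way to catch that failure before mailing is to compare the groups on a pre-period metric: before any contact, truly random cells should look nearly identical on historical revenue per customer. This sketch is my own illustration of such a sanity check, not the author's procedure; the tolerance threshold is an assumption.

```python
def looks_random(pre_revenue_a, pre_revenue_b, tolerance=0.05):
    """Sanity-check an allocation: compare mean pre-period revenue per
    customer across two cells. A relative gap above `tolerance`
    suggests the split was not truly random."""
    mean_a = sum(pre_revenue_a) / len(pre_revenue_a)
    mean_b = sum(pre_revenue_b) / len(pre_revenue_b)
    gap = abs(mean_a - mean_b) / ((mean_a + mean_b) / 2)
    return gap <= tolerance
```

A formal significance test would be stronger, but even this crude comparison would have flagged the skewed control group before a full test cycle was wasted.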

Tim Hoerrner ([email protected]) is chief customer economist at Carmot Marketing, which advises clients on profit-driving customer communication strategies.