Testing, Testing 1,2,3

Nov 01, 2003 10:30 PM

Did you know that testing elements of your catalog — creative, offers, promotions, segments, timing, covers — is the most cost-effective way to improve sales and profits? Testing isn’t a luxury; it is an integral part of a direct marketing business. Information learned from testing affects marketing, merchandising, and creative communications with customers. Successful direct sellers know the value of testing, the value of learning the type of creative that influences customer behavior, and the value of identifying which strategies work best for each customer segment.

You can find opportunities to test catalog creative and marketing within your customer database. When analyzing the customer transaction data, you should ask questions such as “How can we motivate first-time customers to make another purchase?” or “How can we entice customers to buy more than one item?” or “What can we do to attract more new customers?”

There are many options worth testing when trying to motivate a desired response. Here are a few:

Reactivation: Catalogers know it is generally less expensive to reactivate customers who haven’t made a purchase in more than a year than it is to prospect for new ones. The intent of identifying inactive customers is to increase their response rate, so the test ideas need to drive response. If reactivation is your goal, test offers such as reduced or free shipping and handling; cover messages; personalization; sale inserts; or product offers specific to an audience segment (for instance, shoes to previous footwear buyers).

Average order value: Your best buyers will have a much higher average order value (AOV) than your next-best customer segment or your first-time buyers. With this in mind, review the percentage increase you are trying to elicit from those segments, and be careful not to create an offer that “asks” too much of them. For example, if a customer segment already has a $65 AOV, testing a $100 threshold may be too much of a reach (a 53.8% increase) and could create a negative response. Testing an offer with a lower threshold, such as $75 (a 15.4% increase), is more likely to work. Keep in mind that unique target audiences respond differently to offers: what works best for one company may not succeed for another. Indeed, this is why testing is the only way to decipher what works best for your customers.
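
A minimal sketch of the threshold arithmetic above (the $65 AOV and the $75/$100 thresholds are the figures from the example; note that 15.4% is often rounded to 15%):

```python
def threshold_increase(aov: float, threshold: float) -> float:
    """Percent increase a spend threshold asks of a segment, relative to its AOV."""
    return (threshold - aov) / aov * 100

aov = 65.00
# A $100 threshold asks this segment for a 53.8% jump -- likely too big a reach.
print(f"$100 threshold: {threshold_increase(aov, 100):.1f}% increase")
# A $75 threshold asks for only 15.4% -- a far more realistic stretch.
print(f" $75 threshold: {threshold_increase(aov, 75):.1f}% increase")
```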

Prospects: Acquiring customers is one of the most expensive propositions in direct marketing and one of the most critical tasks in growing a catalog business. Identifying successful creative treatments and marketing messages is an ongoing endeavor. Features to test to increase response include cover creative, promotional messages, offers, and formats. Testing messages or creative treatments tied to seasonal spending (such as holidays and back-to-school) or corporate budgeting times (the end of the fiscal year) works well for consumer and business-to-business catalogers alike.

Multichannel marketing: To increase overall revenue, catalogers need to understand customers’ propensity to shop each channel. Tests can include coupons, discounts, and special offers by channel. B-to-b catalogers should also consider such factors as overall corporate spending, product life cycle, seasonal budgeting, and the decision process (targeting the gatekeeper vs. the influencer, for instance) in their testing environment.

The eleven keys to testing success

Each organization approaches testing differently and has its own testing philosophy. Yet one thing remains constant: how to test.

Testing doesn’t need to be complex or difficult. To help ensure the reliability and validity of testing as well as to reduce the number of circumstances that negate the test, here is a checklist of 11 points for success:

  1. Test only what you are willing to roll out.

    For instance, if your company’s philosophy is not to use introductory or sale prices on prospects, don’t bother testing them. You must also review the costs associated with rollout as well as with testing. An example of high costs associated with the rollout is inserting trial-size samples of a particular product via bind-in cards. On a limited basis, the expense is manageable, but offering samples to larger segments may be cost-prohibitive.

  2. Live by the rule of 100.

    For consumer catalogers a good rule of thumb to ensure statistical significance is 100 orders. If you have a test segment of 10,000, and typically this group generates a 4% response rate (400 orders for 10,000 catalogs mailed), this test is valid. If a test group of 5,000 is expected to generate a 1% response rate (or 50 orders), you should increase the test quantity to 10,000 to obtain at least 100 orders.

  3. Be sure that the offer can be integrated into the call center and order entry systems.

    Special offers, tests, and promotions are often brilliantly communicated on the mail piece or via e-mail, but logistically the test becomes a nightmare. Before setting the objectives for the test, you should solicit the guidance of the management of the call center (those who are speaking directly to the customer) and the distribution center (those who actually process and fulfill the orders). Find out if the order entry system accepts “percent off” promotions or “dollars off” promotions. Some systems can process only dollar savings, not percentages (and vice versa). If you’re offering free gifts, don’t forget that they need to be shipped to customers. Ensure that the operations people know about the offer, are able to inventory the gift merchandise in the warehouse, and can include the products with the outbound order. Nothing is worse than having the wrong type or size of packaging and having to spend extra money to ship gifts to customers.

  4. Conduct a back test, also known as a reverse test.

    Suppose a test proved successful last year, and the rollout of the offer is this year. A back test is the process of holding out a control group (adhering to the rule of 100) that will not receive the rollout and then monitoring the results. A back test provides insight into the rollout’s performance: Is the offer response rate from last year holding true, and are the results consistent with those of the test? The back test also maintains a baseline against which to measure performance. Say a test generated a 25% lift in response rate, with response rates increasing from 4% to 5%. The actual performance this year on the rollout, however, was 4.5%. The rollout may appear to be a failure. But using the back-test segment, you see that the response rate for the baseline was 3%. Calculating the difference between the two segments shows that the rollout actually outperformed the back test by 50% — not a failure after all.

  5. Identify and maintain controls.

    With any test strategy, clearly identifying a control segment (customers) and a control format (creative) is important for measuring the test results. The influences of recency, product affinity, loyalty or club status, multichannel shopping, and use of a proprietary credit card may skew response. Carefully — and randomly — selecting the test and control segments helps to compensate for those factors. For creative, the control is defined as the base catalog — the catalog with no offer. Do not make the common mistake of testing two offers against each other (for instance, free shipping vs. a gift with purchase) without a base catalog. In this case, proper analysis includes the development of three test groups: the control (no offer), the test of free shipping, and the test of free gift with purchase. The results are evaluated against the control of no offer.

  6. Follow through with the offer or promotion.

    Two of the most common errors are not updating the Website to accommodate the offer, and not telling the call center about the offer. Be sure the Website prompts for the correct codes, uses pop-ups to indicate offer eligibility, and has any and all correct calculations, as these affect customer acceptance. The call center needs details of the offer, the correct source codes, and appropriate default codes to accommodate customer requests.

  7. Measure the results campaign to campaign.

    Tests should be rolled out in the same period as last year with the same format as last year. Knowing that a test worked very well in the fall season does not ensure that it will work the same way in the spring. An offer communicated on a postcard cannot be transferred to a dot whack on the cover without a change in response. Seasonality, timing, medium, segment, and channel are all important factors to consider when measuring test results.

  8. Record and learn.

    Develop a testing library with samples, segments, results, and observations. Don’t let the word “library” scare you: It can be nothing more than binders with the pertinent information kept on a reserved shelf. Too often tests are conducted, but the information is then misplaced or not recorded. How many times has someone in the organization said, “We’ve tried that,” but no one can find the sample or the data? Also, with a process in place to record and archive the information, personnel changes do not interrupt the record keeping.

  9. Watch over time.

    Monitoring performance will enable you to identify early symptoms of offer fatigue. It can also alert you to the possibility that customers might be more receptive to offers that had failed years ago.

  10. Allocate costs correctly.

    It’s easy to forget to include the financial impact of the test. The costs from bindery, printing, and creative are readily noticeable because they occur prior to the mailing and are often visible on an invoice. The costs most frequently omitted are those that hit the profit-and-loss statement or the breakeven calculation: reductions in gross margin and increases in fulfillment charges are two of the most commonly overlooked expenses.

  11. Know the gains you need.

    As you contemplate a new test, make certain to understand the gain in response percentage and AOV or sales per catalog that you need to pay for the test. Let’s say you are testing personalization on the cover (vs. the control of no personalization). You need to understand the rollout or full-scale cost of a personalized cover and apply that additional printing cost per thousand to your breakeven. If the breakeven indicates that you need a 50% gain in sales per catalog to pay for personalization, kill the test! It won’t happen. But if a gain of 7% is required, go for it. Personalization could easily achieve that gain. Use the old “reality check” in deciding whether a test makes economic sense.
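The sample-size arithmetic behind the rule of 100 (point 2) can be sketched as a quick check. This is an illustrative helper built on the rule of thumb above, not a formal significance test:

```python
import math

def required_mail_quantity(expected_response_rate: float, min_orders: int = 100) -> int:
    """Smallest mailing expected to yield at least min_orders at the given response rate."""
    return math.ceil(min_orders / expected_response_rate)

# Figures from point 2: 4% of 10,000 names is 400 expected orders -- a valid test.
print(int(0.04 * 10_000))            # 400
# At a 1% expected response rate, 5,000 names yield only 50 orders,
# so the mailing must grow to 10,000 to clear the 100-order bar.
print(required_mail_quantity(0.01))  # 10000
```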
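The back-test arithmetic from point 4 works the same way each year: lift is simply the percent difference between the test response rate and the control (or holdout) response rate:

```python
def lift(test_rate: float, control_rate: float) -> float:
    """Percent lift of the test response rate over the control response rate."""
    return (test_rate - control_rate) / control_rate * 100

# Last year's test: 5% against a 4% control -> a 25% lift.
print(f"{lift(5.0, 4.0):.0f}% lift")
# This year's rollout: 4.5% against the 3% back-test baseline -> a 50% lift,
# so the rollout beat its baseline even though it missed last year's 5%.
print(f"{lift(4.5, 3.0):.0f}% lift")
```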
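The “know the gains you need” reality check in point 11 can also be sketched. The formula and the dollar figures here are illustrative assumptions, not numbers from the article: it assumes the extra printing cost must be covered by the contribution margin on incremental sales.

```python
def required_gain_pct(extra_cost_per_m: float, sales_per_catalog: float, margin: float) -> float:
    """Percent gain in sales per catalog needed for incremental margin to cover the extra cost.

    extra_cost_per_m: added production cost per 1,000 catalogs (e.g., cover personalization)
    sales_per_catalog: baseline demand per catalog mailed
    margin: contribution margin on incremental sales, as a fraction (0-1)
    """
    extra_cost_per_catalog = extra_cost_per_m / 1000
    required_extra_sales = extra_cost_per_catalog / margin
    return required_extra_sales / sales_per_catalog * 100

# Hypothetical figures: $35/M extra for personalization, $2.00 sales per catalog, 50% margin.
print(f"{required_gain_pct(35.0, 2.00, 0.50):.1f}% gain needed")
```

If the required gain comes back near 50%, kill the test; if it is in the single digits, the test may well pay for itself.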

Creative elements

The creative implications defined within the testing strategy are integral to the testing process. Communicating the test objectives to the creative team is the prerequisite to communicating the offer to customers. The creative staff can work best if they are aware of the different tests and are able to effectively design each component in concert with the rest of the creative.

The role of the creative platform is to bring the test strategy to implementation. Design and copy techniques make the offer compelling and relevant. If the creative doesn’t command attention, don’t bother with the test. Too often catalogers want to produce creative that is esthetically pleasing but does not motivate the reader or communicate the promotional offer.

One of the roles of the catalog cover is to convey the offer. The same rule applies to the retail storefront or window, the landing page of a Website, the postcard, the bind-in card, the insert: Convey the offer. Techniques with contrasting colors, persuasive words, immediate recognition of the offer, and prominent placement have been proven to do just that.

When it comes to testing, cover presentations are your biggest opportunity to change customer behavior. In terms of testing offers, always place the message on the cover, where a customer or prospect has the best chance to see it. Burying a message inside the book has little chance for discovery — especially by prospects, who may never open the catalog.

The way you position and design the offer is also critical. The message should be short and succinct, and it should stand out from the other visuals. Otherwise, it will not get noticed. Testing on the covers, front and back, also provides the easiest and most cost-effective way to version your offers and segments when it comes to production and bindery.

Testing creative is never an easy task. But testing creative on your covers is not only affordable and easy to manage, but it also provides you with insight regarding what images will best motivate your readers to action. Types of tests include lifestyle vs. product, one category of products vs. another, and a group of products vs. a single product. To get a clean read on this type of test, be sure that the images are very different in their presentation.

One last piece of advice: Too many offers, versions, messages, segments, promotions, and inserts can produce unreadable analysis. The creative and production logistics of managing so many tests, as well as keeping the entire customer segmentation valid, are overwhelming, and the resulting analysis is often inaccurate.

Testing is an important business objective. Consistently challenging the status quo, continually evaluating customers’ buying preferences, and regularly testing the creative and marketing of every promotion provide the most practical road map for improving profitability.

Lois Boyle is president/chief creative officer and Gina Valentino is vice president/general manager of J. Schmid & Associates, a Shawnee Mission, KS-based catalog consulting firm.

Six Test Spoilers

  1. Not allocating responses appropriately causes misinterpretation of the data. Remember to distribute unknown source codes (a.k.a. default or no codes) across all responses. Also set up separate codes for each order channel to help track responses.
  2. Mother Nature, Wall Street, and uncontrollable calamities that affect a mailing cannot be avoided. Nonetheless, they will lead to unreliable results, making the test invalid.
  3. Fewer than 100 responses negates the statistical significance. The test quantity is too small or the response rate too low.
  4. When offers are nondescript and fade into the background, customers are not compelled to read or act upon the information.
  5. Changing the tested components in a rollout will not guarantee forecasted results. Specific elements of the creative were tested and performance measured purposefully. Modifying the creative in any way annuls what was previously learned.
  6. Forgetting to include “the fine print” with expiration date, eligibility, exclusions, and other legal information necessary to clarify the specifics of the offer is akin to forgetting any other element of the test.
    — LB/GV