Every couple of years, we find that we have to revise our “benchmarks” for benchmarks. Order fulfillment practices change far more rapidly today than they did even in the recent past, and we frequently discover that what we measured just a year or two ago is no longer relevant by the time we revise a report — which is not only confusing but also makes year-over-year comparisons difficult.
This has been the case with our performance benchmark research. Originally designed to measure warehouse and contact center practices in terms of order turnaround time, staffing levels, number of SKUs handled, and so forth, our research has had to grow in complexity to include multichannel fulfillment processes. We’ve had to go from answering simple questions such as “How many shipments a day does an operation similar to mine handle?” to dealing with conundrums like “How can I equip my facility to ship both pallets and eaches?” “What defines agent efficiency in a multichannel contact center operation?” and “How do I calculate direct labor fulfillment cost for online and store operations served by the same DC?”
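That last conundrum, splitting direct labor fulfillment cost across channels served by the same DC, can be sketched as a simple volume-weighted allocation. The function name, channel names, and all figures below are invented for illustration, not a method drawn from the survey:

```python
# Hypothetical sketch: allocate a DC's direct labor cost across the
# channels it serves, weighted by order lines shipped per channel.
# Function name, channels, and all figures are invented for illustration.

def allocate_labor_cost(total_labor_cost, lines_by_channel):
    """Split total direct labor cost in proportion to order lines."""
    total_lines = sum(lines_by_channel.values())
    return {
        channel: round(total_labor_cost * lines / total_lines, 2)
        for channel, lines in lines_by_channel.items()
    }

costs = allocate_labor_cost(
    120_000.0,  # invented monthly direct labor spend, in dollars
    {"online": 45_000, "store": 30_000, "catalog": 25_000},  # order lines
)
print(costs)  # {'online': 54000.0, 'store': 36000.0, 'catalog': 30000.0}
```

Order lines are only one possible allocation driver; weighting by picks, cartons, or labor hours logged per channel follows the same pattern.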
Addressing problems like these, we found, meant changing the initial premise of our research. In our May 2003 benchmark survey, we focused on presenting a “holistic” view of the operations function by analyzing its various components. This year we turned to the more challenging task of finding out how a multichannel operation — one that, in addition to managing a variety of functions, fulfills orders from a variety of sales venues — measures its performance.
Clicks trump bricks
As it turns out, we were right in assuming that a multichannel operation would be more complex to analyze. Our respondent profile, for instance, has morphed into something markedly different from what it was two years ago. In 2003, 35.4% of our respondents classified themselves as catalogers; this year only 22.6% labeled themselves as such. The percentage of manufacturers responding to the survey has soared to 17.0% from a mere 6.7% two years ago, and the percentage of retailers in the sample has dropped from 16.9% to 11.3%. Perhaps the best indication of how diverse multichannel retailing has become is the high proportion of respondents who called themselves “other”: 22.0% categorized their primary business as anything from management/media consulting to IT services to direct sales. And maybe because of channel expansion, fewer companies (28.3% this year vs. 35.4% in 2003) classified themselves as midsize — i.e., generating annual sales of $10 million-$49.9 million.
When we asked respondents the number of channels for which their companies fulfill orders, we again received a huge variety of responses. The majority fulfilled orders for at least four channels — brick-and-mortar store, online store, mail order catalog, and field sales — and a fairly significant 12.6% were into such unorthodox venues as educational and financial information, international shipments, consignment sales, and wholesale distribution. Clearly the Internet is king: Online order fulfillment topped the list of channels, with 69.2% performing it, and it reigned supreme as the leading warehouse function — 75% of the 144 respondents who operated distribution centers fulfilled Internet orders.
Of those 144 respondents, 44% ran only one DC. The average facility measured 113,759 sq. ft. and handled 11,132 SKUs a year. Respondents took a slew of warehouse performance measurements, from percentage of returns to quality control to work backlogs, but the five measurements mentioned most frequently were shipping and handling costs (73.6%); direct labor fulfillment costs (72.9%); labor hours to achieve output (70.8%); customer complaints and inquiries (70.1%); and output in terms of lines, cartons, or returns handled (70.1%). Notably, the percentage of respondents measuring S&H costs was down this year: 82.1% had measured this function in 2003. Another odd finding is that in the current survey, fewer respondents assessed the rate of return on investment — only 26.4% reported doing so, compared with 33.9% two years ago. We can only speculate that as the number of channels increases, so does the difficulty of tracking and measuring ROI for the people, processes, and technology that go into them.
Contact centers: Vive la différence!
Another major operational function, contact center management, also turned in some surprising results this year. As might be expected, the Internet was the most common channel served; of the 124 respondents who operated contact center facilities, 79% served online stores. What is unexpected is the speed at which traditional call center measurements have fallen out of favor. In 2003 nearly 80% of the respondents kept tabs on such time-honored performance indicators as call abandonment rates, average speed of answer, and average talk time. Since then the number of respondents measuring those factors has plunged by more than 20 percentage points. (For example, two years ago 79.4% of respondents monitored average speed of answer compared with 56.0% now.)
Absenteeism has long been a contact center bugbear, yet only 49.7% of this year’s respondents cared to measure it, a nosedive from the 75.2% that tracked no-shows in 2003. It’s tempting to speculate that this, too, is the result of multichannel activity — with agents performing many more functions in the contact center and employing a far wider variety of skills than ever before, the conventional measurements may no longer be sufficient to gauge performance.
Similarly, conventional doesn’t cut it when it comes to boosting productivity in multichannel operations. For the first time, packing equipment and materials appeared on the list of the top five productivity boosters. Material handling equipment dropped from fourth to fifth place on that list; customer service moved up from third to second; and sadly, rewards and incentives dropped off altogether. Formerly ranking fifth, with 24.6% of respondents considering them a top productivity booster in 2003, rewards and incentives were cited by just 17.0% this year as a way to enhance productivity.
Just how to quantify productivity was a tricky issue; we decided to define it as the degree of improvement obtained when certain technologies or processes were put in place. Using that yardstick, contact center systems were the runaway victor this year, upping productivity by 25.6%, vs. 17.0% in 2003. Applications such as warehouse management and transport management systems didn’t perform as well on the productivity front; they topped the list two years ago, boosting output by 35.5%, but in 2005 that number declined to 23.7%.
Customer service rated third (23.2%) in terms of enhanced productivity this year. And not only did training boost productivity, but training productivity itself grew. Not a major factor in 2003, training this year was reported to have improved productivity by 22.7% overall.
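The productivity yardstick used above, the degree of improvement after a technology or process is put in place, is just a percentage change over a baseline. A minimal sketch with invented before-and-after figures:

```python
# Productivity measured as degree of improvement: the percent change
# in output after a technology or process change. Figures invented.

def improvement_pct(before, after):
    """Percent improvement in output relative to the baseline."""
    return (after - before) / before * 100

# e.g., order lines handled per labor hour, before and after a change
print(round(improvement_pct(82.0, 103.0), 1))  # 25.6
```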
None of these statistics are relevant if you don’t measure them often enough, so we’re encouraged to see that more companies today monitor performance data on a weekly basis — 39.0%, compared to 34.3% in 2003. And another 18.9% measure performance even more frequently.
What they do with that data is another matter. In 2003, more respondents used benchmarking data to improve training (80.9% vs. 61.0% today) and reward employees (61.2% vs. 47.2% currently). They were also stricter about setting performance goals based on the benchmarks — 70.8% did so for warehouse and call center facilities and 64.6% for individual workers, vs. 62.3% and 54.7% this year.
Of course, there’s more than one way to deal with benchmarking data. To our question “What does your organization do with the measurements that it collects?” one respondent replied, “Gets depressed!”
Top warehouse performance measurements (% of 144 respondents with DCs; multiple answers)
Shipping and handling costs: 73.6%
Direct labor fulfillment cost: 72.9%
Labor hours to achieve output: 70.8%
Output (lines, cartons, returns handled): 70.1%
Traditional contact center measurements (% of 159 respondents; multiple answers)
Average talk time: 57.2%
Call abandonment rate: 56.6%
Average speed of answer: 56.0%
Average time in queue: 55.3%
Top productivity boosters (% of 159 respondents; multiple answers)
Material handling equipment: 20.1%
8 Guidelines for Research
Managing your various sales channels may be difficult, but measuring their performance doesn’t have to be — if you follow proven research methods. The eight hallmarks of reliable measurement described below, drawn from research textbooks and industry experts, offer systematic ways to evaluate people, processes, and functions in your operations.
- Data quality
You can’t go wrong if your research meets the three venerable standards of validity, reliability, and generalizability. The research instrument must measure what it is supposed to measure; it must provide the same results on different occasions; and the patterns it detects in a sample must be present in the wider population from which it is drawn.
- Data analysis
Assuming that your research meets the above criteria (and stop right here if it doesn’t!), you can begin to analyze the information you have collected. Remember that correlation doesn’t necessarily imply cause and effect, and pay meticulous attention to interpreting the data correctly. In the Hawthorne Studies conducted at Western Electric’s Hawthorne Works in the 1920s and early 1930s, the researchers initially attributed changes in productivity to the physical environment rather than to group processes. As a result, the company implemented changes that did not have the desired result.
- Sample size
Are you looking at enough research data? Is there too much information? Too little?
- Consistency
It isn’t enough for your sample to be statistically valid from year to year. All the conditions under which you measure should remain the same for the duration of the research. Even changes in the weather, lighting conditions, and decor can affect the results.
- Completeness
Yes, you’ve captured the history of every transaction ever recorded. But have you factored in the contributions of temporary or third-party services? How about interdepartmental services?
- Business changes
Your data must reflect everything that has taken place in the business — mergers, alliances, changes of ownership, SKUs added, products removed, and so forth.
- Practicality
If your study is intellectually appealing but has no immediate practical application, drop it! Your primary objective is to generate a higher ROI, so conduct only research that will further this goal. Also, ask yourself how long the survey will take. Many studies drag on for years, leaving major problems unresolved in the meantime.
- Cost justification
Your CFO will grill you enough on this, so we’ll let you off with a warning: Make sure you can cost-justify every penny you spend on the research. It makes no sense to spend tens of thousands of dollars on a survey if the savings that result are not significant.
Degree of productivity improvement (total respondents = 95; multiple answers)
Contact center systems: 25.6%
On Feb. 1, 2005, Primedia Business Marketing Research e-mailed a 16-question survey to 3,613 O+F subscribers selected on an nth-name basis. Respondents were offered a chance to be entered into a drawing for one of four $50 Amazon.com gift certificates. A second e-mail was sent on Feb. 8 and a third on Feb. 15. By Feb. 22, 159 usable surveys had been received, for a response rate of 5.3%. Results were reported in three categories: companies with annual sales of less than $10 million, between $10 million and $49.9 million, and $50 million or more. In addition to warehousing, operations, and fulfillment managers, the respondents included managers of contact centers, IT, facilities, e-commerce, finance, and HR. Our survey was based on measures developed by Ron Hounsell, director of logistics services at Cadre Technologies, a distribution and fulfillment applications provider in Denver. We also used benchmarks published by the Council of Supply Chain Management Professionals (formerly the Council of Logistics Management), along with research methods specified in standard statistics texts. To purchase a copy of the complete study, visit http://multichannelmerchant.com/research/#multichannel