Customer Service

DR. JON ANTON
Director, Benchmark Research
CCDQ, Purdue University

We all have our own stories of customer service nightmares. Our favorite is the time our editor waited for more than ten minutes at a counter while the five sales associates behind it bantered with one another, ignoring her completely — and she was the only customer in the store! But that kind of uncouth behavior is meeting with less and less tolerance from customers, and retailers finally appear to be paying attention. In a recent survey of 176 companies conducted by Forrester Research Inc., 83% of respondents said that increasing customer satisfaction would be an important priority this year, and that they planned to enhance customer-facing activities. Contact centers, often the first point of interaction for most customers, set the tone for the entire shopping experience, and face the heavy burden of providing stellar service while maintaining high agent productivity and motivation. To find out how the best do it, O+F talked with Jon Anton, Ph.D., director of benchmark research at Purdue University’s Center for Customer-Driven Quality.

Have there been changes over the past year in the ways companies approach customer service benchmarks?

Yes. In the past benchmarking was highly unscientific. Mainly, individual managers would visit other call centers and discuss and compare “people, process, and technology.” Today, we at Purdue University have pioneered the concept of statistical benchmarking — namely, gathering performance data from thousands of customer service contact centers, and then using analytics to scientifically and statistically determine best practices in all the processes that support customer contact handling.

What are companies looking for?

The members of our International Benchmarking Community are primarily looking for best-practice information on all aspects of operating customer service contact centers: centers that handle calls, e-mails, Web self-service, and Web chat.

How do they use the benchmarks?

Benchmarks are used to establish internal performance goals. We present to the International Benchmarking Community members the best-in-class performance metrics that their “peer group” of contact centers is able to obtain. Sample performance metrics include contacts handled per agent per shift, average call handle time, average speed of answer, average number of transfers, and about 30 others.
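The per-center metrics Anton names can be computed directly from raw call records. A minimal sketch in Python, with invented data and illustrative field names (none of these come from Purdue's actual schema):

```python
from statistics import mean

# Each record is one handled call; field names and values are illustrative.
calls = [
    {"agent": "a1", "handle_secs": 240, "answer_secs": 12, "transfers": 0},
    {"agent": "a1", "handle_secs": 310, "answer_secs": 45, "transfers": 1},
    {"agent": "a2", "handle_secs": 180, "answer_secs": 8,  "transfers": 0},
]

avg_handle_time = mean(c["handle_secs"] for c in calls)      # average call handle time
avg_speed_of_answer = mean(c["answer_secs"] for c in calls)  # average speed of answer
avg_transfers = mean(c["transfers"] for c in calls)          # average number of transfers

# Contacts handled per agent (one shift of data assumed).
per_agent = {}
for c in calls:
    per_agent[c["agent"]] = per_agent.get(c["agent"], 0) + 1
```

Each of the thirty-odd metrics Anton mentions reduces to a similar aggregation over the same contact records.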

Are there specific customer service or customer relationship management (CRM) benchmarks that IBC members consider of prime importance?

Yes. CRM and customer service benchmarks of prime importance include:

  • caller satisfaction;
  • average speed of answer;
  • average time in queue;
  • average number of transfers;
  • average time on hold; and
  • percentage of inquiries resolved on the first call.

In addition, there are about ten other benchmarks.

Have any of these benchmark measurements changed in recent years?

CRM has focused companies on making their customer service contact handling more customer-centric. This means a much higher focus on metrics that are observable by the customer. As indicated in the previous question, these are metrics that the caller can actually use to “compare” your performance with their own expectations, which are based primarily, and unscientifically, on the customer’s experience in contacting other centers.

Are there benchmarks of more interest to specific sectors such as direct-to-customer retailers, fulfillment services, or parcel carriers?

Definitely. Each internal process of a company that is part of the end-to-end customer experience should be benchmarked separately, and each process has its own benchmarks. The telephone call, e-mail, and/or Web visit is only one of many processes that can and should be benchmarked. At Purdue, we are quite involved in the concept of “customer experience” benchmarking.

Has there been a marked increase or decrease in interest in either particular benchmarks, or benchmarks in general?

Benchmarking began slowly but is now accelerating quickly. Activities like [the quality improvement program known as] Six Sigma, as well as the Purdue University certificate known as “Center of Excellence,” have drawn management’s attention to the fact that benchmark data should be part of the DNA of a company’s reporting. No longer is it enough to compare your performance with your own performance, one month to the next or even one year to the next. It is now mandatory that you add a comparison of your performance to a peer group of similar customer service operations in other companies.

What trends, if any, are you seeing or expect to see develop in the future?

Benchmarking will become a quarterly event, much like preparing the company’s monthly and/or quarterly financial reports. We are offering a “Direct Connect” program whereby a customer service manager can upload performance statistics to the Purdue database. We then automatically return aggregated peer-group performance data to the Community member. This makes benchmarking a continuous process.
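At its core, the upload-and-compare loop Anton describes is a comparison of one center’s numbers against aggregated peer statistics. A hypothetical sketch (the actual Direct Connect interface is not described here, and all figures are invented):

```python
from statistics import mean, quantiles

# Hypothetical peer-group data: average speed of answer (seconds)
# for similar customer service centers.
peer_asa = [14, 22, 9, 31, 18, 25, 12, 20]
our_asa = 15

peer_mean = mean(peer_asa)
q1, median, q3 = quantiles(peer_asa, n=4)  # quartile cut points

# Lower speed of answer is better; report the share of peers we beat.
better_than = sum(1 for x in peer_asa if our_asa < x) / len(peer_asa)
```

A real program would repeat this for each benchmarked metric and return the aggregates, never the individual peers’ raw data.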

To what economic or other causes would you attribute any of the above?

Top-level executives are driven by these ROI motivators:

  • reducing cost;
  • increasing revenue;
  • increasing customer satisfaction;
  • increasing market share; and
  • increasing wallet share.

Benchmarking allows top management to continuously compare and, in the process, answer the nagging question: “Are we doing a good enough job?”

Are you aware of instances in which call center benchmarking results are used to motivate CSRs?

Yes. When call center managers benchmark their performance using our extensive database of best practices, it becomes very clear where the centers are doing well, and where they need improvement. Many call center managers find it motivating for the CSRs to “see” where they rank as compared to other centers like them. Also, since benchmarking is part of the journey to becoming a certified Purdue University Center of Excellence, when centers do become certified, there is a lot of celebration, which includes the agents, who feel proud to have achieved such a high level of best-of-breed performance.

How can benchmarks be used to provide such motivation?

Let me cite two of the most common ways: (1) Ranking tables are posted on bulletin boards that show how the center’s performance compares to a “peer group” of similar call centers, and (2) once a center is certified as a Center of Excellence, plaques and banners are hung throughout the center to remind all frontline agents that they helped make this high-quality performance possible.
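The ranking table in the first example is simple to produce: sort the peer group by the chosen metric and number the rows. A sketch with invented center names and satisfaction scores:

```python
# Peer-group ranking by caller-satisfaction score; all data is invented.
peer_scores = {
    "Center A": 87,
    "Center B": 92,
    "Your Center": 90,
    "Center D": 78,
}

# Higher satisfaction is better, so rank in descending order.
ranking = sorted(peer_scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (center, score) in enumerate(ranking, start=1):
    print(f"{rank}. {center}: {score}")
```

Posted on a bulletin board, a table like this lets agents see at a glance where their center stands against its peers.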

Which benchmarks are most likely to be used and/or most effective to use for CSR motivation?

The most common benchmarks are those that CSRs can control by their own performance. These include:

  • caller satisfaction;
  • average handle time;
  • calls per agent per shift;
  • adherence to schedule;
  • attendance; and
  • occupancy.

Do such agent motivation attempts ever backfire?

Not really. I think Americans are natural scorekeepers and have a tendency to want to compare. Questions like “How good are we?” and “How do we compare to the competition?” are all part of doing a good job.

Are benchmarks more effectively used to motivate staff during training, or as a real-time motivator?

I think benchmarks can be used effectively to motivate both during training and in real time. In almost all of our work and play endeavors, we strive to at least achieve the “normal” or average, and that’s what benchmarking is all about: setting the bar at a reasonably achievable level for all CSRs.

Jeff Morris is contributing editor of O+F.

8 Building Blocks of CRM

  1. CRM Vision

    To increase customer and financial value

  2. Valued Customer Experience

    Focus on customer satisfaction; customer base potential; lifetime value

  3. CRM Strategies

    Offshoring; wireless and CTI technologies

  4. Organizational Collaboration

  5. CRM Processes

    Integrated CRM and ERP

  6. CRM Information

    Creating a touchpoint value network

  7. CRM Technology

    Cross-functional process re-engineering

  8. CRM Metrics

    Abandon rate, first-call resolution, number of calls handled, QA, upsell/cross-sell

Counterpoint

Gary Lemke, publisher of RealMarket Today!, an online resource for the CRM and contact center businesses, says he’s “not a big fan” of benchmarking. “I just feel that often the benchmarking effort diverts people away from really understanding their business by focusing too much on what others are doing.”

In Lemke’s view, most firms’ interest in benchmarks isn’t necessarily driven by a desire to improve processes: “People use benchmark data for two reasons. One is to justify to management how well they are doing compared to others. It’s the ‘look how good we are’ presentation. The second is to justify to management additional investment in resources (people, technology, change in process, etc.) because they are doing poorly related to others. It’s the ‘look how terrible we are compared to others’ presentation.”

What’s missing from both, Lemke continues, is customer expectations. Although measurements are essential, comparing oneself to other organizations is dangerous, he says, because “there are big assumptions about it being ‘apples to apples.’ Look at it this way: You want to be better than the competition. Who are the last people that will benchmark with you? The competition.”
JM