Chapter 7: Monitoring

December 1, 2003

WHY MONITOR?

As you will see, most centers monitor, especially phone calls. The reasons they monitor, according to an ICMI/ACNielsen study done in 2002, include:

  • 77% measure agent performance

  • 72% identify additional agent training needs

  • 54% evaluate level of customer satisfaction

  • 49% identify customer needs and expectations

  • 36% let agents “listen and learn”

  • 12% educate other departments about customers

While measuring an agent’s average talk time against benchmarks gives you a general sense of their efficiency, statistics alone aren’t sufficient indicators of performance. Why? Because they deal with people in quantities rather than as individuals. Basing evaluations solely on statistics doesn’t tell you what makes an agent special in the way he or she communicates with customers. If performance were just a matter of measuring average call length, quality assurance would be an easy discipline. Instead, through QA you can establish standards that reinforce, rather than dictate, how agents communicate effectively with customers.

Monitoring is a great way to have your agents participate in the evaluation process, take ownership of what they consciously and unconsciously are doing (or not doing), and implement any needed corrections.

FEEDBACK

When giving feedback or coaching, be sure to acknowledge your agents first for all that they do right, so that they can actually hear what you say when you are constructively critiquing them. A good rule of thumb is the 3 × 3 model used by many coaches (Decker, 1993). Here, feedback consists of three strengths followed by three weaknesses. This keeps it simple, ensures balanced feedback, and, we believe, results in more active listening on the part of the agent.

Another method of feedback is the ‘sandwich’ method. You begin with details of what the agent does proficiently, follow with suggestions for improvement, and close by reinforcing the agent’s good points, hence the name.

To make a difference with your agents, be C.L.E.A.R.

  • Concise — be specific.

  • Limit your points; don’t overload with information.

  • Easy-to-remember comments.

  • Action — what you want them to do.

  • Relevant — on purpose, not generalizations.

As you can imagine, a monitoring program is only as good as the feedback provided to the agents. Not every employee is a good coach. If you are going to take the time to monitor call quality, also take the time to develop and implement a proper training program to be sure your people are equipped to provide constructive feedback, developmental guidance, and the communication skills for win/win situations.

CHALLENGES

A first-quarter 2003 study (Ascent Group, 2003) identified the top four challenges to monitoring, as reflected in the bottom chart on page 59. The most challenging hurdles to a quality program, according to respondents, were resistance from representatives, finding the time to monitor and provide feedback, calibrating accurately, and a lack of coach training.

Yet in answer to the question, “How do you measure the performance of your telephone agents?” 88% of respondents replied with: call quality monitoring scores (Anton, Rockwell, 2002).

One study found significant improvements in call quality, customer satisfaction, employee performance, and overall call center performance as a result of monitoring. Companies reported 3% to 10% improvement in call quality and 5% in customer satisfaction, plus indirect benefits that included reduced turnover and absenteeism and improved morale (ICMI).

MULTICHANNEL

Companies that provide multichannel service do not always monitor all channels. In a study conducted by the Ascent Group in 2003, all participants monitored phone calls, but only 40% monitored e-mail, 26% monitored data entry and keystrokes, 15% monitored letters, and 11% monitored faxes, as illustrated in the bottom chart on page 59.

In another study of other channel monitoring by ICMI/ACNielsen (ICMI, 2002), findings were:

  • 4 out of 10 centers monitor e-mail responses

  • 1 in 6 monitor fax responses

  • 1 in 14 monitor Web text-chat sessions

Four in ten centers monitor both voice and screen. There appears to be a strong relationship between the size of a center and whether they monitor voice and screen. As a center’s size increases, the likelihood that it will monitor both mediums also increases (ICMI, 2002).

BEST PRACTICES

Best practice is to monitor each agent five times per month. However, the right frequency truly depends on the agent’s experience: during the first thirty to ninety days of employment, monitoring should be more frequent, and after two years, less frequent (Ask Dr. Jon, 2003). Incoming Calls Management Institute’s monitoring study found a wide variance in the number of calls monitored each month per agent. As you can see in the chart on page 59, the most common frequencies are 4-5 calls or 10 or more. Financial services centers monitor the highest number of calls, with more than one-third monitoring 10 or more calls each month per agent (ICMI, 2002).

STRATEGIES FOR SUCCESS

Regardless of which channel a customer uses to contact you, strategies for success should include:

  • Respond quickly.

  • Handle requests through the customer’s choice of medium.

  • Be brief and be clear — reduce back and forth.

  • Personalize service.

Rosanne D’Ausilio, Ph.D., industrial psychologist and president of Human Technologies Global Inc., specializes in profitable call center operations, providing needs analyses, instructional design, and customized, live training across industries. Dr. D’Ausilio is a Certified Call Center Benchmarking Auditor through Purdue University’s Center for Customer-Driven Quality.

Contact information:
Rosanne D’Ausilio, Ph.D.
President, Human Technologies Global Inc.
3405 Morgan Drive
Carmel, NY 10512
(845) 228-6165; fax (775) 206-0290

www.human-technologies.com
rosanne@human-technologies.com

Jon Anton, Ph.D., is the director of benchmark research at Purdue University’s Center for Customer-Driven Quality. He specializes in enhancing customer service strategy through inbound call centers and e-business centers, using the latest in telecommunications (voice) and computer (digital) technology. Since 1995, Dr. Anton has been the principal investigator of the annual Purdue University Call Center Benchmark Research. This data is now collected at the BenchMarkPortal.com Web site, where it is placed into a data warehouse that currently contains over ten million data points on call center and e-business center performance.

Contact information:
Jon Anton, Ph.D.
(765) 494-8357

www.benchmarkportal.com
DrJonAnton@BenchmarkPortal.com


The text of these excerpts has been reproduced verbatim from the original with stylistic changes only. It does not follow O+F’s standard editorial format.