How to Create the Most Effective Call Monitoring Program, Part 2

The first part of this article appeared in the July issue of the O+F CONTACT CENTER ADVISOR, a sister e-newsletter published monthly. Part one focused on the call monitoring form. Here, we’ll discuss the monitoring process itself.

Let me start by stating the proverbial “I wish I had a nickel for each time someone asked me how many calls should be monitored per agent per month.” How many calls you should monitor really depends on the reason you’re monitoring:

Reason #1: short-term determination of the skill and knowledge needs for coaching. Here’s where the benchmark of 5-10 calls per agent per month comes in: by looking at a sample of this size, you can identify trends in a person’s performance. By trending the areas in which an agent is struggling, you can develop action plans to help him improve.

Reason #2: long-term determination of performance for pay, promotion, or termination. Typically this is where contact center workers start talking about statistically significant samples and asking, “How can you base decisions on such a small sample size?” The fear of making the wrong decision based on the wrong information is a valid concern. But statistics can help here.

First of all, there is no such thing as a “statistically significant sample of calls.” By its very definition, a “statistically significant sample” implies that nothing has influenced the agent between observations. In the contact center, that’s impossible: any time a coach gives feedback or teaches the agent something, that agent has been influenced (albeit positively).

So we must turn to another statistical principle for help. We can define a sample size that stabilizes the outcome of the observations, reducing the variability (or error) to a very reasonable level and giving us confidence in the decisions we base on that information.

The Law of Large Numbers says that sample averages tend to stabilize as the sample size increases. The Rule of 30 comes from the transition from the t-statistic to the z-statistic: after n = 30, the variability of the sample is about the same as the variability of the population, so, by inference, samples of 30 or more observations are good estimates of the population. Without going into a lot of additional statistical definitions, this means a good sample size for call observation turns out to be 30 or more observations.
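To see where that cutoff comes from, here is a minimal sketch (in Python with SciPy; the article itself prescribes no tools) comparing the two-sided 95% critical values of the t-statistic with the z-statistic’s 1.96. The gap shrinks to under a tenth of a point around n = 30:

```python
# Minimal sketch: how the t critical value converges toward the z critical
# value as the sample size grows, which is the basis of the Rule of 30.
from scipy.stats import norm, t

z_crit = norm.ppf(0.975)  # two-sided 95% critical value of z: ~1.960
for n in (5, 10, 20, 30, 60, 120):
    t_crit = t.ppf(0.975, df=n - 1)  # matching t critical value
    print(f"n={n:3d}  t={t_crit:.3f}  z={z_crit:.3f}  gap={t_crit - z_crit:.3f}")
```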

Therefore, you should make no pay, promotion, or termination decisions without a sample size of at least 30 call observations. As stated before, it’s okay to make coaching decisions based on fewer than 30 observations, but you should not make career-enhancing or -limiting decisions on fewer than 30 observations.

As for when to monitor…have you ever had one of those days when nothing seemed to go right? Yes, me too—and so do your agents. This does not mean the agents have a license to provide shoddy service; it just means that it’s not fair to monitor all their calls on the same day. By monitoring on various days you increase the validity of your assessment and overcome the objection of “You just caught me on a bad day.”

Similarly, I don’t know about you, but right around 3 in the afternoon my energy heads south. If you were to monitor all my calls between 3 and 4, you probably wouldn’t get a very good view of my overall capabilities. Again, this is not an excuse for providing anything less than excellent service; it’s just a fact of life. By monitoring at various times of day, you again get a more accurate assessment and overcome the objection of “You just caught me at my worst time of day.”

Another common objection from agents is “The reason I got such a low performance review is because my supervisor doesn’t like me” or “Mary always gets high marks; she’s the supervisor’s favorite.” To overcome this hurdle we suggest that more than one person monitor each agent. If you are monitoring 10 calls a month, have one person monitor five calls and another person monitor the other five.
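Putting these last three points together, here is a minimal sketch of what a monthly plan might look like; the function and its parameters are hypothetical, not from the article. It spreads an agent’s monitored calls across random days and hours and splits them evenly between two monitors:

```python
# Minimal sketch (hypothetical): spread an agent's monitored calls across
# different days and hours, split evenly between two monitors.
import random

def build_monitoring_plan(agent, n_calls=10, monitors=("Monitor A", "Monitor B")):
    days = random.sample(range(1, 21), n_calls)              # distinct working days
    hours = [random.randint(9, 16) for _ in range(n_calls)]  # varied hours of day
    return [
        {"agent": agent, "day": d, "hour": h, "monitor": monitors[i % len(monitors)]}
        for i, (d, h) in enumerate(zip(days, hours))
    ]

for slot in build_monitoring_plan("Agent 1"):
    print(slot)
```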

Another way to avoid the appearance of favoritism is to ensure consistency regarding how many calls are monitored per agent. I’m not talking about cases in which managers monitor employees right out of training more than they do veteran employees. I’m talking about situations in which “I just ran out of time and didn’t have the opportunity to monitor five calls for everyone. Some of my people had five calls monitored; others had only three.”

Each agent must know that his final assessment is fair compared with every other agent’s, especially when compensation is tied to it! One way to ensure this is to assess each agent using the same number of calls and the same number of assessments per evaluation period.

Consistency needs to extend to calibration: the process that ensures any two people can listen to the same call and score its quality the same way. The first step in calibration is to make sure that everyone in your center defines excellence the same way.

For example, in the greeting, are your agents allowed to say “How can I help you?” rather than “How may I help you?” “How can I help you” is grammatically incorrect (“can” denotes ability, “may” denotes permission). Some contact centers say that the difference is so slight that “can” is permissible (they cannot imagine holding agents accountable for that). Other contact centers can’t imagine not holding agents accountable for all levels of grammar.

I use this illustration not to say that one way is right and one is wrong. The point is that contact centers have different definitions of excellence. And until your definition is clear, some of your monitors may score the call one way while others will score it a different way.

The next step in calibration is to move the team to a tolerable level of scoring variance. Again, this “tolerable level” must be defined by your team. Some companies aim for a 4%-6% variance.
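The article doesn’t define how that variance is computed. One simple reading, sketched below under that assumption, is the spread between the highest and lowest score the team gives the same call, expressed as a percentage of a 100-point form:

```python
# Minimal sketch, assuming "variance" means the spread between the highest
# and lowest score the team gives the same call, as a percent of the scale.
def calibration_spread(scores, scale=100):
    return (max(scores) - min(scores)) / scale * 100

team_scores = [88, 85, 90, 86]  # four monitors scoring the same call
spread = calibration_spread(team_scores)  # 5.0%
print("within tolerance" if spread <= 6 else "needs a calibration discussion")
```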

Calibration is accomplished through a grueling process: as a team, you listen to a call, score it, and then discuss it with each other. It is through the discussion that the team becomes calibrated. In our experience it takes a team approximately 50-60 hours to calibrate, and then approximately four to six hours a month to stay calibrated. That’s quite a commitment!

There’s one last step to ensure you have the best process possible: training. Make sure you can support the development of any skill in which you’ve noted a deficiency. It’s not okay to tell an agent that his voice quality is below par and add, “Oh, by the way, I don’t have any means to help you improve.” You might as well be saying, “Try harder, do better.” The help you provide can be internal (mentoring, coaching modules, role-playing) or external (public courses, seminars, classes at local schools).

After you’ve designed the call monitoring sheet (as per part one of this article), assess if you can support the development of each knowledge, skill, and attribute (KSA). If not, our suggestion is to put that KSA in a holding tank (take it off the form) until you can coach to it.
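As a trivial illustration of that holding-tank rule (the data structure here is hypothetical, not from the article), the form is filtered down to the KSAs you can currently coach to:

```python
# Minimal sketch (hypothetical data): keep only coachable KSAs on the form.
ksas = [
    {"name": "Greeting", "coachable": True},
    {"name": "Voice quality", "coachable": False},  # no training resource yet
    {"name": "Product knowledge", "coachable": True},
]

active_form = [k["name"] for k in ksas if k["coachable"]]
holding_tank = [k["name"] for k in ksas if not k["coachable"]]
print("Score these:", active_form)
print("Holding tank until you can coach to them:", holding_tank)
```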

Kathryn Jackson is cofounder of Ocean City, NJ-based contact center consultancy Response Design Corp.