How to Create the Most Effective Call Monitoring Program, Part 1

How many calls do you monitor per agent per month? Have you heard the statistic that “world class” contact centers monitor 5-10 calls a month? Have you ever had an agent tell you that it is not fair to base her evaluation on such a small percentage of monitored calls? Are you confused about how to define a statistically significant sample? Are you concerned about the fairness and accuracy of your monitoring program? Are you asking how you can design a call monitoring program that allows you to make the short- and long-term decisions required to ensure happy employees and a quality organization?

If so, then this two-part article is for you! Here, in part one, we’ll discuss best practices for developing the call monitoring form. Later, in part two, we’ll detail the monitoring process itself.

Developing the call monitoring form

The foundation of any monitoring program is the call monitoring form, and we have a few suggestions for you as you design this important document.

1) Make the form easily updateable.
I’m sure you are well aware that nothing ever stays the same in the contact center (that’s an understatement if I’ve ever heard one). Customers’ needs change, and so do the company’s products and services. You get smarter about how to deliver excellence. Each of these changes means that the call monitoring form must be updated.

We suggest that you put a process in place to update the form every quarter; modifying the form is quite involved and usually cannot be done any more frequently. It is important to have one person act as the conduit for all change requests, which can come from anyone, agents and supervisors alike. The conduit funnels each request to a change team that evaluates it and decides whether it is in line with the intent of the form. If a request is not adopted, the conduit is responsible for telling the person who made it why it was not. If it is adopted, the form is updated, the change is communicated, and the agents are trained on the new quality requirement.

2) Make the form specific. List items down to the skill and knowledge level. Start by determining why you have a call monitoring form in the first place. If the reason is to assess the agent’s performance during a call and provide coaching when the agent struggles, then it is imperative to design the form so that you can track performance trends all the way down to the skill and knowledge level.

Let’s say that you have a category called “listening skills” on your form. After monitoring several calls, you discover that you marked the agent as struggling with listening on each call. But in retrospect, how do you know what to coach on? Was the person interrupting the customer on each call? Or was he struggling with how to paraphrase the customer’s request?

“Listening” is a broad topic that can be broken down into skill and knowledge components. If you had a line item that referenced “interrupting” and another line item that referenced “paraphrasing,” then you could trend a skill deficiency and easily design coaching sessions based on these trends.
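If you track your monitoring results electronically, a quick tally shows why this granularity pays off. Below is a minimal sketch in Python (the line items and call results are hypothetical, and the same tally could just as easily be kept in a spreadsheet):

from collections import Counter

# Hypothetical results from five monitored calls, recorded per line item
# rather than under a broad "listening skills" category.
monitored_calls = [
    {"interrupting": "struggled", "paraphrasing": "demonstrated"},
    {"interrupting": "struggled", "paraphrasing": "demonstrated"},
    {"interrupting": "demonstrated", "paraphrasing": "struggled"},
    {"interrupting": "struggled", "paraphrasing": "demonstrated"},
    {"interrupting": "struggled", "paraphrasing": "demonstrated"},
]

# Count how often the agent struggled with each specific skill.
struggle_counts = Counter(
    skill
    for call in monitored_calls
    for skill, result in call.items()
    if result == "struggled"
)

print(struggle_counts.most_common())
# [('interrupting', 4), ('paraphrasing', 1)]
# The trend points to a coaching session on interrupting, not on "listening" in general.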

3) Develop the form based on the competencies required by the job. Build the form around the job role, the competencies (the predictors of success), and the associated KSAs (knowledge, skills, and attributes).

4) Develop a definition list that accompanies the monitoring form. We commonly call such a document a nuance list. Once the competencies and the KSAs are listed on the monitoring form, there is not much physical room left to define the nuances of the skill for your environment. For example, the form may refer to “proper company greeting.” But what does that mean? What is the proper company greeting? How does everyone know what you expect here?

The definition list is used to define the nuances of the KSAs for your environment. Under “proper company greeting,” I might put something that is scripted (e.g., “Thank you for calling Response Design. This is Kay. How may I help you?”). Or I may have a list of items the person must say (e.g., “identifies company, identifies self, asks how he can help the customer”).

5) Make space to record comments. We have a rule at Response Design: You cannot mark that a person is struggling with a skill unless you also write a comment about what you heard on the call that demonstrated that struggle.

Imagine that someone came to you and told you that you were struggling with interrupting the customer. What’s the first question you might ask? I know the first question I always ask is, “Can you give me an example?”

Your comment on the call monitoring form should be written so that the example is solid. It should contain the point in the call where you heard it (“Remember when the customer was giving you his account number?”), what you heard (“You didn’t wait for him to finish. By that time you had pulled up his account and were ready to move on. You talked over him by asking the next question”), and the possible downside if the situation were to continue (“The customer might perceive that you were trying to hurry him off the phone. That could result in a dissatisfied customer.”).

6) Think through the implication of the various ways to score a call. There is no one right way to score a call. There are multiple options. You might score a call with what we call an “on/off switch.” The “on” switch means the skill was demonstrated. The “off” switch means the skill was not demonstrated. You can also score using a scale (1-5, poor to excellent, etc.). Each way of scoring has its own unique upside and downside.

With the on/off switch, you listen for a consistent demonstration of the skill throughout the call. “On” means the skill was demonstrated consistently throughout the entire call, and the person would not benefit from any coaching. “Off” means the demonstration was inconsistent, and the person would benefit from coaching.

Many people struggle with the on/off switch because it doesn’t seem fair to lose all the points for a skill if the agent did it right some of the time. That’s why many people prefer to score using the scale. At least with a scale you can give partial credit.

The upside to the on/off switch is that the consistency of scoring from monitor to monitor and call to call is usually much higher. It is much easier for a monitor to decide between “consistent” and “inconsistent” skill demonstration than to place a performance somewhere in the gray zones of a 1, 2, 3, 4, or 5.
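To see the trade-off concretely, here is a minimal sketch in Python (the point value, ratings, and situation are hypothetical) of how the same performance converts to points under each option:

def score_on_off(demonstrated_consistently, max_points):
    # On/off switch: the skill earns either full credit or none.
    return max_points if demonstrated_consistently else 0

def score_scale(rating, max_points):
    # 1-5 scale: partial credit in proportion to the rating.
    return max_points * rating / 5

# An agent who paraphrased well for most of the call but interrupted twice:
print(score_on_off(False, 10))  # 0 -- loses all 10 points, but the judgment is easy to make consistently
print(score_scale(3, 10))       # 6.0 -- partial credit, but was it really a 3 or a 4?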

7) Think about the messages that the words are conveying.
What do you call your call monitoring form? What do you call the scoring options? Believe it or not, this wording is critical to conveying a message about your contact center culture and your management team’s perspective on assessment.

Be creative. Think about what you want to convey about the process. What is the difference in the message if the form is called the “Customer WOW Form” vs. the “Call Assessment Form”? How about labeling the on/off options “needs coaching” and “demonstrated excellence” instead of “yes” and “no”?

8) Weigh the components.
Certain competencies (along with their related KSAs) contribute more or less to the quality of the interaction with the customer. These differences should be reflected in the point value assigned to each component. When you compare the closing of the call to listening skills, which seems to contribute more to the quality of the call? However you answered, that competency should have a higher weighting.
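As a simple illustration, here is a minimal sketch in Python (the components, weights, and scores are hypothetical, not recommendations) of how weighted components roll up into an overall call score:

# Hypothetical weights, reflecting how much each component contributes to call quality.
weights = {"listening skills": 0.40, "product knowledge": 0.35, "call closing": 0.25}

# One monitored call, with each component scored as a fraction of its available points.
scores = {"listening skills": 0.50, "product knowledge": 1.00, "call closing": 1.00}

overall = sum(weights[c] * scores[c] for c in weights)
print(f"{overall:.0%}")  # 80% -- weak listening drags the total down more than
                         # a weak closing would, because listening carries the larger weight.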

Kathryn Jackson is cofounder of Ocean City, NJ-based contact center consultancy Response Design Corp.