Contact centers are data-rich environments, with detailed information available from a multitude of systems on every minute step of each interaction’s progress. In fact, the vast quantity of available data is one of the industry’s key challenges, since it is often quite difficult to link the various data elements together to get a coherent picture of how well centers are being managed.
In recent years, much interest has focused on the adoption of business intelligence (BI) tools to improve the management of contact centers, and provide deeper insight into how these complex systems work (or sometimes don’t work!). But it’s important to realize that BI tools are only as useful as the domain-specific metrics they are fed, and the expertise behind choosing them. That is, unless one is measuring – and analyzing – the most relevant metrics, BI tools will yield little useful insight.
BI work often evolves into a data integration exercise, with the unfortunate result that business users develop requirements that technology providers then duly satisfy, even though those requirements are often based on an erroneous understanding of what some of the data means. It is a chicken-and-egg situation – those who deal with the data on a daily basis don’t know what should be measured, and those who write the requirements usually frame them in terms of already-understood ideas, such as queuing theory.
As an example of this phenomenon, consider the well-known service level (SL) metric. This is usually measured as the percentage of inbound calls that are answered within a specified time (a common SL target is 80:20, which means 80% of calls answered within 20 seconds). This is an easily measured metric, and many treat it as a proxy for overall service quality. The first objection is that it is, in fact, a poor proxy for service quality.
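The 80:20 calculation itself is trivial, which is part of its appeal. A minimal sketch follows; the record fields (answer_delay_sec, abandoned) are illustrative assumptions, not any particular ACD’s reporting schema, and note that even here a definitional choice lurks: whether abandoned calls belong in the denominator varies from vendor to vendor.

```python
# Hypothetical sketch of the service level (SL) calculation.
# Field names are illustrative; real ACD reports vary, including in
# whether abandoned calls count against the denominator.

def service_level(calls, threshold_sec=20):
    """Percentage of answered calls picked up within threshold_sec."""
    answered = [c for c in calls if not c["abandoned"]]
    if not answered:
        return 0.0
    within = sum(1 for c in answered if c["answer_delay_sec"] <= threshold_sec)
    return 100.0 * within / len(answered)

calls = [
    {"answer_delay_sec": 5,  "abandoned": False},
    {"answer_delay_sec": 12, "abandoned": False},
    {"answer_delay_sec": 45, "abandoned": False},
    {"answer_delay_sec": 30, "abandoned": True},   # caller hung up in queue
]
print(service_level(calls))  # 2 of 3 answered calls were within 20 seconds
```

That a metric this easy to compute says nothing about what happened after the call was answered is precisely the objection raised above.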
While excessive delays clearly can annoy consumers and lead to calls being abandoned and to the perception that service quality is poor, answering calls instantly does not guarantee a positive service experience. For example, most people would prefer to wait a moderate time for the right person, rather than being answered instantly by an unskilled person from a foreign culture.
More damning than its poor performance as a quality proxy, though, is the fact that using service level as a key operational metric can lead to suboptimal and costly behaviors. The point of measuring business processes (or any processes, for that matter) is to understand how they work and to detect when they are working poorly so that corrective action can take place. Therefore, to assess the utility of a metric, it’s important to examine what behaviors are caused when a metric deviates from its expected value. The case against service level as a key metric is particularly strong when looked at this way.
Another metric emerging as a replacement for service level is first call resolution (FCR), which is usually defined as the percentage of incoming calls that are handled to completion by the first agent who receives the call. While this seems sensible (no one likes to be bounced around from person to person), it has even more problems than service level.
To begin with, it is by no means clear that FCR is even the right goal for every business. In some cases, it would be more cost effective to have a staff of first-level agents who handle routine inquiries and who also perform triage on the more complex ones, making sure they go to the right second-level agent. This is common in technical support and help desk applications.
Also, it may be desirable for a call to be skillfully moved from one agent to another to better handle compound inquiries. And finally, if managed well, it is often desirable to take care of the initial part of an interaction and then proactively contact the customer after a short period of offline research to complete the task. In these cases, it’s essential that the customer is not made to call back out of frustration for lack of corporate follow-through. Besides, even in those cases where FCR is a valid goal, it is always difficult to measure, and easy to undermine, as a metric.
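To see why FCR is both difficult to measure and easy to undermine, consider how it is typically estimated: a call counts as resolved only if it was not transferred and the same customer did not call back within some lookback window. The sketch below is a hypothetical illustration; the field names and the seven-day window are assumptions, and note how a well-managed proactive follow-up (as described above) would be wrongly counted as a failure unless the records were tagged to distinguish it.

```python
# Hypothetical FCR estimate from call records. Field names (customer,
# start, transferred) and the 7-day callback window are illustrative
# assumptions, not a standard definition.
from datetime import datetime, timedelta

def first_call_resolution(calls, callback_window=timedelta(days=7)):
    """Percentage of calls not transferred and not followed by a callback."""
    calls = sorted(calls, key=lambda c: c["start"])
    resolved = 0
    for i, call in enumerate(calls):
        if call["transferred"]:
            continue
        # A later call from the same customer within the window is taken
        # as evidence the issue was not resolved -- even if it was really
        # a planned follow-up, which is one way this metric misleads.
        callback = any(
            later["customer"] == call["customer"]
            and later["start"] - call["start"] <= callback_window
            for later in calls[i + 1:]
        )
        if not callback:
            resolved += 1
    return 100.0 * resolved / len(calls)

records = [
    {"customer": "A", "start": datetime(2024, 1, 1), "transferred": False},
    {"customer": "A", "start": datetime(2024, 1, 3), "transferred": False},
    {"customer": "B", "start": datetime(2024, 1, 1), "transferred": True},
]
print(first_call_resolution(records))  # only A's second call counts as resolved
```

Every parameter here (the window length, what counts as the "same" issue, how transfers are flagged) is a judgment call, which is why two centers reporting the same FCR number may be measuring quite different things.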
There are no easy answers, because each contact center is different, and all are complex. But the situation is far from hopeless. The data is available and is amenable to analysis, and the opportunities for improved financial returns provide a powerful incentive.
When contemplating technological change, start with confidence-building steps that develop skill with the technology and that deliver immediate, predictable returns with low risk. Be sure to get a solid, cradle-to-grave data collection utility in place so that each interaction is captured and can be analyzed. Keep in mind, you will seldom need to analyze individual interactions, but sometimes when the statistics surprise you, it is helpful to look at a call’s progress to understand what is happening.
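The cradle-to-grave idea can be sketched simply: give each interaction a persistent ID, have every system that touches it append a timestamped event, and reconstruct the full journey on demand. The event names and systems below are illustrative assumptions, not any particular vendor’s log format.

```python
# Hypothetical cradle-to-grave event log: every system that touches an
# interaction appends a timestamped event under a shared interaction ID,
# so a single call's progress can be reconstructed when the statistics
# surprise you.
from collections import defaultdict

events = [
    ("call-42", 0,   "IVR",   "entered"),
    ("call-42", 18,  "ACD",   "queued"),
    ("call-42", 55,  "ACD",   "answered by agent 7"),
    ("call-42", 300, "agent", "transferred to tier 2"),
    ("call-42", 600, "agent", "completed"),
]

def timeline(events, interaction_id):
    """Return one interaction's events in time order."""
    by_id = defaultdict(list)
    for iid, ts, system, event in events:
        by_id[iid].append((ts, system, event))
    return sorted(by_id[interaction_id])

for ts, system, event in timeline(events, "call-42"):
    print(f"{ts:4d}s  {system:>5}  {event}")
```

The aggregate statistics come from rolling these records up; the drill-down into a single call is what the per-interaction capture buys you.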
Then, in parallel, begin studying your data. Hire or develop someone with curiosity and good statistical and troubleshooting skills to “live with the data.” Over time, you will be able to make decisions that are based on hard data and sound analysis, and you should be able to achieve significant improvements in productivity each year!
Brian Galvin is vice president, product management for Daly City, CA-based Genesys Telecommunications Laboratories.