The traditional way to measure customer service metrics is to track averages, such as average reply time or average resolution time. But Geckoboard’s Customer Support Experience Report found that some companies measure first response time as a service level agreement (SLA) metric rather than an average.
SLA metrics are internal metrics that customer support teams use to set targets for the proportion of customers who will receive a predefined minimum level of service. For some core support metrics, customer service teams are better off tracking SLAs rather than averages, because SLAs better reflect what you really care about – providing consistently high-quality service for your customers.
But which metrics should you choose, and how do you track them? In this post, we’ll look at how support teams can choose SLA metrics and use them to focus the whole team around delivering consistently high-quality customer service.
Choose a duration-focused customer support metric
There are several different types of customer support KPIs. Some focus on the quality of service, while others emphasize support volume, and others center on the duration of support interactions.
Duration-focused support KPIs are the best type to track as SLA metrics, because outlier data can skew their averages, giving you a false impression of the quality of service your customers are receiving.
To illustrate this, here’s a small example data set of 10 response times:
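(The original data set isn't reproduced here. The values below are a hypothetical set of ten response times, in minutes, constructed to be consistent with the figures discussed in this section – not the original data. A quick check of the mean and median in Python:)

```python
import statistics

# Hypothetical response times in minutes, chosen to match the statistics
# discussed in the text; the final ticket (1 h 12 m = 72 min) is the outlier.
response_times = [482, 400, 470, 415, 452, 430, 486, 410, 460, 72]

mean_all = statistics.mean(response_times)        # 407.7 min ≈ 6 h 48 m
mean_first_nine = statistics.mean(response_times[:9])  # 445 min = 7 h 25 m
median_all = statistics.median(response_times)    # 441 min = 7 h 21 m

print(f"Mean (all 10):     {mean_all:.1f} min")
print(f"Mean (first nine): {mean_first_nine:.1f} min")
print(f"Median (all 10):   {median_all:.1f} min")
```

One fast outlier pulls the mean down by 37 minutes while most customers still wait longer than the average suggests.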
The average (mean) response time for this data set is 6 hours 48 minutes, while the average (mean) response time for the first nine items on the list is 7 hours 25 minutes. The final item on the list skews the data so much that it reduces your average response time by 37 minutes.
If you had a target response time of 7 hours, you’d think your support team had achieved their target, though in reality 60% of your customers waited longer than that to get a response – with 20% of them waiting over 8 hours.
Another problem with tracking averages is that they can be artificially improved without actually improving your overall support experience. For example, if you responded to that last ticket in 1 hour rather than 1 hour 12 minutes, your average (mean) response time drops to 6 hours 46 minutes. But that improvement has come from speeding up responses to a ticket that was already being handled quickly – not by improving your overall ticket experience.
To combat these problems, some support teams track the median instead. The median response time for this data set is 7 hours 21 minutes, meaning 50% of customers received a response within that time frame.
The median is less skewed by outliers, but it still hides a lot of detail. For example, if the two customers waiting over 8 hours for a response had to wait an extra 10 minutes each, the mean response time increases to 6 hours 50 minutes, but the median remains unchanged.
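This can be checked directly. Using the same hypothetical data set as above, adding 10 minutes to each of the two slowest tickets moves the mean but leaves the median untouched – a minimal sketch:

```python
import statistics

# Hypothetical data consistent with the text's figures (minutes).
response_times = [482, 400, 470, 415, 452, 430, 486, 410, 460, 72]

# Delay the two tickets already over 8 hours (480 min) by 10 minutes each.
delayed = [t + 10 if t > 480 else t for t in response_times]

print(statistics.mean(response_times), statistics.mean(delayed))
# mean rises by 2 minutes (407.7 -> 409.7)
print(statistics.median(response_times), statistics.median(delayed))
# median unchanged: 441 in both cases
```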
An SLA is a clearer indicator of whether customers are receiving poor service. You set a baseline for your overall level of service and provide that level of service for as many customers as possible. In the case of response time, this means you minimize the number of customers who wait an unreasonable amount of time.
Define the level of service you want to provide
Once you’ve chosen the support metric to track as an SLA, you need to look at the current level of service you’re providing and determine what level you want to reach.
Plot your existing data on a line chart or histogram to see the distribution and variation within your data set. Because outliers in your data can skew averages, you’ll have to look closely at your support performance to understand the current customer experience. For example, do most of your customers experience a similar response time, or is there a large variation in the data? Are some of your customers experiencing really slow response times that are affecting your CSAT?
From there, you can decide what you want to focus on. For example, is it more important for your support team to eliminate the slowest responses by reducing the number of customers waiting more than 8 hours for a response? Or is it more important to improve the overall experience for the majority of your base by increasing the proportion of customers who receive a response in under 7 hours?
Use your current performance data to set a baseline target for your response time. What percentage of your customers currently receive that level of service? Using the data set above, 40% of customers receive a response in less than 7 hours, and 50% receive a response in under 7 hours 30 minutes. Because half or fewer of your customers currently receive this level of service, these time frames would be unrealistic targets for your support team at this point.
However, 80% of customers received a response in less than 8 hours. This time frame is a more realistic target because the majority of customers already receive that service, but you still have plenty of room to improve.
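Checking candidate thresholds against your own data is straightforward. A minimal sketch, again using the hypothetical data set from earlier (the thresholds here are the ones discussed in the text):

```python
# Hypothetical response times in minutes (consistent with the text's figures).
response_times = [482, 400, 470, 415, 452, 430, 486, 410, 460, 72]

def share_within(times, threshold_minutes):
    """Fraction of tickets answered in under the given threshold."""
    return sum(t < threshold_minutes for t in times) / len(times)

for label, threshold in [("7 h", 420), ("7 h 30 m", 450), ("8 h", 480)]:
    print(f"Under {label}: {share_within(response_times, threshold):.0%}")
```

With this data, 40% of tickets come in under 7 hours, 50% under 7 hours 30 minutes, and 80% under 8 hours – making the 8-hour mark the realistic starting baseline.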
Track the percentage of customers who receive that level of service
Determine the proportion of your customers who should receive your benchmark level of service. This target will help you provide a consistent experience for all your customers rather than artificially boosting your averages by speeding up already-fast responses.
You should aim to provide almost all of your customers with that level of service – 90% and above is a good target to aim for. Help your support team improve over time by setting different SLAs for the same metric – one as your baseline level of service, and one to aim for as exceptional customer service:
- Baseline level of service: Respond to 95% of customers in <8 hours
- Exceptional customer service: Respond to 50% of customers in <7 hours
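A two-tier setup like this is easy to monitor programmatically. A hedged sketch, using the hypothetical data set from earlier and the two SLA tiers above:

```python
# Hypothetical response times in minutes (consistent with the text's figures).
response_times = [482, 400, 470, 415, 452, 430, 486, 410, 460, 72]

# (name, max response time in minutes, target share of customers)
slas = [
    ("Baseline (<8 h)", 480, 0.95),
    ("Exceptional (<7 h)", 420, 0.50),
]

for name, limit, target in slas:
    achieved = sum(t < limit for t in response_times) / len(response_times)
    status = "met" if achieved >= target else "missed"
    print(f"{name}: {achieved:.0%} achieved vs. {target:.0%} target -> {status}")
```

With this data both targets are missed (80% vs. 95% baseline, 40% vs. 50% exceptional), which tells the team exactly how far each tier is from being hit.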
Tracking multiple SLAs for the same support metric will help you better understand the level of customer service you’re providing and help you set targets to improve over time. Having a baseline SLA metric minimizes the number of customers receiving poor-quality customer service from you. It means your whole team is focused on hitting that target for all your customers rather than optimizing the support experience for a select few. And having a second tier of “exceptional” service gives your team a stretch goal to work toward: to delight customers with even speedier responses.
We recommend not targeting 100% of customers because there will always be factors outside of your control that hamper service, like an unexpected surge in support tickets. If you’re achieving close to 100%, you may be better off setting a more ambitious response time target instead.
Raise targets as you improve
If you’re consistently achieving your SLA targets, consider raising your benchmarks. For example, you might adjust the level of service your team should provide, the proportion of customers who receive this service, or both.
It may be tempting to significantly increase your SLA targets if your team’s performance is strong – for example, cutting your baseline response time target from <8 hours to <7 hours. However, we recommend starting with small adjustments so that these new targets feel achievable for your support team.
While you may want to get your response time down to <7 hours, it's often better to break that change into smaller incremental improvements than to target the whole jump in one go.
When setting new targets, remember to take into account other factors that might affect your team’s workload and performance, such as seasonal demand, increased or decreased team availability, and new product releases.
Averages vs. SLA metrics: What is the best way to measure support performance?
While SLAs are great for making sure your service is consistent, averages can still be helpful for some support metrics. A KPI like Customer Satisfaction Score has a pre-defined range for responses, so it can’t be skewed as easily as duration-based metrics. So while you don’t want to track every support metric as an SLA, SLAs are a great fit for duration-focused KPIs and can help you deliver a higher quality of service to more customers than if you’re tracking those metrics as averages.