So you have a Customer Satisfaction Score (CSAT) benchmark. How do you figure out if the benchmark you’re using is realistic? Benchmarking can be a very helpful exercise but, if done wrong, can hinder team performance by setting unrealistic goals or expectations.

Before you start benchmarking

Every Customer Support team is different, so before delving into benchmarking it's important to consider how you work by asking the following questions.

  1. What is your reporting period? Are you measuring your CSAT score for the past month, the past six months, or some other timeframe?
  2. What is your ticket volume? Over the course of a day or a week, how many tickets do you receive?
  3. What size is your team? Is it set to grow or contract in the near future?
  4. How complex is your average request? Are these the types of queries that can be solved with FAQs or do they need further research?

These key areas will ultimately have an impact on your overall score and therefore your goals.
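Before comparing scores, it's worth pinning down the calculation itself. CSAT is commonly reported as the percentage of survey responses that count as "satisfied" out of all responses in the reporting period. A minimal sketch in Python, assuming a 1–5 rating scale where 4 and 5 count as satisfied (adjust the threshold to match your survey):

```python
def csat_score(ratings, satisfied_threshold=4):
    """Percentage of responses rated at or above the threshold.

    Assumes a 1-5 scale where 4 ("satisfied") and 5 ("very satisfied")
    count as positive responses.
    """
    if not ratings:
        return None  # no responses in the period
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return round(100 * satisfied / len(ratings), 1)

# e.g. one month's survey responses
print(csat_score([5, 4, 5, 3, 5, 2, 4, 5, 5, 4]))  # 80.0
```

Note that reporting period and ticket volume (questions 1 and 2 above) directly affect this number: a low-volume month means a handful of responses can swing the score dramatically.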

With the above in mind, we’re going to take a look at a number of ways to find a realistic CSAT benchmark for your Support team, highlighting the strengths and flaws of each option.

Internal benchmarking

Internal benchmarking uses data from within your organization. This usually involves either looking back at historical data for your team, or drawing comparisons between similar teams or different regions operating in your company.

When determining your own internal benchmarks you’ll need to decide which timeframes and averages are most relevant. If your business experiences seasonal spikes in customer contact (common in B2C, e.g. Christmas or Halloween), CSAT can suffer if you aren’t prepared. Comparing like-for-like against times of unexpected demand isn’t recommended, as it isn’t a true reflection of what your team can achieve. Choosing appropriate timeframes for comparison is therefore crucial for setting viable goals.
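One way to put this into practice is to compute your internal benchmark from historical monthly scores while excluding the months you know were distorted by seasonal demand. A hedged sketch (the month labels, scores, and choice of seasonal months are all illustrative):

```python
from statistics import mean

# Illustrative monthly CSAT history; values are made up.
monthly_csat = {
    "Jan": 88.0, "Feb": 89.5, "Mar": 90.1, "Apr": 89.0,
    "May": 90.5, "Jun": 91.0, "Jul": 90.2, "Aug": 89.8,
    "Sep": 90.0, "Oct": 87.0, "Nov": 84.5, "Dec": 83.0,
}

# Months with known seasonal spikes in contact volume (e.g. holidays).
seasonal_months = {"Oct", "Nov", "Dec"}

# Benchmark from "normal" months only.
baseline = mean(score for month, score in monthly_csat.items()
                if month not in seasonal_months)
print(round(baseline, 1))  # 89.8
```

Keeping the seasonal months out of the baseline stops a predictable holiday dip from dragging down the target your team is judged against for the rest of the year.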

Advantages of internal benchmarks

  • Internal benchmarking is essential for setting your own internal targets. Looking at how you’re currently performing is very important when setting realistic goals for the future.
  • Historical data from within your company is likely to be far more reliable than external benchmarks - you’ll never find another business quite like your own. There are too many variables at play: company size, budget, industry, team structure, goals, stage of the business and so on.
  • Internal benchmarks are also very useful if you have multiple teams where direct comparisons are relevant.

Disadvantages of internal benchmarks

  • They don’t inform you of how you’re performing compared to other companies in your industry. It might seem that you are performing well, but without wider context you could be falling short of what your customers demand or your competitors are delivering.
  • If you don’t have much historical data then it can be hard to work out an accurate benchmark.
  • For companies who are undergoing rapid or significant change, historical comparisons are less accurate.

External benchmarking

External benchmarks are gathered from sources outside your own organization. This could be as simple as speaking to similar companies to find out how they are performing or it could be using publicly available industry benchmarks.

Reaching out to your network to find out what similar organizations achieve is a great way of finding useful external benchmarks. Bear in mind that the more similar an organization is to yours, the more relevant the comparison will be. So try to speak to teams of a comparable size, with similar products/services in the same industry. A good place to do this for CSAT is within the Support Driven community - where a huge network of Support folks hang out and share advice.

Publicly available industry benchmarks are normally collected through external research across a large number of organizations, producing overall industry averages that companies can compare their own performance against.

For Customer Support benchmarks, Zendesk does a good job of collecting data across a huge range of industries. Below are Zendesk’s benchmarks for CSAT, First Response Time (FRT), average call handling time, average request volumes per month and the average number of help center articles companies within each industry have. This information is based on average monthly data from their 486 million customers.

[Table: Customer Support industry benchmarks from Zendesk's Benchmarks, with columns including Avg. call handling time, Request volume per month, and Help center articles, covering industries including Entertainment & Gaming, Financial Services, Government & Not for Profits, IT & Consulting, Marketing & Advertising, Media & Telecommunications, Professional & Business Support Services, and Social Media.]

Advantages of external benchmarks

  • Regardless of the limitations associated with industry benchmarks, they still provide an insight into how other teams like yours are performing. This gives an understanding of what your customers will expect when they interact with your support.
  • If you’re starting a Customer Support team from scratch or have no internal data to work with, it can be helpful to see what average CSAT scores are being seen across your industry to get a rough idea of what’s ‘normal’.

Disadvantages of external benchmarks

  • Benchmarks are most useful when comparing similar companies operating in similar markets with similar business models. Broad industry categories like “Technology” contain vastly different companies. Even narrower categories like “SaaS” aren’t much better - how comparable is a B2B enterprise product to a B2C entertainment product?
  • Industry benchmarks collected by specific organizations are normally only taking data from within their own platforms, such as Zendesk above. How would CSAT scores vary if you compared this data to Freshdesk, HelpScout or Intercom?
  • Every company has its own intricacies: organizations prioritize the customer experience in different ways based on strategy, even where they appear relatively similar from the outside.
  • The way metrics are measured can vary vastly based on each business’ interpretation of them. In the case of CSAT, the way it is reported can vary based on KPIs and the tools used. Taking a benchmark literally could mean setting unachievable targets that would only demotivate your team or working to targets that don’t push agents to reach their full potential (because they’re too easily met).
  • A problem with any pool of data is that an average figure can be skewed by outliers that fall well outside the general pattern.
  • Without wider context, an industry benchmark is a snapshot view of a far more complicated situation. It doesn’t take into account your company’s circumstances, such as the age of the business, number of agents, processes, individual products/services, or seasonal fluctuations; it simply provides an average.

Considerations when using benchmarks to set your CSAT goals

Benchmarks provide a helpful pointer as to how things are going or an idea of how they could be going, but there are numerous things that can impact how you fare in reality. It’s important to consider other factors that could change the way you set your CSAT goal.

If your team is continually improving the average CSAT score then perhaps you could consider setting the goal to continue the incremental improvement. Rather than working towards a specific number, everyone pushes to beat their previous best e.g. aim to improve CSAT by two percentage points every month, as opposed to aiming for 90%. On the flip side of this, if your CSAT score is in continual decline, it could be more realistic to set the goal of stopping the decline before thinking about improving it.
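The incremental approach above can be sketched as a simple calculation: next month's goal beats the current score by a fixed step, capped at a realistic ceiling rather than chasing 100% (the step size and ceiling here are assumptions to adjust for your team):

```python
def next_csat_goal(current_score, step=2.0, ceiling=95.0):
    """Next period's CSAT goal: current score plus a fixed improvement
    step, capped at a ceiling that acknowledges some negative scores
    can never be eliminated."""
    return min(current_score + step, ceiling)

print(next_csat_goal(86.0))  # 88.0
print(next_csat_goal(94.5))  # 95.0 (capped)
```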

More often than not, CSAT can unveil the need for training. If you’re seeing a dip in scores, it might be that the team needs training on new product features or processes, or could need some help with soft skills (e.g. learning how to tactfully say no). Klaus is a great tool to help understand where issues lie and support those in need of some additional training.

However, there are many things outside the control of the Support team that can have a considerable impact on your CSAT score. If you’ve noticed patterns around particular issues, e.g. a specific bug with your product or service that’s causing lots of unhappy customers, it’s worth considering what your score would look like if the issue no longer existed. Such an issue may be dragging down your score and demotivating team members who are unable to achieve their goal because of something they can’t change.

If you’ve seen your metrics reflect issues that can only be solved with more resources, you could use your data to demonstrate that extra help is the only way to shift your metrics. For example, you may have seen a direct correlation between FRT and CSAT. In this case, you might estimate that introducing a chatbot to handle common queries would cut response times and improve CSAT by 5%. This highlights how your goal could realistically be shifted with increased resources.
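To check for a relationship like the FRT-CSAT one described above, a Pearson correlation over weekly or monthly figures is a quick first test. A sketch with illustrative (made-up) data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative weekly figures: first response time (hrs) vs CSAT (%).
frt_hours = [2.0, 3.5, 5.0, 4.0, 6.5, 3.0]
csat_pct  = [92.0, 89.5, 86.0, 88.0, 83.5, 90.0]

r = pearson(frt_hours, csat_pct)
print(round(r, 2))  # strongly negative: slower responses, lower CSAT
```

A strongly negative coefficient here would support the case that investing in faster first responses (e.g. via a chatbot) should lift CSAT; correlation alone doesn't prove causation, but it's a useful starting point for the conversation about resources.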

Finally, another more controversial consideration is when higher CSAT scores are being achieved through over-delivery. By this we mean cases where complaints lead to a response from the Support team that is not necessarily sustainable. Take a shipping delay: say the customer receives free next-day delivery in response to the issue. It works and the customer leaves a great score, but in the long run it’s not sustainable for the business to offer this in every case. Moreover, the customer could come to expect this, and when they don’t receive the same response it could result in a negative score.

Checklist for setting a realistic CSAT score

Review your internal data

  • Check historic CSAT scores
  • Look at churn rate (subscription products) - do you notice any correlation with your CSAT scores?
  • Use review platforms to get a feel for customer sentiment elsewhere (e.g. Trustpilot, Google, G2Crowd)

Look at external benchmark reports to get a rough idea of what ‘normal’ is

  • Compare your existing performance to external benchmarks
  • Determine how comparable external data is based on your company. Is it calculated in the same way? Are there vast differences in the companies being measured?
  • Get insights from your network, talk to others in similar companies

Use your judgement to set appropriate goals

  • If your CSAT has been improving over time then you may just want to maintain the existing score, continue growth or increase the rate
  • If your CSAT has been in decline you may want to stabilize or return to improving
  • Review the direct causes for your negative CSAT scores. This may help identify projects to methodically improve your CSAT - could you set goals based on that?
  • If you see a trend of regular negative CSAT scores attached to a particular issue that the Support team cannot change, raise this with the rest of the company. Highlight that your benchmark is lowered because of outlying negative scores related to this issue.  

Already nailing CSAT, what next?

If your team is already achieving a very high CSAT score then you may find that it’s no longer realistic to set a goal to go higher. This is something we hear from a lot of our customers, who realize that there will always be negative scores you can’t completely eliminate. Geckoboard customers who regularly score over 90% CSAT often shift to focus on other goals besides increasing their CSAT score, such as maintaining a high level of service, increasing response rates to satisfaction surveys, or focusing on other complementary metrics such as Net Promoter Score (NPS) or Customer Effort Score (CES).

Sign up for a free trial of Geckoboard to highlight your most important real-time Customer Support metrics on a clear, easy to understand dashboard.