You’ve spent hours thinking about the perfect set of metrics for your team to focus on improving. They’re beautifully visualized on a big screen dashboard in the office. But then, the tumbleweed. Nothing’s happening. The metrics aren’t moving, and your team start ignoring them.

We know this feeling because we’ve experienced it. It sucks. We want to help you avoid this sorrow. That’s why we’ve compiled these common reasons why metrics don’t work and how to fix them.

The metrics don’t align with the work being done

First, always check that the work you’re doing is work that will improve the metrics you’re tracking.

For example, let’s say you’re focused on improving the conversion rate of visitors to customers. If the metric you’re tracking is growth in unique visitors to your website, your conversion work won’t significantly move that number. A disconnect between the work being done and the metric makes the metric pointless.

Solution: To align work executed and metrics tracked, start with a goal-setting framework such as Objectives and Key Results (OKRs) or V2MOM. They help connect the measurable outcomes you want to achieve with the work that needs to be done to get there. Frameworks can be useful, but sometimes a simple sanity check is enough.

The metric fluctuates

When metrics are constantly fluctuating, it’s hard to spot a trend at all. Metrics can vary significantly for a couple of reasons.

  1. The numbers are too small. For example, when you’ve just launched a new website, you’ll likely have very low volumes of traffic. If you track website traffic growth on a daily basis, you’ll see lots of ups and downs, and spotting any sort of trend in the daily numbers will be very hard.
  2. Outside factors are impacting them. Let’s say you’re working on optimizing the shopping cart experience for an ecommerce site. You’re tracking the volume of sales and the average order value. However, the time of year and the mix of visitors will probably massively impact those two metrics and cause them to fluctuate regularly, making it hard to understand the impact you’re having.

Solution: When metrics are volatile, it’s best to use larger time spans and/or rolling averages where you can. For example, with the new website above you might look at website visitors over the last 7 or 28 days compared to the previous period to get a better indication of whether you’re growing traffic over time.
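Here’s a minimal sketch of both techniques in Python with pandas. The traffic numbers and the “visitors” column name are invented purely for illustration; swap in your own data.

```python
# Minimal sketch: smoothing a volatile daily metric with a rolling average
# and a week-over-week comparison. All numbers here are invented.
import pandas as pd

daily = pd.DataFrame(
    {"visitors": [12, 30, 8, 25, 40, 5, 33, 28, 45, 19, 38, 52, 31, 60]},
    index=pd.date_range("2024-01-01", periods=14, freq="D"),
)

# A 7-day rolling average irons out day-to-day noise.
daily["visitors_7d_avg"] = daily["visitors"].rolling(window=7).mean()

# Compare the latest 7 days with the 7 days before to see the trend.
last_week = daily["visitors"].iloc[-7:].sum()
prior_week = daily["visitors"].iloc[-14:-7].sum()
print(f"Week-over-week change: {(last_week - prior_week) / prior_week:+.1%}")
```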

In the shopping cart optimization scenario above, rates may serve you better than volumes. Tracking the shopping cart abandonment rate instead of raw sales volume strips out most of the external factors, like traffic swings, affecting the metric you’re tracking.
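To see why, here’s a tiny Python sketch with made-up cart numbers showing how the rate stays steady while the volume swings with traffic:

```python
# Minimal sketch: a rate is steadier than a volume when traffic swings.
# The cart counts below are made up purely for illustration.
days = {
    "quiet day": {"carts_started": 200, "carts_completed": 130},
    "busy day": {"carts_started": 1000, "carts_completed": 640},
}

for label, d in days.items():
    abandonment = 1 - d["carts_completed"] / d["carts_started"]
    print(f"{label}: {d['carts_completed']} sales, {abandonment:.0%} abandoned")

# Sales volume jumps ~5x with traffic, but abandonment only moves from
# 35% to 36% - changes in the rate mostly reflect your own work.
```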

You’re tracking lagging metrics

Everyone wants to see the impact of what they’re working on quickly. Consequently, if you’re trying to motivate your team with metrics in the short term, avoid metrics that lag too far behind the work.

For example, if a product and engineering team are adding a feature that they believe will increase paying subscribers, then focusing on the paying subscriber metric in the short term may not be effective. On a daily or weekly basis, while they’re building and iterating on the feature, the number of paying subscribers won’t move as a result of what they’re doing. Even after launch, the feature might take a long time to affect subscribers, so the metric still may not be worth tracking day to day.

Solution: Use leading indicators or proxy metrics that give you an earlier indication of progress towards the ultimate goal. In the example above, you might track a feature adoption metric while the feature is in beta to help you improve it. Only once the work is complete should you switch to measuring paid subscriber growth among users of the feature.
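As a sketch, a proxy like beta adoption can be as simple as a ratio. The function name and figures below are hypothetical:

```python
# Minimal sketch of a leading indicator: the share of beta users who have
# tried the new feature. Names and numbers are hypothetical.
def feature_adoption_rate(beta_users: int, adopters: int) -> float:
    """Share of beta users who used the feature at least once."""
    return adopters / beta_users if beta_users else 0.0

# Track this weekly while iterating; it moves long before subscriber counts do.
print(f"Adoption this week: {feature_adoption_rate(480, 120):.0%}")  # 25%
```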

Improving the metric can have a negative impact elsewhere

Metrics are often gamed: a team improves one metric to the detriment of the wider team or business.

Say you want your Customer Support team to reduce first response time to customers, on the hypothesis that faster responses create a better customer experience. To improve the metric, though, your Customer Support team begin firing off irrelevant replies and inane questions just to log a fast first response. Meanwhile, your customers become increasingly frustrated with the responses they’re getting.

Solution: This is where it’s important to have health metrics alongside the metrics you’re trying to improve. In the scenario above, a health metric of maintaining the current Customer Satisfaction (CSAT) score would push the team to improve first response time in a way that preserves response quality.
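Here’s a sketch of how a health metric might sit alongside the target metric in a simple weekly check. The thresholds and names are assumptions, not a prescription:

```python
# Minimal sketch: only celebrate the target metric (first response time)
# while the health metric (CSAT) stays above a floor. Values are invented.
BASELINE_CSAT = 4.5   # score to maintain (out of 5)
CSAT_TOLERANCE = 0.2  # how far CSAT may slip before we flag it

def review_support_week(first_response_mins: float, csat: float) -> str:
    if csat < BASELINE_CSAT - CSAT_TOLERANCE:
        return "Health metric breached: faster replies may be hurting quality"
    return f"First response time {first_response_mins:.0f} min, CSAT healthy"

print(review_support_week(first_response_mins=12, csat=4.6))
```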

The metric is impacted by multiple efforts

Oftentimes, the same metric is impacted by the work of multiple teams. This makes it hard to discern which work is driving changes in the metric.

For example, an ecommerce company wants to grow sales. One team works on improving the user experience, while another runs projects to attract more people to the site, both in the name of growing sales. A few scenarios could play out here:

  • Sales increase, and both teams claim credit for the increase. But did both projects actually move the number?
  • Sales stay the same, so both assume their efforts had no impact. But did the negative impact of one project cancel out the positive impact of the other?
  • Sales go down, so both teams assume they did badly. But did one project have a massively negative impact and the other have a slightly positive impact?

Solution: Having different teams focused on improving the same metric is great for business alignment. However, it’s also worth having narrower metrics to discern the impact of individual efforts on the overall number. The team focused on user experience could track the conversion rate from visitor to customer, while the team driving traffic could track unique visitor volumes; each then understands its specific contribution to sales.
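One way to picture this: sales decompose into each team’s narrower metric. The figures in this Python sketch are invented for illustration:

```python
# Minimal sketch: sales = visitors * conversion rate * average order value,
# so each team watches its own factor. All figures are invented.
before = {"visitors": 10_000, "conversion": 0.020, "aov": 50.0}
after = {"visitors": 12_000, "conversion": 0.018, "aov": 50.0}

def sales(p):
    return p["visitors"] * p["conversion"] * p["aov"]

print(f"Sales:      {sales(before):,.0f} -> {sales(after):,.0f}")
print(f"Visitors:   {before['visitors']:,} -> {after['visitors']:,}")
print(f"Conversion: {before['conversion']:.1%} -> {after['conversion']:.1%}")

# Overall sales rose, yet conversion actually fell: without the narrower
# metrics, the UX team would wrongly share credit for the traffic team's win.
```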

The metric is too complicated to understand

Often, because you need to narrow a metric to give it focus, it ends up complicated, with lots of caveats. Caveats make a metric hard to explain, and therefore difficult for your team to understand. That leads to them focusing on the wrong areas or drawing incorrect conclusions.

For example, to measure the impact of a newly launched feature, you might end up with a metric that has lots of caveats, like:

“Volume of marketing users who sign-up using integration X who complete actions X, Y and Z in their first 3 days of using the product.”

This is a lot to take in and understand, which could easily create confusion.

Solution: With metrics like this, it’s often easier to rebrand them with easily understandable names. Taking the example above, you might simply call it “New user feature completion”. That way, everyone can easily understand what the metric is telling them. It’s still worth capturing the metric’s caveats somewhere for those who need them.
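In code terms, this is like wrapping the caveats in one well-named definition. The field names in this sketch are hypothetical:

```python
# Minimal sketch: a memorable name up front, the caveats in the definition.
# All field names are hypothetical.
def new_user_feature_completion(users: list[dict]) -> float:
    """'New user feature completion'.

    Caveats live here, not in the name: counts marketing users who signed
    up via integration X and completed actions X, Y and Z within their
    first 3 days of using the product.
    """
    eligible = [u for u in users
                if u["segment"] == "marketing"
                and u["signed_up_via"] == "integration_x"]
    completed = [u for u in eligible if u["completed_xyz_in_3_days"]]
    return len(completed) / len(eligible) if eligible else 0.0
```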

It still takes time, iteration FTW

With all these lessons, it’s important to remember that reaching the right set of metrics takes time. The customers we see succeed with metrics go through a process of iteration: they start broad, then constantly adapt and refine what they track. So use this advice as a list of things to look out for as you iterate on and refine the metrics you track.