Houston, we have a problem.

You’re in a hurry. You need answers, NOW.

As the head of customer support, you notice a significant increase in ticket response time and you need to know what’s causing it.

Perhaps you’re the marketing manager for a SaaS company and see that signup numbers have dipped. The CEO and VP of product want to know the cause.

Maybe your first task of the day is figuring out why the average cart abandonment rate for your ecommerce business is increasing.

Or suppose the activation rate for your mobile app has decreased and you’re responsible for figuring out why.

Whatever the problem, figuring out what caused it and how to fix it is now your top priority. You already know data can help solve the problem, but you don’t have the time or expertise for a massively complex data investigation.

Good news! You don’t have to be a statistician or have unlimited time to solve your most pressing business problems using data. This no-nonsense data analysis guide will help you confidently draw conclusions and make smart, data-backed decisions.

Leaders need to focus on intelligently sifting through the massive amounts of available information to retrieve knowledge that is actionable, and to use effective processes and tools to make smart decisions.

Ronald van Loon, data scientist, speaker, author, and founder

What’s going on? No, really.

Before diving into any kind of data analysis, you should quickly validate the problem you’ve identified.

The single most critical principle I apply when analyzing data is a rule my high school math professor taught me at age 14: ‘Don’t write the first line of code until you can describe in plain English the problem you are attempting to solve!’ Simply put, if you can’t explain in plain English the business problem you are setting out to address, no amount of data analytics is ever going to solve it.

Dez Blanchfield, investor and data scientist.

Could this issue be a symptom of a bigger problem? For example, is the dip in signup numbers an indication of a website glitch? Could the increase in ticket response time actually be an indication of a deeper staffing problem? Look back over a wider period of time. Is this really an outlier? Answering these questions is particularly important if someone else has reported the problem.

On the flip side, is this issue a freak instance such as a reporting error (e.g. selecting the wrong date or a bug in the reporting software)? Have other related metrics dropped off similarly? If you notice downloads have fallen off a cliff but activations haven’t, perhaps downloads aren’t being captured properly? If a metric is counted in multiple systems (e.g. Google Analytics and your own event tracking), do both systems show the same drop off?

Make sure you’re looking at a metric that matters. Rates are a great example of this. You might notice the website conversion rate has dropped but if the raw number of signups hasn’t fallen, then this goes from an emergency to a mystery to uncover!

This quick, preliminary assessment answers two questions: is this actually a problem? And if yes, what’s the core problem here?

Think of this as the data analysis version of ‘a quick web search’ to confirm that yes, this is a problem worth looking into further.

Why might this be happening?

Now that you’ve verified the problem, it’s time to tackle the cause.

Look for quick wins

Similar to the standard tech advice to ‘turn your device off and back on,’ look for any obvious possible causes or answers to the problem. Have you double checked the source or report showing the problem? Are there any abnormal or one-off causes that immediately come to mind?

For example, perhaps the SSL certificate (which enables the encryption that secures customer data) for your ecommerce site has expired, resulting in a browser popup warning potential customers that their data is not secure and thus significantly increasing your cart abandonment rate.

If you’re able to identify a cause with a quick fix, awesome. Problem solved!

If not…

Ask around

Does this problem impact or involve other teams? If so, do they have any insight into possible cause(s)?

Even if there’s no obvious link between the problem and other teams, it might be worth a quick ask. For example, a marketing manager might ask customer support, “I’ve noticed a dip in signups. Can you think of any changes you’ve implemented over the last couple of weeks that could be related?”

Use any insights you gather here as you move on to step three.

Create hypotheses

A hypothesis is simply an educated guess that hasn’t been confirmed yet. Think of it as a possible explanation for the problem that needs to be tested.

A fact is a simple statement that everyone believes. It is innocent, unless found guilty. A hypothesis is a novel suggestion that no one wants to believe. It is guilty, until found effective.

Edward Teller, Hungarian-American theoretical physicist

This step doesn’t need to be daunting or laborious. It can be quite informal depending on the problem you’re trying to solve. The important part is to think of several assumptions about the cause of the problem and then jot down how you might prove/disprove each one.

Example Hypotheses

Going back to the scenarios mentioned at the beginning, here are some example hypotheses.

  • Hypotheses for a customer support problem: Ticket response time has increased because of…

    • an influx of service-related tickets which have longer response times compared to product-related tickets.
    • challenges specific to one call center.
    • an insufficient number of customer support team members, resulting in a backlog of tickets.
    • a recently launched product feature.
  • Hypotheses for a marketing problem: Signups have decreased because of…

    • public holidays in some regions.
    • recent changes to the marketing site (or site outages).
    • a website outage Monday morning causing an error during the signup process.
    • a decrease in the website conversion rate.
    • organic search rankings (for our product page) that dropped to the second page of search results.
  • Hypotheses for ecommerce problem: The average cart abandonment rate has increased because of…

    • an increase in the absolute number of people beginning a cart (placing items in the cart).
    • a recent change to part of the checkout process.
    • seasonality (e.g. holidays, school breaks, etc.).
    • an end to a promotion which results in more people abandoning their carts.
    • a specific product.
  • Hypotheses for mobile app problem: The activation rate has decreased because…

    • something has changed in the product causing the overall activation rate to drop.
    • a new (and different) group of people has begun trying the product.

It’s important to articulate several possible causes for the problem before analyzing your data. This helps prevent common data mistakes such as data dredging or cherry picking (we’ll discuss these in more detail later).

However, sometimes looking at the data can give rise to a new hypothesis which you would then want to test.

If you want to take a more formal approach to formulating your hypotheses, here are the technical aspects of a good hypothesis:

  • it involves an independent variable and a dependent variable,

  • it’s testable, and

  • it’s falsifiable.

The independent variable is the cause (the aspect that can be changed or controlled) and the dependent variable is the effect (the testable outcome).

Falsifiable simply means the hypothesis can be proven wrong. A helpful way of ensuring you have a falsifiable hypothesis is to drop your variables in this question: “If [independent variable/cause] occurs, will [dependent variable/effect] be true or false?”

Whether formal or informal, our hypotheses will then be proved or disproved using data in the next step. Our analysis will begin with the hypothesis we think is most likely and then will continue with the rest until we find the cause.

There are two possible outcomes: if the result confirms the hypothesis, then you’ve made a measurement. If the result is contrary to the hypothesis, then you’ve made a discovery.

Enrico Fermi, Italian-American physicist and the creator of the world’s first nuclear reactor

What does the data say? (Analyzing your data)

Armed with several possible causes, we can now take a look at our data. Here’s a simple data analysis process to test your hypotheses.

  1. Determine and segment relevant data. Based on your hypotheses, what data do you need to look at? What metrics will help you prove or disprove the possible causes? By isolating different pieces of data that may be related to or causing the problem, you’ll be able to more easily spot trends or anomalies (step 2).

    For example, you might segment the number of signups by country, channel, and web session duration to test your hypotheses for the marketing problem mentioned earlier.

  2. Eyeball your data. If you’ve been in your role for more than a couple months, you likely understand what ‘normal’ metrics look like for your team. Based on that knowledge and using common sense, what do you notice? Is there any aspect of the data that appears abnormal? Perhaps you see trial signups have a clear 20% drop off in Australia.

    If you haven’t yet established a baseline for ‘normal,’ use historical data as a starting point. For example, you might compare the signups this month to the signups in the same month last year. Or perhaps look at the trend of signups over the last 12 months for context.

  3. Assess the impact of an anomaly or trend (spotted in step 2). This is a sanity check to see if the trend/anomaly you’ve spotted is significant enough to explain the underlying problem. The technical term for this is ‘practical significance.’

    Continuing with our example of trial signups, some basic math helps us see that Australia only accounts for 5% of all signups. Since we’re trying to understand why there’s been a 10% overall drop in signups, it becomes clear that this dip in Australian signups isn’t the main cause.

Think of these steps as a fluid process - eyeballing your data (step 2) may send you back to segment additional data (step 1), and sometimes the cause is so obvious that a sanity check (step 3) isn’t necessary. Other times, assessing your data might surface a brand-new hypothesis to test.
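
If you like working in code, here’s a minimal sketch of these three steps in Python (using pandas). The file name, column names, and the eight-week baseline are illustrative assumptions, not prescriptions - the point is simply: segment, compare against a baseline, then weight each anomaly by its share of the total.

```python
import pandas as pd

# Hypothetical signup events: one row per signup with a date, country, and channel.
signups = pd.read_csv("signups.csv", parse_dates=["signup_date"])

# Step 1: segment the data - here, weekly signup counts per country.
weekly = (
    signups
    .groupby([pd.Grouper(key="signup_date", freq="W"), "country"])
    .size()
    .unstack(fill_value=0)
)

# Step 2: eyeball it - compare the latest week to the average of the prior 8 weeks.
latest = weekly.iloc[-1]
baseline = weekly.iloc[-9:-1].mean()
change = (latest - baseline) / baseline
print(change.sort_values())  # biggest percentage drops first

# Step 3: practical significance - a huge drop in a tiny segment may not matter,
# so weight each segment's change by its share of total signups.
share_of_total = baseline / baseline.sum()
print((change * share_of_total).sort_values())
```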

One finds the truth by making a hypothesis and comparing observations with the hypothesis.

David Douglass, American physicist

Let’s get nerdy for a minute.

You may have heard the question, “Are the results statistically significant?”

Statistical significance (or statistical significance testing) is a technical term for determining if the anomaly you notice is due to a sampling error or is, in fact, a consistent finding across all the data. (If you want to learn how to test your hypothesis for statistical significance, read more here and here.)

The meaning of ‘significance’ here can be confusing. In statistics, significance means the result (or anomaly) is unlikely to be a chance occurrence - it reflects a consistent, verifiable pattern in the data. Unlike the everyday definition of significance, the term doesn’t imply ‘importance’ in the scientific world. So it’s possible to have statistically significant findings (real, verifiable findings) that make no real difference in solving your problem - like the 20% drop in signups in Australia.
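
For the curious, here’s a rough sketch of what a significance test might look like in Python, using scipy’s chi-square test on conversion counts. The visitor and signup numbers are made up purely to show the mechanics.

```python
from scipy.stats import chi2_contingency

# Made-up counts for illustration: visitors and signups, last week vs. this week.
visitors = [12_000, 11_800]
signups = [600, 480]

# 2x2 table: [signed up, did not sign up] for each week.
table = [
    [signups[0], visitors[0] - signups[0]],
    [signups[1], visitors[1] - signups[1]],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value: {p_value:.4f}")  # a small p-value suggests the difference is unlikely to be pure chance

# Statistical significance isn't the whole story - also look at the size of the change.
before, after = signups[0] / visitors[0], signups[1] / visitors[1]
print(f"conversion went from {before:.1%} to {after:.1%}")
```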

We often worry about whether our sample size is large enough to provide reliable results when we find statistical significance, but we also need to take into consideration whether these differences are meaningful in a real way.

Jennifer Shin, Founder at 8 Path Solutions and Faculty at UC Berkeley

Okay, great. Why does this matter?

Solving everyday business problems means we’re looking for anomalies or trends in our data that are not only statistically significant, but also practically significant.

In other words, we need to figure out what is making (and will make) a real impact on our signups, ticket response time, cart abandonment rate, and activation rate. To better understand this, let’s walk through an analysis of our four opening example problems.

Our ability to do great things with data will make a real difference in every aspect of our lives.

Jennifer Pahlka, Founder and Executive Director for Code for America

Data analysis in practice (examples)

I think of data science as more like a practice than a job. Think of the scientific method, where you have to have a problem statement, generate a hypothesis, collect data, analyze data and then communicate the results and take action…. If you use the scientific method as a way to approach data-intensive projects, you’re more apt to be successful with your outcome.

Bob Hayes, PhD, President of Business Over Broadway.

Customer support/success example

The head of customer support, Jamie, has noticed response times for support tickets have increased and are too long. She needs to know what’s causing it and begins by jotting down the problem and brainstorming several hypotheses.

  • Core problem: Our customer support ticket response time is too long. How can we reduce response time?

  • Hypotheses: Ticket response time has increased because of…

    • an influx of service-related tickets which have longer response times compared to product-related tickets.
    • challenges specific to one call center.
    • an insufficient number of customer support team members, resulting in a backlog of tickets.
    • a recently launched product feature.

After going through the initial data analysis steps, Jamie ended up with a list of potential causes (the hypotheses listed above). Now it’s time to analyze them.

First, she explores whether there has been a recent shift in the service tickets compared to product tickets. Did it happen at the same time? Do those tickets take longer? Does the magnitude of that impact match the core problem (i.e. could this realistically be the cause)?

Here’s what she found in the data related to the type of support ticket.

  • Product-related tickets have a response time of 5.5 hours

  • Service-related tickets have a response time of 5.8 hours

  • The ratio of product-related tickets to service-related tickets has remained the same over the last month at roughly 60% and 40% of tickets respectively.

The longer response times for service-related tickets are statistically significant (a real difference, not a chance one) compared to the response time for product-related tickets because of the extremely large sample size (5,000 total tickets). However, this doesn’t practically impact our problem because the absolute difference in response time is so small (0.3 hours).
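
Here’s a hedged sketch of how that comparison might look in Python. The ticket counts follow the 60/40 split of 5,000 tickets mentioned above, but the response times themselves are synthetic stand-ins (the spread is an assumption), not Jamie’s real data - the point is that a tiny gap can be statistically significant with thousands of tickets yet practically meaningless.

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic stand-ins: ~3,000 product tickets averaging 5.5 hours and
# ~2,000 service tickets averaging 5.8 hours.
rng = np.random.default_rng(0)
product_hours = rng.normal(loc=5.5, scale=2.0, size=3000)
service_hours = rng.normal(loc=5.8, scale=2.0, size=2000)

# Statistical significance: with samples this large, even a small gap in the
# means usually produces a tiny p-value.
t_stat, p_value = ttest_ind(product_hours, service_hours, equal_var=False)
print(f"p-value: {p_value:.4g}")

# Practical significance: is the gap big enough to explain the problem?
gap = service_hours.mean() - product_hours.mean()
print(f"service tickets take about {gap:.1f} hours longer on average")
# A ~0.3 hour gap is 'real' statistically, but far too small to explain a
# large increase in overall response time.
```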

Jamie decides to explore another of her hypotheses: perhaps the increased response time is related to a specific geographic region.

The company has four call centers, each in a different time zone (Pacific, Mountain, Central, and Eastern). After splitting the data by time zone, she found the average response times were:

  • Pacific: 5.1 hours

  • Mountain: 4.6 hours

  • Central: 15.5 hours

  • Eastern: 4.2 hours

By eyeballing the data, Jamie notices the Central time zone has a response time around 3x longer than the other time zones. This seems like it might be the primary cause, so she chats with the manager at that call center.

She discovers the team is struggling to roll out their new support software because of a technical snag in connecting their phones to the software. This caused a backlog of support tickets and significantly increased response time.

With this new insight, Jamie can focus on resolving the underlying challenge and help her team get back on track.

Marketing at SaaS company example

Liam, the marketing manager for a SaaS company, sees that signup numbers have dipped. The CEO and VP of product have also noticed and are keen to understand the cause. Liam starts his data analysis with the following overview.

  • Core problem: The number of trial signups for our product decreased this week. What is causing this dip?

  • Hypotheses: Signups have decreased because of…

    • public holidays in some regions.
    • recent changes to the marketing site (or site outages).
    • a website outage Monday morning causing an error during the signup process.
    • a decrease in the website conversion rate.
    • organic search rankings (for our product page) that dropped to the second page of search results.

First, Liam is eager to know - how much does the data vary? And is this definitely out of the ordinary? Eyeballing a chart of signups over time confirms it is unusual and needs to be addressed.

He begins looking at the data by day and by hour. Was this a dip that has since recovered, or are daily signups still lower? If it were an outage that has since been resolved, he would expect to see a dip in signups at the time of the outage followed by a recovery. However, he rules this out: the dip started before the outage and hasn’t recovered.

Some minor styling changes were made to the signup form, but again the timings don’t quite line up. He also double checks the conversion rate for this page and it’s actually gone up slightly. No dice.

He does notice the number of people arriving on the signup page has decreased by around 10%, and the timing seems to correlate with the overall dip. Based on the data he’s reviewed so far, he suspects the problem is further upstream.

Next, Liam considers the holiday hypothesis. It was a public holiday in some regions last week. Breaking down the signups by country over time, he sees the dip isn’t for just one country - it’s across the board. So he rules out this hypothesis too.

At this point, he’s a bit lost. He decides to investigate whether some pay-per-click (PPC) landing page changes could be to blame. Ah, one of the campaigns is down 50%. Could that be it? No, since it only accounts for 1% of signups.

Finally, he looks at signups by channel. Organic search (which accounts for 70% of signups) is down 20%. This looks interesting. Could it be a change in page ranking kicking in from on-page changes made a few weeks ago?

Time to check the SEO tool. Bingo. The main keyword has dropped in the rankings and is now on the second page. This looks very suspicious. A sanity check of the numbers shows this more than accounts for the drop off.
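
That sanity check is simple arithmetic - something like this back-of-the-envelope calculation (the percentages come from the story above; the variable names are just for illustration):

```python
# Back-of-the-envelope check that the organic drop can explain the overall dip.
organic_share_of_signups = 0.70   # organic search drives ~70% of signups
organic_drop = 0.20               # organic signups are down ~20%

expected_overall_impact = organic_share_of_signups * organic_drop
print(f"{expected_overall_impact:.0%}")  # ~14% - more than enough to account for
                                         # the ~10% dip Liam spotted upstream
```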

In addition to having an explanation for the drop in signups, Liam can create a strategy for recovering signups to present to the CEO and VP of Product.

Ecommerce example

Suppose an ecommerce manager, Simone, needs to figure out why the average cart abandonment rate for her ecommerce business is increasing. She dives into the relevant data with several hypotheses.

  • Core problem: More potential customers are abandoning their online shopping carts. How can we decrease the average abandonment rate?

  • Hypotheses: The average cart abandonment rate has increased because of…

    • an increase in the absolute number of people beginning a cart (placing items in the cart).
    • a recent change to part of the checkout process.
    • seasonality (e.g. holidays, school breaks, etc.).
    • an end to a promotion which results in more people abandoning their carts.
    • a specific product.

First, Simone checks to see if the absolute numbers have changed. Has the rate gone up because more people are starting a cart? Or is the absolute number about the same?

If there’s been a surge of people who begin adding items to the cart but the number of people completing a purchase has remained the same, maybe something about the additional shoppers makes them less likely to convert. Simone notices the number of people beginning a cart has gone up slightly.

She then asks around to see what might have changed. Have there been any promotions? Any new products launched? Could there be a seasonal impact? Has anything changed in the checkout process? Have prices been adjusted?

(Note: this could vary greatly depending on the business and the range of products.)

Simone learns there’s been a small change in the checkout flow. Instead of just listing the items in the cart, they now show a picture of each item. She thinks this is unlikely to be the issue, but the change does coincide with the time the rate of cart abandonment increased.

To better understand the impact of this change, she splits the checkout process into its individual steps. Looking at the funnel, the drop off isn’t on the page that was changed. In fact, more people are making it to the next step, so this seems unlikely to be the culprit.
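
A quick way to do this kind of funnel comparison is to line up the step counts before and after the change. The step names and counts below are invented for illustration; in practice they would come from your event tracking.

```python
import pandas as pd

# Illustrative funnel counts per checkout step, before and after the change.
funnel = pd.DataFrame(
    {
        "before": [10_000, 7_200, 5_400, 4_300],
        "after": [10_400, 7_600, 4_600, 3_600],
    },
    index=["cart", "item details", "shipping", "payment"],
)

# Step-to-step conversion: what fraction of shoppers make it to the next step?
step_conversion = funnel / funnel.shift(1)
print(step_conversion.round(2))
# Comparing the two columns shows where the extra drop off happens,
# which tells you exactly which page to investigate next.
```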

Next up, Simone looks for any seasonal impacts by comparing this week’s abandonment rate to the same week in previous years. She also takes a quick look at the calendar for any possible clues. Since other metrics like sessions and email open rates weren’t affected, she concludes the cause wasn’t seasonality.

Then she remembers a promotion recently ended. Could that be the cause? Perhaps when people realize the promotion is over, they’re more likely to abandon their cart. Simone looks at the proportion of checkouts that used the promotion code before the increase in cart abandonment. It was only 5%. The increase in abandonment rate is triple that, so even if every one of those shoppers now abandoned their carts, it couldn’t explain the whole increase - a possible contributing factor, but unlikely to be the primary culprit.

Could it be changes to the inventory? Simone breaks down abandonment by product, but performance is fairly consistent across all products.

While thinking of other possible causes, she reviews the checkout process once more. The big drop off is on the page that first shows the shipping price. Simone recalls making some cosmetic changes to the product pages and realizes the timeline aligns perfectly with the increased abandonment rate. Going back over the changes, she notices the shipping details are much less prominent than they used to be.

Her new hypothesis is that potential customers are abandoning their carts because they’re frustrated by the price: their expectations are set at a lower price by the product pages, so as soon as they see the full price (including shipping), they’re more likely to drop off.

At this point, Simone has gone as far as she can with her data. It’s time to test a change and track the results. She could run an A/B test, revert the product pages to the previous design, or perhaps try a version with the shipping cost included in the displayed price.

Simone now feels confident in her analysis and can take action to solve her cart abandonment problem.

Mobile app example

Allie, the product manager, is responsible for understanding why the activation rate for her mobile productivity app has decreased.

  • Core problem: After the initial download, the rate of people tracking their tasks in the app has decreased. How can we increase the activation rate?

  • Hypotheses: The activation rate has decreased because…

    • a change in the product makes people less likely to activate.
    • a new (and different) group of people has begun trying the product.

She notices the proportion of downloaders who activate (open and begin using the app) has been steadily decreasing for the past three months, relative to the total number of downloads.

Allie decides to get more context by looking at the absolute numbers before generating lots of possible causes. She discovers the absolute number of downloads has gone up significantly while the number of people activating has only increased slightly.

She’s a little bit relieved that both absolute numbers are increasing.

(Note: this may or may not be a problem depending on your acquisition channels. Ultimately, it depends on whether money is being wasted on the additional downloads.)

Allie proceeds with two broad hypotheses.

  1. A change in the product makes people less likely to activate.

  2. A new (and different) group of people has begun trying the product. This might explain the additional downloads from people who are less likely to activate.

Allie quickly determines there weren’t any changes to the initial experience in the app that correlate with the drop in activation.

Now she looks more closely at who is downloading the app and whether the demographics have changed. She splits downloaders by region over time. There’s a small increase in downloads from lower activating regions, but some quick math proves this is nowhere near enough to explain the difference in activation rate.

Next, Allie splits the downloads by channel (e.g. app store search, social ads, referrals, etc.) and notices that downloads from the referral channel have significantly increased - by roughly the same number as the overall increase in downloads she noted earlier.

Digging deeper, she splits the download-to-activation rate by channel. The referral channel has a lower conversion rate than the others. Allie chats with the marketing team and discovers the app was featured in a well known, high traffic article. And it’s not costing anything! What untapped potential might be here? Is there an opportunity to boost the activation rate from this source? What else could we be doing?
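
A sketch of that channel breakdown in Python (pandas) might look like the following - the file name and column names are assumptions for illustration.

```python
import pandas as pd

# Hypothetical per-download records: acquisition channel plus whether the
# person went on to activate (open the app and track a task).
downloads = pd.read_csv("downloads.csv")  # columns: user_id, channel, activated

by_channel = downloads.groupby("channel").agg(
    downloads=("user_id", "count"),
    activations=("activated", "sum"),
)
by_channel["activation_rate"] = by_channel["activations"] / by_channel["downloads"]
print(by_channel.sort_values("downloads", ascending=False))

# A channel whose download volume has jumped but whose activation rate is low
# will drag the overall activation rate down even while absolute activations grow.
```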

Allie and the marketing manager are now able to have a data-informed conversation about what action to take next. Thanks to some quick data analysis, what began as a problem has become an opportunity.

It’s your turn…

As you analyze your own data, remember to consider both the truthfulness of the difference (statistical significance) as well as the practical meaning of that difference (practical significance).

With more data driving operations in a business than ever before, leaders need to cultivate a culture that is data-driven, instead of believing in their gut instincts.

Ronald van Loon, data scientist, speaker, author, and founder

Pro tips (because data can be sneaky)

When we’re in the middle of our analysis, the data can sometimes play tricks on us. The truth is, even the most experienced statisticians have to watch out for these tricks - called data fallacies. The following tips will help you avoid some of the most common data fallacies.

A common fallacy is assuming a dataset is trustworthy - until it’s discovered later in analysis that it’s not. Flip that. Make sure your data is trustworthy before you begin analysis.

Tamara Dull, Director of Emerging Tech at SAS Institute

Keep your data in context to avoid cherry picking. In other words, ignore your personal bias and motives when analyzing your data. Cherry picking is selecting only the bits of data that support your claim while discarding the parts that don’t.

For example, we might notice the support tickets for a new feature have increased in response time. If we stop there, we might conclude that the problem is the new product feature. But if we look at all the support tickets over the last two months, we may see an overall increase in response time because the number of tickets has increased.

It is absolutely essential that one should be neutral and not fall in love with the hypothesis.

David Douglass, American physicist

Start with a hypothesis to avoid data dredging. When looking for the cause of a problem, it might be tempting to dig through the data until a pattern emerges. At first glance, the pattern might be statistically significant, but further testing (e.g. checking if the trend continues, looking at related metrics, etc.) could reveal the pattern as a false positive. Data dredging is the failure to acknowledge that the correlation was in fact the result of chance. The way to avoid this fallacy is to begin with a hypothesis before analyzing your data, check related metrics, and/or test to see if the trend continues.

For example, we might see that the activation rate for our mobile app has decreased since we added new pictures and a more detailed description to the app store listing. Before assuming the change caused the drop, we can consider other related metrics or factors - like winter holidays. It just so happened that we updated the text and images right before the holidays - a slower period overall for the business - creating the illusion of a meaningful correlation.
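
One standard safeguard the article doesn’t mention by name: if you do end up testing many segments, correct your p-values for the number of comparisons you ran. Here’s a small sketch using statsmodels (the p-values are made up).

```python
from statsmodels.stats.multitest import multipletests

# Suppose we sliced the data eight different ways and got one p-value per slice.
# Testing many slices makes a 'significant' result likely to appear by chance alone.
p_values = [0.04, 0.30, 0.72, 0.01, 0.48, 0.09, 0.55, 0.21]

# Holm-Bonferroni correction adjusts the p-values for the number of tests run.
reject, adjusted_p, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(list(zip(adjusted_p.round(3), reject)))
# Only results that survive the adjustment are worth chasing; the rest are
# likely artifacts of dredging.
```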

If you torture the data long enough, it will confess to anything.

Ronald Coase, Nobel prize winning economist

Distinguish correlation from causation to avoid false causality. It’s easy to assume that because two events happen at the same time (correlate), one must cause the other. Not so fast! It’s always best to gather more data and look for possible confounding factors. Sometimes two metrics that appear correlated are each driven by a third, independent factor rather than by each other.

For example, we see that potential customers who abandon their online shopping cart tend to have a low total cart value (the total cost of items in their cart when abandoned). At this point, we don’t have enough data to know whether that’s a consistent correlation, a result of chance, or something driven by another factor. Digging deeper, we may realize that the cost of shipping is causing higher cart abandonment rates, since free shipping is only available for orders that exceed a certain minimum cart value.

Shallow men believe in luck or in circumstance. Strong men believe in cause and effect.

Ralph Waldo Emerson, American essayist, lecturer, and poet

Solve problems, make smart decisions

This lightweight process for analyzing data can help you quickly solve problems and make smart decisions. Whether you’re leading a team or reporting back to an executive, you can be confident in your data-backed insights.

Whether one is fighting the war against cancer, or the battle to stop infectious diseases, or the cybersecurity war, or the battle to win the hearts of your customers, or pricing wars within competitive marketplaces, or engaged in other battles, the winners will most certainly be those who use data effectively to make smart business decisions – those who truly appreciate that knowledge is power.

Kirk D. Borne, PhD, Astrophysicist, Principal Data Scientist at Booz Allen

After putting out the fire that sparked your data inquiry, you’ll want to update any other teams that may be involved or impacted. Jotting down a brief summary (including the problem, what the data showed, and the resulting decision/action) will give others valuable context and make it easy for you to reference in the future should a similar situation arise. It’s also beneficial to have a record of your analysis (even if it’s informal) in case others have additional questions or want to look at the data themselves.

Problem solved.