Understanding & Utilising Data [As a PM]

I mean what do you look for? Is this good or bad? Do we actually know what's happening here? Is this of value? Let's explore…

Alex Magee
6 min read · Jun 23, 2022

As a PM, you will be consuming, analysing and synthesising lots and lots and lots and…(think we know where we're going here)…LOTS of data.

But the question is: do we actually understand what's going on? Are we just literally looking at it? Is this normal, or do we have an anomaly? Have we improved or have we gone backwards?

Daily Decisions

As a PM you will be faced with lots of decisions to make on a daily basis.

We make on average 35,000 decisions a day (227 on food alone!) — Sahakian & Labuzetta, 2013

That's a lot of decisions… but also a lot of data. So in the context of the product, it's vital that you understand what you're looking at, in order to guide you in the right direction.

We're wired to search for patterns and make sense of data, but it can be incredibly risky if you misinterpret or misuse that data, which can lead to wasted time and resources.

Metrics

A lot of companies rely heavily on metrics to create, build, test and launch products, and PMs love to analyse them… or try to analyse them at least.

But how do you effectively understand and utilise them? Answer: it's very difficult! Setting up events and attributes is long and complex in itself, and it requires development time and resource.

Even if you’ve set up all your metrics and have all this valuable data flowing in, what do we actually do with it? Do we understand it? How are we performing?
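To make the "events and attributes" part a little more concrete, here is a minimal sketch of what instrumenting a single product event might look like. The event name, attribute keys and the `track_event` helper are all hypothetical and purely illustrative; a real analytics tool (Mixpanel, Amplitude, etc.) has its own client API and naming conventions.

```python
# Hypothetical sketch of instrumenting one product event with a few attributes.
# The event name, attributes and track_event() helper are illustrative only.
from datetime import datetime, timezone


def track_event(user_id: str, name: str, attributes: dict) -> dict:
    """Build the payload that would be sent to an analytics backend."""
    return {
        "user_id": user_id,
        "event": name,
        "attributes": attributes,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }


# One 'invite_resent' event carrying the attributes the team agreed to capture.
payload = track_event(
    user_id="user_123",
    name="invite_resent",
    attributes={"plan": "free", "platform": "ios", "screen": "onboarding"},
)
print(payload)
```

The hard part is rarely the code itself; it's agreeing on which events and attributes are worth capturing, and keeping them consistent across the product.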

Start with strategy

As previously mentioned in ‘How to create a product strategy’, your strategy is a plan of short-, medium- and long-term goals which will also include your success metrics across these time horizons. This will allow you to understand how you're progressing against your objectives or goals.

When you translate metrics into valuable indicators that will help you reach your goals, you get what are otherwise known as ‘Key Performance Indicators’ (KPIs). KPIs are the metrics that you, as a team or business, judge to be critical to your initiatives, goals or objectives.

Commonly, you have two types of KPIs:

  • Lagging indicators: KPIs that are typically long-term orientated, easy to measure but hard to improve, such as revenue or customer retention.
  • Leading indicators: KPIs that are typically short-term orientated, hard to measure but easy to improve, such as the number of users who clicked the ‘resend’ button on the 4th of July.

As a result, you will generally be working on projects which affect the short-term horizon (leading indicators) but are directly tied to the longer-term vision (lagging indicators) and the overall strategy of the business.

So when looking at your metrics or your KPIs, you should consider that your day-to-day work will mostly affect ‘leading indicators’ which are very specific or sensitive to that project or piece of work.

As a result, it's important to understand whether your short-term KPIs are translating into improvements in your longer-term KPIs, which will confirm that a pattern is occurring and you're on the right path. Mapping out these focused or chosen metrics can help you visualise the relationships and understand whether a ripple effect is occurring.
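As a rough sketch of that mapping exercise, you could line a leading indicator up against a lagging one and check whether movement in the first shows up in the second a few weeks later. The metric names, the four-week lag and the numbers below are all made up for illustration; you would swap in your own KPIs and time horizons.

```python
# Illustrative check of whether a leading indicator (weekly activations)
# ripples through to a lagging one (weekly retained customers).
# Metric names, the 4-week lag and all figures are invented for the sketch.
import pandas as pd

weeks = pd.date_range("2022-01-03", periods=12, freq="W-MON")
df = pd.DataFrame(
    {
        "activations": [120, 135, 150, 160, 155, 170, 185, 200, 210, 205, 220, 235],
        "retained_customers": [80, 82, 85, 90, 97, 104, 110, 112, 118, 126, 134, 142],
    },
    index=weeks,
)

# Correlate this week's activations with retained customers 4 weeks later.
lag_weeks = 4
ripple = df["activations"].corr(df["retained_customers"].shift(-lag_weeks))
print(f"Correlation with a {lag_weeks}-week lag: {ripple:.2f}")
```

A strong correlation at a sensible lag doesn't prove causation, but it's a quick way to spot whether the ripple effect you're hoping for is even visible in the data.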

Benchmarks

Another important consideration is: how do your metrics or KPIs compare to the rest of the market? Is this good or bad? A team could be tirelessly chasing a metric that, in reality, isn't actually performing that badly.

As a result, it might be worth doing some competitor or market analysis to see how the industry performs. This can be done with a variety of tools, market research or, in some cases, benchmark reports such as the ‘Mixpanel Benchmark Report’, which gives you insight into billions of events and hundreds of products across a variety of industries (these types of reports are quite rare, so Mixpanel, please continue them).

The report delves into how products perform in five areas: reach, activation, active usage, engagement and retention, which is highly valuable in the product world. This is just one example, but I'm sure there are many more.

In turn, this will allow you to put the improvement in your metrics into context and understand where you sit as a company in comparison to the rest of the market or your competitors.

Experiments

As we all know, product teams love to research in order to validate their ideas as discussed in Discovery Diaries: Validation [As a PM].

Frequently, this research will involve experiments like an A/B test or a multivariate test, which rely on statistical methods. But many teams do not have the expertise to interpret the results correctly, and it can therefore be difficult to understand whether the experiment has been a success or not.

This risky approach can lead to misinterpreted results and send your team in the wrong direction.

Take the A/B test as an example: the test shows a sample (a percentage) of users two variants of the same feature. You then measure the conversion or engagement of each variant, which in turn determines a winner.

But if the sample is too small, it will not be a good indicator of your overall user base, and the randomness inherent in small samples means you may never reach statistical significance.

Statistical significance is reached when your sample size is large enough to represent your overall user base. As a result, you will need to determine what sample size you need in order to reach significance, which can be done easily through an online calculator (thank you, internet).
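If you'd rather do that calculation in code than in an online calculator, a standard power analysis gives the same answer. This is a hedged sketch using statsmodels; the 10% baseline conversion rate and the 2-point uplift you want to detect are assumptions you'd replace with your own numbers.

```python
# Rough sample-size estimate per variant for an A/B test on conversion rate.
# Baseline rate, target uplift, alpha and power are assumptions for the sketch.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.10   # current conversion rate (assumed)
target_rate = 0.12     # smallest uplift worth detecting (assumed)

effect_size = proportion_effectsize(target_rate, baseline_rate)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,          # 5% chance of a false positive
    power=0.80,          # 80% chance of detecting a real effect
    alternative="two-sided",
)
print(f"Users needed per variant: {round(n_per_variant)}")
```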

Once you reach a high confidence level, which is commonly 95%, you can be pretty confident that your results are not random. Without reaching these high confidence levels, you risk acting on false positives or false negatives.
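Once the experiment has run, one quick way to sanity-check whether the difference between variants is likely to be real is a two-proportion z-test; a p-value below 0.05 corresponds to that 95% confidence level. The conversion counts below are invented purely for the sketch.

```python
# Hedged sketch: is variant B's conversion rate significantly different from A's?
# The counts and visitor numbers are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [210, 255]   # conversions in variant A, variant B (assumed)
visitors = [2000, 2000]    # users shown each variant (assumed)

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"p-value: {p_value:.3f}")

if p_value < 0.05:
    print("Difference is statistically significant at the 95% confidence level.")
else:
    print("Not significant yet; the difference could still be random noise.")
```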

In summary, you just need to be careful when conducting statistical experiments and make sure you really understand whether you have reached statistical significance, rather than landed on a false positive or negative. This is vital for both your team and your business.

Qualitative

Another very important consideration when understanding and utilising data is qualitative data. This usually involves sources such as customer interviews and their transcripts, beta test feedback, open-ended survey questions or online reviews.

Quantitative data, on the other hand, usually involves more traditional sources such as analytics reports, metrics, experiment data or market research, which are usually expressed in numerical values.

Quantitative data can tell us what people are using and how we could optimise something that already exists, whereas qualitative data can tell us how users feel about the product and what potential opportunities exist.

A lot of people might think that qualitative data is not as powerful as quantitative data because it's subjective. But when you interview customers and ask them great questions, it can be incredibly powerful and valuable to the business.

That value disappears, though, if the research and questions are not positioned correctly, which causes response bias and leads to nonrandom or inaccurate responses. This is usually the result of poor survey or interview question design.

As a result, when interpreting qualitative data, you need to make sure that your questions do not contain any bias, which could lead you to act on inaccurate results.

Conclusion

So to summarise everything above, I think it's safe to say you just really need to be careful when utilising data as a PM. Maybe make a checklist, or speak to colleagues or friends who are specialists in your chosen data set, to really confirm that your results are conclusive.

As mentioned previously, it's a risky approach when confidence levels are not high, and this applies to all types of data. Make sure you're confident in your results before moving forward.

Thanks for reading — 👏 if you want more. Follow me on Twitter!
