Discovery Diaries: Validation [As a PM]
Part Three
This is part three of the ‘Discovery Diaries’ series. You can find ‘Part 1’ and ‘Part 2’ if you missed them.
So after you’ve gathered all your data and ideated with the team, you naturally want to know whether those ideas would actually work. This is where you start to validate them.
What is ‘Validation’?
Product validation is the process of testing your idea to get feedback or data on your product’s viability.
As suggested above, it's a way of testing the viability of a product or feature before committing resources and potentially burning through your budget. So many ideas lead to no measurable impact or positive results, yet nobody can reliably predict which ideas will succeed and which won't, and many companies still fall back on gut opinions or outdated processes to decide.
This is by far one of the biggest sources of waste in the industry, and as Rich Mironov points out in his article, companies could save themselves massive amounts of resources and improve their ROI if they took product discovery and validation seriously.
How can one ‘Validate’ an idea?
There are many known ways to validate an idea and you could say that we’ve been validating all along, across each stage of the product discovery cycle.
For example, in our data gathering and ideation stages we used a number of validation techniques to move our thinking forward and narrow down the problems we wanted to focus on:
- User interviews — speaking with customers is valuable at multiple stages of discovery, so it's always good to touch base whatever stage you're at
- Data analysis — the same goes for your analytics: check at each stage whether the data still aligns with your understanding of the problems you're trying to solve
- Stakeholder reviews — reviewing ideas with internal stakeholders to gauge whether an idea is worth starting work on, and to help uncover any major business risks early
- Mapping — we covered various brainstorming and mapping techniques for drawing out similarities and themes across the data and research, which helps surface hidden insights and bigger risks
These methods are much faster ways to validate. Remember, you're trying to validate quickly and in the most cost-effective way possible. But if you want something more substantial, or you've already gathered enough data, you can start to test those ideas with real users.
That might be a simple check that something works, or a proper hypothesis with success and failure criteria. Either way, you can test your ideas with users in a number of ways:
- Usability tests — you ask a user to try the new product or feature under the guidance of a facilitator, to see whether they complete the actions your hypothesis predicts. This is usually done with mockups or prototypes in software like InVision, where you can analyse recordings, spot pain points and understand how users interact.
- Smoke tests (or fake door tests) — a great way to test demand for a product that doesn't exist yet. You mimic a real product and incentivise users to sign up or opt in, which signals whether there is demand and how desirable it is. Once they sign up, you let them know the product isn't quite ready yet.
- Dogfood tests — testing the new product internally with your own company before real users see it. It's commonplace for teams to do this to catch bugs and check functionality and general usability. Your company is most probably not your target market, but it's a great place to start.
- Early-adopter tests — you give early access to your new product to the early adopters who have been with you from the start. These users are usually willing to use an incomplete product and give you invaluable feedback in exchange for getting their hands on it first.
If you want to take this a step further, you can start running experiments. This is usually what everyone assumes you'll be doing during a discovery phase, but in reality these experiments take time to reach any significance and can be quite costly.
You could argue that the tests above are experiments too, but in this context I'm using the word ‘experiment’ for the more scientific approach of a controlled experiment.
- A/B tests — the go-to experiment for PMs: you compare how users interact with two versions of the product that differ in a single variable. Usually A is the original version and B is the version with the new change. You allocate a % of traffic to the test to see whether version B increases conversion or not. You usually need to reach statistical significance to prove your hypothesis, which typically requires a lot of volume over a long period (there's a quick significance sketch after this list). You can extend this to A/B/n to test multiple versions.
- Multi-armed bandit experiments — a newer approach I've been looking into which is quite interesting. Rather than having a fixed traffic split across the variants, the traffic is adjusted based on past observations (see the rough sketch below). It's cheaper to implement, it helps with the sample size issue, and it's better for time-sensitive features, campaigns, or push notifications.
- Percent experiments — a great test when rolling out a new product or large feature is to release it to a small % of your user base (typically 1–5%) and see how it performs compared to the current version (see the rollout sketch after this list). If it's successful, gradually roll it out to the rest of the user base.
- Holdback experiments — as you move towards full rollout, you can always leave a small % of the user base on the old version to monitor the effects over time, which can actually be very valuable.
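To make the statistical significance point a bit more concrete, here's a minimal sketch of how you might check an A/B result with a two-proportion z-test. The visitor and conversion numbers are made up, and in practice you'd most likely lean on your experimentation tool's built-in stats rather than rolling your own:

```python
from math import sqrt
from statistics import NormalDist

# Made-up example numbers: visitors and conversions per variant
visitors_a, conversions_a = 10_000, 480   # version A (control)
visitors_b, conversions_b = 10_000, 540   # version B (new change)

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled conversion rate under the null hypothesis that A and B perform the same
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

# Two-sided p-value for the observed difference in conversion rates
z = (rate_b - rate_a) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z: {z:.2f}  p: {p_value:.3f}")
# A p-value below 0.05 is the usual (somewhat arbitrary) bar for calling the result significant.
```

With these example numbers the p-value lands around 0.05, which is exactly why the volume matters: small lifts need a lot of traffic before you can call them real.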
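And here's a rough sketch of the "traffic adjusts based on past observations" idea behind a multi-armed bandit, using the simple epsilon-greedy strategy (Thompson sampling is the other common approach). The variant names and conversion rates are purely illustrative:

```python
import random

# Track impressions and conversions per variant (names and numbers are illustrative)
stats = {"A": {"shown": 0, "converted": 0}, "B": {"shown": 0, "converted": 0}}
EPSILON = 0.1  # fraction of traffic still used to explore the other variants

def choose_variant() -> str:
    """Epsilon-greedy: mostly exploit the best-performing variant, occasionally explore."""
    if random.random() < EPSILON or any(v["shown"] == 0 for v in stats.values()):
        return random.choice(list(stats))
    return max(stats, key=lambda k: stats[k]["converted"] / stats[k]["shown"])

def record_result(variant: str, converted: bool) -> None:
    stats[variant]["shown"] += 1
    stats[variant]["converted"] += int(converted)

# Simulated traffic: variant B has a (hidden) higher true conversion rate
true_rates = {"A": 0.048, "B": 0.054}
for _ in range(5_000):
    v = choose_variant()
    record_result(v, random.random() < true_rates[v])

print({k: f"{v['shown']} impressions" for k, v in stats.items()})
# Over time, most impressions shift towards the better-performing variant,
# which is why fewer users end up stuck on the losing version.
```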
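Finally, for percent and holdback experiments, if you're not using an off-the-shelf feature flag tool, a common approach is to bucket users deterministically by hashing their ID, so each user always sees the same version. A minimal sketch, where the experiment name and percentages are made up:

```python
import hashlib

ROLLOUT_PERCENT = 5    # new version goes to 5% of users initially
HOLDBACK_PERCENT = 2   # keep 2% of users on the old version even after full rollout

def bucket(user_id: str, salt: str = "new-checkout-flow") -> int:
    """Map a user to a stable bucket 0-99 by hashing their ID with an experiment-specific salt."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def sees_new_version(user_id: str, rollout_percent: int = ROLLOUT_PERCENT) -> bool:
    b = bucket(user_id)
    if b < HOLDBACK_PERCENT:  # permanent holdback group stays on the old version
        return False
    return b < HOLDBACK_PERCENT + rollout_percent

# Ramping up is just a config change: 5% -> 20% -> 50% -> 98%, with the holdback untouched
print(sees_new_version("user-123"), sees_new_version("user-123", rollout_percent=50))
```

Because the bucketing is deterministic, ramping the rollout up (or rolling it back) never flips users back and forth between versions.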
Overall, there are many ways you can validate your idea, at a number of levels. Once you think you're there, you can either move it into delivery and start developing it, or release the new version to the majority of your user base.
Thanks for reading — 👏 if you want more. Follow me on Twitter!