Kindle Notes & Highlights
Read between May 23 and November 1, 2022
Companies end up in the build trap when they misunderstand value. Instead of associating value with the outcomes they want to create for their businesses and customers, they measure value by the number of things they produce.
When companies do not understand their customers’ or users’ problems well, they cannot possibly define value for them. Instead of doing the work to learn this information about customers, they create a proxy that is easy to measure. “Value” becomes the quantity of features that are delivered, and, as a result, the number of features shipped becomes the primary metric of success.
You have to get to know your customers and users, deeply understanding their needs, to determine which products and services will fulfill needs on both the customer side and the business side.
Whatever your metric, it’s important to have a system of metrics, not just one, to guide product decisions.
Retention is a lagging indicator, which is impossible to act on immediately. It will be months before you have solid data to show that people stayed with you. That is why we also need to measure leading indicators like activation, happiness, and engagement.
Leading indicators tell us whether we’re on our way to achieving those lagging indicators like retention.
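To make the leading/lagging distinction concrete, here is a minimal sketch (with hypothetical per-user data, not from the book) showing why activation is measurable immediately while retention takes months to mature:

```python
from datetime import date

# Hypothetical event log: signup date, whether the user completed the key
# "activation" action, and the date they were last seen active.
users = [
    {"signup": date(2022, 5, 1), "activated": True,  "last_seen": date(2022, 11, 1)},
    {"signup": date(2022, 5, 3), "activated": True,  "last_seen": date(2022, 5, 10)},
    {"signup": date(2022, 5, 7), "activated": False, "last_seen": date(2022, 5, 7)},
]

def activation_rate(users):
    """Leading indicator: share of signups who completed the key action."""
    return sum(u["activated"] for u in users) / len(users)

def retention_rate(users, as_of, days=90):
    """Lagging indicator: share of signups still active `days` after signup.
    Only meaningful once enough time has passed -- hence 'lagging'."""
    cohort = [u for u in users if (as_of - u["signup"]).days >= days]
    if not cohort:
        return None  # too early to measure
    retained = sum((u["last_seen"] - u["signup"]).days >= days for u in cohort)
    return retained / len(cohort)

print(activation_rate(users))                         # available right away
print(retention_rate(users, as_of=date(2022, 6, 1)))  # None: too early to tell
```

The leading indicator gives the team something to act on now, while the lagging one only confirms the outcome in hindsight.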
Amplitude, Pendo.io, Mixpanel, Intercom, and Google Analytics are all analytics platforms you can use to track these kinds of metrics.
You won’t be able to set success metrics without investigating the problem. This is why we first need problem exploration, a process we explain in the next chapter. The success metrics you set will be relevant to that problem you discover and the solution you implement to solve it.
User research, observations, surveys, and customer feedback are all tools that you can harness to better explore the problem from a user standpoint.
Instead, teams should be working like the team at Marquetly, setting the success criteria before launch and then measuring and iterating until they reach them. Version 1 should be looked at as a hypothesis, just like any other work. And, if we launch the feature and it is not hitting our goals, we need to be comfortable rolling it back and trying something else.
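The "version 1 as hypothesis" discipline described above can be sketched as a simple launch-review gate (the metric, threshold, and iteration limit here are hypothetical illustrations, not from the book):

```python
# Hypothetical launch review: compare a feature's measured metric against
# the success criterion that was set *before* launch, then decide whether
# to keep the feature, iterate on it, or roll it back.
SUCCESS_CRITERION = 0.40   # e.g. 40% of users adopt the new feature
MAX_ITERATIONS = 3         # improvement cycles allowed before rolling back

def review_feature(measured_rate, iterations_so_far):
    if measured_rate >= SUCCESS_CRITERION:
        return "keep"       # hypothesis validated
    if iterations_so_far < MAX_ITERATIONS:
        return "iterate"    # not there yet -- tweak and remeasure
    return "roll back"      # be comfortable trying something else

print(review_feature(0.45, 1))  # keep
print(review_feature(0.25, 1))  # iterate
print(review_feature(0.25, 3))  # roll back
```

The point is that the criterion is fixed up front, so the rollback decision is a pre-agreed outcome of the experiment rather than a post-launch debate.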