Kindle Notes & Highlights
by Marty Cagan
Read between August 25, 2022 - July 22, 2023
For example, first develop six references for the financial services industry, then six for the manufacturing industry, and so on. Or you can expand geographically in this same manner (for example, first develop six references for the United States, six for Germany, and then six for Brazil, and so on).
Your job is to dive deep with each of the six customers and identify a single solution that works well for all six customers.
You want a partner in coming up with the product.
It's important that the members of the customer discovery program be the right set, and no more than eight.
One of the most common techniques for assessing product/market fit is known as the Sean Ellis test. This involves surveying your users (those in your target market who have used the product recently, at least a couple of times, and who you know from the analytics have made it through to the core value of the product) and asking them how they'd feel if they could no longer use the product. (The choices are “very disappointed,” “somewhat disappointed,” “don't care,” and “no longer relevant because I no longer use it.”)
The general rule of thumb is that if more than 40 percent of the users would be “very disappointed,” then there's a good chance you're at product/market fit.
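The arithmetic behind that rule of thumb is simple, but as a concrete illustration, here is a minimal sketch (not from the book; the response data and function name are made up) of scoring a batch of Sean Ellis survey responses in Python:

```python
from collections import Counter

# Hypothetical survey responses using the four choices described above.
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "don't care", "very disappointed", "no longer relevant",
    "very disappointed", "somewhat disappointed", "very disappointed",
]

def sean_ellis_score(responses):
    """Return the share of respondents answering 'very disappointed'."""
    counts = Counter(responses)
    return counts["very disappointed"] / len(responses)

score = sean_ellis_score(responses)
print(f"{score:.0%} very disappointed")
# Rule of thumb from the text: above 40 percent suggests product/market fit.
print("likely product/market fit" if score > 0.40 else "not there yet")
```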
But in every user or customer interaction, we always have the opportunity to learn some valuable insights. Here's what I'm always trying to understand: Are your customers who you think they are? Do they really have the problems you think they have? How does the customer solve this problem today? What would be required for them to switch?
When you first start the actual usability test, make sure to tell your subject that this is just a prototype, a very early product idea, and that it's not real. Explain that she won't be hurting your feelings by giving candid feedback, good or bad. You're testing the ideas in the prototype; you're not testing her. She can't pass or fail—only the prototype can pass or fail.
See if they can tell from the landing page of your prototype what it is that you do, and especially what might be valuable or appealing to them.
When testing, you'll want to do everything you can to keep your users in use mode and out of critique mode. What matters is whether users can easily do the tasks they need to do. It really doesn't matter if the user thinks something on the page is ugly or should be moved or changed. Sometimes misguided testers will ask users questions like “What three things on the page would you change?” To me, unless that user happens to be a product designer, I'm not really interested in that. If users knew what they really wanted, software would be a lot easier to create. So, watch what they do more than what they say.
So many companies and product teams think all they need to do is match the features (referred to as feature parity), and then they don't understand why their product doesn't sell, even at a lower price. The customer must perceive your product to be substantially better to motivate them to buy your product and then wade through the pain and obstacles of migrating from their old solution.
I argue that qualitative testing of your product ideas with real users and customers is probably the single most important discovery activity for you and your product team.
My view is that, if you're going to put a feature in, you need to put in at least the basic usage analytics for that feature. Otherwise, how will you know if it's working as it needs to?
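As one way to picture what “basic usage analytics” for a feature might look like, here is a small sketch in Python. The `track` helper and the event and property names are hypothetical; in practice you'd send the event to whatever analytics service your team already uses.

```python
import json
import time

def track(event: str, properties: dict) -> None:
    """Hypothetical analytics helper: a real one would send the event to your
    analytics service; this sketch just emits a structured log line."""
    print(json.dumps({"event": event, "ts": time.time(), **properties}))

def export_report(user_id: str, report_format: str) -> None:
    # Instrument the feature at the moment of use, so you can later see
    # whether anyone actually exercises it, and how.
    track("report_exported", {"user_id": user_id, "format": report_format})
    # ... actual feature logic would go here ...

export_report("user-123", "csv")
```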
Here is the core set for most tech products:
- User behavior analytics (click paths, engagement)
- Business analytics (active users, conversion rate, lifetime value, retention)
- Financial analytics (ASP, billings, time to close)
- Performance (load time, uptime)
- Operational costs (storage, hosting)
- Go-to-market costs (acquisition costs, cost of sales, programs)
- Sentiment (NPS, customer satisfaction, surveys)
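To make a couple of these categories concrete, here is an illustrative sketch (made-up event data, not from the book) computing a conversion rate and a simple retention figure from a raw event log:

```python
# Made-up event log: each entry is (user_id, event_name, day_number).
events = [
    ("u1", "signed_up", 1), ("u1", "purchased", 2), ("u1", "active", 8),
    ("u2", "signed_up", 1), ("u2", "active", 9),
    ("u3", "signed_up", 2), ("u3", "purchased", 3),
]

signups = {u for u, e, _ in events if e == "signed_up"}
purchasers = {u for u, e, _ in events if e == "purchased"}
week2_active = {u for u, e, d in events if e == "active" and d > 7}

# Two of the business analytics named above: conversion rate and retention.
print(f"conversion rate: {len(purchasers & signups) / len(signups):.0%}")
print(f"week-2 retention: {len(week2_active & signups) / len(signups):.0%}")
```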