Kindle Notes & Highlights
by Marty Cagan
Read between July 16 - September 30, 2022
To set your expectations, teams competent in modern discovery techniques can generally test on the order of 10–20 iterations per week. This may sound like a lot to you, but you'll soon see that it's not so hard at all once those techniques are in place.
The idea is that rather than communicate the benefits in a press release format, you describe them in the format of a customer letter written from the hypothetical perspective of one of your product's well‐defined user or customer personas.
In practice, I keep running into entrepreneurs and product leaders who are focused on secondary risks rather than primary risks.
Mostly, however, I think the major reason is that it's human nature for people to focus more on those areas they feel they can control and are knowledgeable about.
The reason I love the customer discovery program technique so much is that it is designed to produce these reference customers. We are discovering and developing a set of reference customers in parallel with discovering and developing the actual product.
But I will also say that if you do this technique, I consider it the single best leading indicator of future product success.
For products and services aimed at businesses, I was taught years ago that the key number is six reference customers. This is not meant to be statistically significant—it is meant to instill confidence—and I have found that number has held up over time. Again, more than six would be even better, but we shoot for six because each one is a lot of work.
Note that, in many cases, you'll get people who say they are extremely interested in this product, but they first want to see your references. When you explain you're looking to work with them to become one of those references, they will probably say they are just too busy, and ask you to come back once you have the references. That's fine. They're a useful lead. But we are looking for those customers that are so hungry and desperate for a solution that they will absolutely make time for this. Every market has this segment.
That said, if you find you are having real trouble recruiting even four or five prospective customers for this effort, then it's very possible you're chasing a problem that isn't that important, and you will almost certainly have a very hard time selling this product. This is one of the very first reality checks (aka demand validation) to make sure you're spending your time on something worthwhile. If customers aren't interested in this problem, you may want to rethink your plans.
Frequency. Establish a regular cadence of customer interviews. This should not be a once‐in‐a‐while thing. A bare minimum would be two to three hours of customer interviews per week, every week.
I would argue that this hour consistently yields a great return on your time. It's critical to learn the answers to these key questions. However, I am a big fan of taking the opportunity of a customer interview to also try out some of our product ideas. We do that after we've learned the answers to these key questions, but it's such a great opportunity I really like to take advantage of it.
With this technique, you become the concierge. You do what the user or customer needs done for them. You may have to ask them to train you first, but you are in their shoes doing the tasks they would do.
A concierge test requires going out to the actual users and customers and asking them to show you how they work so that you can learn how to do their job, and so that you can work on providing them a much better solution.
Where a lot of novice product people go sideways is when they create a high‐fidelity user prototype and they put it in front of 10 or 15 people who all say how much they love it. They think they've validated their product, but unfortunately, that's not how it works.
The main difference today is that we do usability testing in discovery—using prototypes, before we build the product—and not at the end, where it's really too late to correct the issues without significant waste or worse.
This has been the case since at least 2000, when Don't Make Me Think was first published. Is Cagan just unaware?
You might experiment with pricing, positioning, and marketing, but you eventually conclude that this is just not a problem people are concerned enough about. The worst part of this scenario is that, in my experience, it's so easily avoided.
Others have also been writing about how to apply these techniques in enterprises, but on the whole, I have not been particularly impressed with the advice I've seen. Too often, the suggestion is to carve out a protected team and provide them some air cover so they can go off and innovate. First of all, what does this say about the people not on these special innovation teams? What does this say about the company's existing products? And, even when something does get some traction, how well do you think the existing product teams will accept this learning? These are some of the reasons I'm not…
I believe it's a non‐negotiable that we simply must continue to move our products forward, and deliver increased value to our customers. That said, we need to do this in a responsible way. This really means doing two big things—protect your revenue and brand, and protect your employees and customers.
During the usability test, we test to see whether the user can figure out how to operate our product. But, even more important, after a usability test the user knows what your product is all about and how it's meant to be used. Only then can we have a useful conversation with the user about value (or lack thereof). Preparing a value test therefore includes preparing a usability test. I described how to prepare for and run a usability test in the last chapter, so for now let me again emphasize that it's important to conduct the usability test before the value test, and to do one immediately…
The main challenge in testing value when you're sitting face to face with actual users and customers is that people are generally nice—and not willing to tell you what they really think. So, all of our tests for value are designed to make sure the person is not just being nice to you.
In my experience, the worst thing about product in the past was its reliance on opinions. And, usually, the higher up in the organization the person was who voiced it, the more that opinion counted.
Opinions are the worst thing about product management. Everything Cagan is reacting to is driving at replacing opinions with data.
It's also important for tech product managers to have a broad understanding of the types of analytics that are important to your product. Many have too narrow a view. Here is the core set for most tech products:
- User behavior analytics (click paths, engagement)
- Business analytics (active users, conversion rate, lifetime value, retention)
- Financial analytics (ASP, billings, time to close)
- Performance (load time, uptime)
- Operational costs (storage, hosting)
- Go‐to‐market costs (acquisition costs, cost of sales, programs)
- Sentiment (NPS, customer satisfaction, surveys)
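To make a few of these concrete for myself, here is a minimal sketch in Python of computing some of the business analytics above from a hypothetical event log. The event shape, the metric definitions, and all the numbers are my own illustrative assumptions, not Cagan's; a real product would pull these from its analytics platform.

from datetime import date

# Hypothetical event log: (user_id, event_name, day)
events = [
    ("u1", "signup",   date(2022, 7, 1)),
    ("u1", "purchase", date(2022, 7, 2)),
    ("u2", "signup",   date(2022, 7, 1)),
    ("u3", "signup",   date(2022, 7, 2)),
    ("u3", "purchase", date(2022, 7, 30)),
]

# Active users: anyone who generated at least one event.
active_users = {user for user, _, _ in events}

# Conversion rate: share of signed-up users who went on to purchase.
signed_up = {u for u, name, _ in events if name == "signup"}
purchased = {u for u, name, _ in events if name == "purchase"}
conversion_rate = len(signed_up & purchased) / len(signed_up)

# A crude 28-day retention: signed-up users seen again 28+ days later.
first_seen, last_seen = {}, {}
for user, _, day in events:
    first_seen[user] = min(first_seen.get(user, day), day)
    last_seen[user] = max(last_seen.get(user, day), day)
retained = {u for u in signed_up
            if (last_seen[u] - first_seen[u]).days >= 28}
retention = len(retained) / len(signed_up)

print(f"active users: {len(active_users)}")        # 3
print(f"conversion rate: {conversion_rate:.0%}")   # 67%
print(f"28-day retention: {retention:.0%}")        # 33%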
My own teams—and every team I can think of that I've ever worked with—have been doing this for so long now that it's hard to imagine not having this information. It's hard for me to even remember what it was like to have no real idea how the product was used, or what features were really helping the customer, versus which ones we thought had to be there just to help close a sale.
I'm very big on radically simplifying products by removing features that don't carry their own weight. But without knowing what is being used, and how it's being used, this is a very painful process. We don't have the data to back up our theories or decisions, so management (rightfully) balks.
We agree on quite a few actions like removing dead weight (here) or resisting “flying blind” (above), yet tend to disagree as to why and how we know. Cagan comes at this from a flat “numbers will tell the truth” viewpoint. Is it being used? If so, it is valuable. If not, it can be safely discarded.
I tend to look at things from a perspective informed by personas. What features are important to what personas and why? Having that lens first frames how to interpret the data.
My personal view is that you should start from the position that you simply must have this data, and then work backward from there to figure out the best way to get it.
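Tying my persona note to Cagan's must-have-the-data point, here is a minimal sketch of how the dead-weight decision could look once you have both: flag features below a usage threshold, but check usage per persona first, so a feature vital to one persona isn't discarded on overall numbers. All feature names, personas, counts, and the threshold are hypothetical.

# Hypothetical usage data: feature -> {persona: weekly active users}
usage = {
    "bulk_export":   {"admin": 40, "analyst": 2,   "viewer": 0},
    "dashboards":    {"admin": 10, "analyst": 180, "viewer": 95},
    "legacy_wizard": {"admin": 1,  "analyst": 0,   "viewer": 3},
}

THRESHOLD = 25  # hypothetical cutoff for "carrying its own weight"

for feature, by_persona in usage.items():
    total = sum(by_persona.values())
    heavy = [p for p, n in by_persona.items() if n >= THRESHOLD]
    if heavy:
        print(f"{feature}: keep (matters to {', '.join(heavy)})")
    else:
        print(f"{feature}: removal candidate (total users: {total})")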
When we talk about validating feasibility, the engineers are really trying to answer several related questions:
- Do we know how to build this?
- Do we have the skills on the team to build this?
- Do we have enough time to build this?
- Do we need any architectural changes to build this?
- Do we have on hand all the components we need to build this?
- Do we understand the dependencies involved in building this?
- Will the performance be acceptable?
- Will it scale to the levels we need?
- Do we have the infrastructure necessary to test and run this?
- Can we afford the cost to provision this?
If, however, the engineers have been following along as the team has tried out these ideas with customers (using prototypes) and seen what the issues are and how people feel about these ideas, the engineers have probably already been considering the issues for some time. If it's something you think is worthwhile, then you need to give the engineers some time to investigate and consider it.
Building a business is always hard. You must have a business model that's viable. The costs to produce, market and sell your product must be sufficiently less than the revenue your product generates. You must operate within the laws of the countries you sell in. You must hold up your end of business agreements and partnerships. Your product must fit within the brand promise of your company's other offerings. This is what is really meant by being the CEO of the product. You need to help protect your company's revenue, reputation, employees, and customers you've worked so hard to earn.
Here's what I advocate in this case: Plan to continue with your existing roadmap process for 6 to 12 months. However, starting immediately, every time you reference a product roadmap item, or discuss it in a presentation or meeting, be sure to include a reminder of the actual business outcome that feature is intended to help.
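A tiny sketch of what carrying that reminder along could look like in practice: each roadmap item records the business outcome it is meant to serve, so the pairing shows up anywhere the item is referenced. The record shape and field names here are my own assumption, not a format Cagan prescribes.

from dataclasses import dataclass

@dataclass
class RoadmapItem:
    feature: str
    business_outcome: str  # the outcome this feature is intended to help
    success_metric: str    # how we'll know whether it worked

item = RoadmapItem(
    feature="One-click reorder",
    business_outcome="Increase repeat-purchase revenue",
    success_metric="Repeat-order rate up from 18% to 25%",
)

# Anywhere the item appears (slide, ticket, status email), show the pairing:
print(f"{item.feature} -> {item.business_outcome} [{item.success_metric}]")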
In many product companies, just about anyone and everyone feels like they have a say in the products. They certainly care about the product, and they often have many ideas—either derived from their own use, or from what they hear from customers. But, regardless of what they think, we would not consider most of them to be stakeholders. They are just part of the community at large—another source of input on the product, along with many others.
First, presentations are notoriously terrible for testing business viability. The reason is that they are far too ambiguous. A lawyer needs to see the actual proposed screens, pages, and wording. A marketing leader wants to see the actual product design. A security leader needs to see exactly what the product is trying to do. Presentations are terrible for this. In contrast, high‐fidelity user prototypes are ideal for this. I plead with product managers in larger companies to not trust a sign‐off on anything other than a high‐fidelity prototype. I have seen far too many times where the execs…
Good teams are skilled in the many techniques to rapidly try out product ideas to determine which ones are truly worth building. Bad teams hold meetings to generate prioritized roadmaps.
While we've talked about product teams and techniques for discovering successful products, I hope you've noticed that what we're really talking about in this book is product culture. I've described to you how great product companies think, organize, and operate.
What does it really mean to have a strong innovation culture?
- Culture of experimentation—teams know they can run tests; some will succeed and many will fail, and this is acceptable and understood.
- Culture of open minds—teams know that good ideas can come from anywhere and aren't always obvious at the outset.
- Culture of empowerment—individuals and teams feel empowered to be able to try out an idea.
- Culture of technology—teams realize that true innovation can be inspired by new technology and analysis of data, as well as by customers.
- Culture of business‐ and customer‐savvy teams—teams…