Kindle Notes & Highlights
by Marty Cagan
Read between October 30 – November 1, 2022
Risks are tackled up front, rather than at the end. In modern teams, we tackle these risks prior to deciding to build anything. These risks include value risk (whether customers will buy it), usability risk (whether users can figure out how to use it), feasibility risk (whether our engineers can build what we need with the time, skills, and technology we have), and business viability risk (whether this solution also works for the various aspects of our business—sales, marketing, finance, legal, etc.). Products are defined and designed collaboratively, rather than sequentially.
The purpose of product discovery is to quickly separate the good ideas from the bad. The output of product discovery is a validated product backlog. Specifically, this means getting answers to four critical questions: Will the user buy this (or choose to use it)? Can the user figure out how to use this? Can our engineers build this? Can our stakeholders support this?
So, we use prototypes to conduct rapid experiments in product discovery, and then in delivery, we build and release products in hopes of achieving product/market fit, which is a key step on the way to delivering on the company's product vision.
The MVP should be a prototype, not a product. Building an actual product‐quality deliverable to learn, even if that deliverable has minimal functionality, leads to substantial waste of time and money, which of course is the antithesis of Lean. I find that using the more general term prototype makes this critical point clear to the product team, the company, and the prospective customers. So, in this book, I talk about different types of prototypes being used in discovery and products being produced in delivery.
The honest truth is that the product manager needs to be among the strongest talent in the company. If the product manager doesn't have the technology sophistication, doesn't have the business savvy, doesn't have the credibility with the key executives, doesn't have the deep customer knowledge, doesn't have the passion for the product, or doesn't have the respect of their product team, then it's a sure recipe for failure.
Most product managers start their day with half an hour or so in the analytics tools, understanding what's been happening in the past 24 hours.
To summarize, these are the four critical contributions you need to bring to your team: deep knowledge (1) of your customer, (2) of the data, (3) of your business and its stakeholders, and (4) of your market and industry.
In strong teams today, the design informs the functionality at least as much as the functionality drives the design. This is a hugely important concept. For this to happen, we need to make design a first‐class member of the product team, sitting side by side with the product manager, and not a supporting service.
It is your job to make sure they feel like missionaries and not mercenaries. You do this by involving them deeply in the customer pain you are trying to solve and in the business problems you face. Don't try to shelter them from this—instead, share these problems and challenges very openly with them. They will respect you more for it, and, in most cases, the developers will rise to the challenge.
Especially with the qualitative learning, some of our research is generative, which is understanding the problems we need to solve; and some of our research is evaluative, which is assessing how well our solutions solve the problem.
instead of determining placement based solely on the price paid, they would use a formula that multiplied the price paid per impression with the ad's performance (click‐through‐rate) to determine placement, so that the best‐performing ads—the ones most likely to be most relevant to users—would rise to the top, and the worst ads would be unlikely to be displayed at all, even if they were sold at a higher price.
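The placement formula described above can be sketched in a few lines. This is an illustrative sketch only (the field names and numbers are hypothetical, not from the book): ads are ranked by expected value per impression, bid × click‐through rate, rather than by bid alone, so a better‐performing ad can outrank a higher‐priced one.

```python
def rank_ads(ads):
    """Sort ads by bid * CTR (expected revenue per impression), highest first."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

ads = [
    {"name": "A", "bid": 2.00, "ctr": 0.01},  # expected value per impression: 0.020
    {"name": "B", "bid": 0.50, "ctr": 0.08},  # expected value per impression: 0.040
    {"name": "C", "bid": 1.00, "ctr": 0.03},  # expected value per impression: 0.030
]

# B outranks the higher-priced A because it performs far better.
print([ad["name"] for ad in rank_ads(ads)])  # -> ['B', 'C', 'A']
```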
The minimum size for a product team is usually two engineers and a product manager, and if the team is responsible for user‐facing technology, then a product designer is needed, too. Fewer than that is considered below critical mass for a product team. On the other end, it's really difficult for one product manager and product designer to keep more than about 10–12 engineers busy with good stuff to build. Also, in case it's not clear, it's important that each product team have one, and only one, product manager.
The difference between vision and strategy is analogous to the difference between good leadership and good management. Leadership inspires and sets the direction, and management helps get us there. Most important, the product vision should be inspiring, and the product strategy should be focused.
we soon realized that the real reason sellers loved us was because we provided them with buyers. This realization led to a critical principle that stated, “In cases where the needs of the buyers and the sellers conflict, we will prioritize the needs of the buyer, because that's actually the most important thing we can do for sellers.”
the famous General George Patton quote I mentioned earlier: “Never tell people how to do things. Tell them what to do, and they will surprise you with their ingenuity.”
If you're a product manager—especially at a large company—and you're not good at evangelism, there's a very strong chance that your product efforts will get derailed before they see the light of day.
The purpose of product discovery is to address these critical risks: Will the customer buy this, or choose to use it? (Value risk) Can the user figure out how to use it? (Usability risk) Can we build it? (Feasibility risk) Does this solution work for our business? (Business viability risk)
fall in love with the problem, not the solution. Why is this so important? Because, more often than not, our initial solutions don't solve the problem—at least not in a way that can power a successful business. It usually takes trying out several different approaches to a solution before we find one that solves the underlying problem.
The idea is to answer four key questions about the discovery work you are about to undertake: What business objective is this work intended to address? (Objective) How will you know if you've succeeded? (Key results) What problem will this solve for our customers? (Customer problem) What type of customer are we focused on? (Target market)
So much product work fails because it tries to please everyone and ends up pleasing no one.
there are a few techniques that are central to how Amazon builds product, and one of them is referred to as the working backward process, where you start the effort with a pretend press release.
focus on a somewhat larger number of consumers (on the order of 10–50) that we engage with to get them to the point that they are loving our product.
The general rule of thumb is that if more than 40 percent of the users would be “very disappointed,” then there's a good chance you're at product/market fit.
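That 40 percent rule of thumb amounts to a simple calculation over survey responses. A minimal sketch, assuming responses to the question "How would you feel if you could no longer use this product?" have been collected as strings (the response labels and counts here are made up for illustration):

```python
def very_disappointed_share(responses):
    """Fraction of respondents who answered 'very disappointed'."""
    hits = sum(1 for r in responses if r == "very disappointed")
    return hits / len(responses)

responses = (["very disappointed"] * 45
             + ["somewhat disappointed"] * 35
             + ["not disappointed"] * 20)

share = very_disappointed_share(responses)
print(f"{share:.0%}")  # 45%
print("likely product/market fit" if share > 0.40 else "keep iterating")
```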
A Wizard of Oz prototype combines the front‐end user experience of a high‐fidelity user prototype with an actual person behind the scenes performing manually what would ultimately be handled by automation.
define in advance the set of tasks that you want to test. Usually, these are fairly obvious. If, for example, you're building an alarm clock app for a mobile device, your users will need to do things like set an alarm, find and hit the snooze button, and so on. There may also be more obscure tasks, but concentrate on the primary tasks—the ones that users will do most of the time.
Good product managers know they will get the product wrong initially and that nobody gets it right the first time. They know that learning from these tests is the fastest path to a successful product.
When you first start the actual usability test, make sure to tell your subject that this is just a prototype, it's a very early product idea, and it's not real. Explain that she won't be hurting your feelings by giving her candid feedback, good or bad.
See if they can tell from the landing page of your prototype what it is that you do, and especially what might be valuable or appealing to them. Again, once they jump into tasks, you'll lose that first‐time visitor context, so don't waste the opportunity.
keep your users in use mode and out of critique mode. What matters is whether users can easily do the tasks they need to do. It really doesn't matter if the user thinks something on the page is ugly or should be moved or changed.
avoid giving any help or leading the witness in any way. If you see the user scrolling the page up and down and clearly looking for something, it's okay to ask the user what specifically she's looking for, as that information is very valuable to you. Some people ask users to keep a running narration of what they're thinking, but I find this tends to put people in critique mode, as it's not a natural behavior.
tell them what they're doing: “I see that you're looking at the list on the right.” This will prompt them to tell you what they're trying to do, what they're looking for, or whatever it may be. If they ask a question, rather than giving a leading answer, you can play back the question to them. They ask, “Will clicking on this make a new entry?” and you ask in return, “You're wondering if clicking on this will make a new entry?” Usually, they will take it from there because they'll want to answer your question: “Yeah, I think it will.” Parroting also helps avoid leading value judgments.
fake door demand test. The idea is that we put the button or menu item into the user experience exactly where we believe it should be. But, when the user clicks that button, rather than taking the user to the new feature, it instead takes the user to a special page that explains that you are studying the possibility of adding this new feature, and you are seeking customers to talk to about this.
landing page demand test. We describe that new offering exactly as we would if we were really launching the service. The difference is that if the user clicks the call to action, rather than signing up for the trial (or whatever the action might be), the user sees a page that explains that you are studying the possibility of adding this new offering, and you'd like to talk with them about that new offering if they're willing.
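Both demand tests reduce to the same measurement: how often users click the entry point for the prospective feature relative to how often they see it. A minimal sketch of that calculation, assuming instrumentation events have already been logged (the event names and counts are hypothetical):

```python
def demand_rate(events):
    """Clicks on the fake-door entry point divided by impressions."""
    shown = sum(1 for e in events if e == "fake_door_shown")
    clicked = sum(1 for e in events if e == "fake_door_clicked")
    return clicked / shown if shown else 0.0

# Example log: the entry point was shown 200 times and clicked 18 times.
events = ["fake_door_shown"] * 200 + ["fake_door_clicked"] * 18
print(f"demand rate: {demand_rate(events):.1%}")  # demand rate: 9.0%
```

In practice, you would compare this rate against the click‐through of comparable, real features to judge whether demand justifies building the new one.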
If our customer service, professional services, or sales staff are blindsided by constant change, it makes it very hard for them to do their jobs and take good care of customers.
If you try to do a value test without giving the user or customer the opportunity to learn how to use the product, then the value test becomes more like a focus group where people talk hypothetically about your product, and try to imagine how it might work.
focus groups might be helpful for gaining market insights, but they are not helpful in discovering the product we need to deliver (see Product Discovery Principle #1).
One technique I like for gauging value is to see if the user would be willing to pay for it, even if you have no intention of charging them for it.
But there are other ways a user can “pay” for a product. You can see if they would be willing to pay with their reputation. You can ask them how likely they'd be to recommend the product.
data often catches us off guard. We have a set of assumptions about how the product is used—most of which we are not even conscious of—and when we see the data, we're surprised that it doesn't track with those assumptions. It's these surprises that lead to real progress.
User behavior analytics (click paths, engagement)
Business analytics (active users, conversion rate, lifetime value, retention)
Financial analytics (ASP, billings, time to close)
Performance (load time, uptime)
Operational costs (storage, hosting)
Go‐to‐market costs (acquisition costs, cost of sales, programs)
Sentiment (NPS, customer satisfaction, surveys)
I'm very big on radically simplifying products by removing features that don't carry their own weight.
Holding a weekly planning meeting where you throw a bunch of ideas at the engineers—and demand they give you some sort of estimate either in time, story points, or any other unit of effort—is almost certain to go badly. If you put an engineer on the spot, without time to investigate and consider, you are very likely to get a conservative answer, partly designed to make you go away.
If, however, the engineers have been following along as the team has tried out these ideas with customers (using prototypes) and seen what the issues are and how people feel about these ideas, the engineers have probably already been considering the issues for some time.
The question isn't, “Can you do this?” Rather, you are asking them to look into it and answer the question, “What's the best way to do this and how long would it take?”
A user test is when we test our product ideas on real users and customers. It is a qualitative usability and value‐testing technique, and we let the user drive. The purpose is to test the usability and value of the prototype or product.
A product demo is when you sell your product to prospective users and customers, or evangelize your product across your company. This is a sales or persuasive tool. Product marketing usually creates a carefully scripted product demo, but the product manager will occasionally be asked to give the product demo—especially with high‐value customers or execs. In this case, the product manager does the driving. The purpose is to show off the value of the prototype or product.
A walkthrough is when you show your prototype to a stakeholder and you want to make sure they see and note absolutely everything that might be a concern. The purpose is to give the stakeholder every opportunity to spot a problem. The product manager usually drives, but if the stakeholder would like to play with the prototype we are happy to let them. You are not trying to sell them anything, you're not trying to test on them, and you're definitely not trying to hide anything from them.
They also rewrote the billing system to handle the monthly subscription model (a funny little side story is that they launched without this, as they had the 30‐day free trial month, which bought them the extra time they needed).
A discovery sprint is a one‐week time box of product discovery work, designed to tackle a substantial problem or risk your product team is facing.
After working with more than 100 product teams, and refining their methods as they learned what worked well and what didn't, the GV team decided to share their learnings in a book. The book is titled Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days.

