Kindle Notes & Highlights
Read between December 2, 2022 and March 19, 2023
Product teams are particularly susceptible to confirmation bias and the escalation of commitment. We tend to fall in love with our ideas. We often have to defend our ideas to stakeholders, further entrenching our commitment to our ideas. We tend to seek out why our ideas will work and forget to explore why they might not work. As a result, we are often overconfident about the success of our ideas.
Marty Cagan, author of INSPIRED, highlights that the best product teams complete a dozen or more discovery iterations every week. This pace is possible only when we step away from the concept of testing ideas and instead focus on testing the assumptions that need to be true in order for our ideas to succeed. By explicitly enumerating our assumptions, we can start to look for both confirming and disconfirming evidence to either support or refute each assumption.
Desirability assumptions: Does anyone want it? Will our customers get value from it? As we create solutions, we assume that our customers will want to use our solution, that they will be willing to do the things that we need them to do, and that they’ll trust us to provide those solutions. All of these types of assumptions fall into the desirability category.
Viability assumptions: Should we build it? There are many ideas that will work for our customers but won’t work for our business. If we want to continue to serve customers over time, we need to make sure that our solutions are viable—that they create a return for our business. This typically means that the idea will generate more revenue than it will cost to build, service, and maintain. However, some ideas are designed to be loss leaders and instead contribute to another business goal besides revenue.
Feasibility assumptions: Can we build it? We primarily think about feasibility as technical feasibility. Is it technically possible? Feasibility assumptions, however, can also include, “What’s feasible for our business?” For example, will our legal or security team allow for it? Will our culture support it? Does it comply with regulations?
Usability assumptions: Is it usable? Can customers find what they need? Will they understand how to use it or what they need to do? Are they able to do what we need them to do? Is it accessible?
Ethical assumptions: Is there any potential harm in building this idea? This is an area that is grossly underdeveloped for many product trios. As an industry, we need to do a better job of asking questions like: What data are we collecting? How are we storing it? How are we using it? If our customers had full transparency to those answers, would they be okay with it?
One of my favorite questions to ask teams is, “If the New York Times/Wall Street Journal/BBC (or insert your favorite news organization) ran a front-page story about this solution that included your internal conversations about how the solution would work, what data you collected, how you used it, and how different players in the ecosystem benefited or didn’t, would that be a good thing? If not, why not?” This is a great way to uncover ethical assumptions.
A strong assumption test simulates an experience, giving your participant the opportunity to behave either in accordance with your assumption or not. This behavior is what allows us to evaluate our assumption.
Now remember, you aren’t trying to prove that this assumption is true. The burden of truth is too much. You are simply trying to reduce risk. Keep your assumption map in mind. Your goal is to move the assumption from right to left. How many people behaving in accordance with the assumption would convince you that it is more known? That’s the negotiation you are having as a team.
With assumption testing, most of our learning comes from failed tests. That’s when we learn that something we thought was true might not be. Small tests give us a chance to fail sooner. Failing faster is what allows us to quickly move on to the next assumption, idea, or opportunity. In the opening quote, Karl Popper, a renowned 20th-century philosopher of science, argues, “Good tests kill flawed theories,” preventing us from investing where there is little reward, and “we remain alive to guess again,” giving us another chance to get it right.
...dirty research methods. However, product teams are not scientists. Scientists work to understand the fundamental laws of the universe. They are seeking truths, creating new knowledge. In science (and the rest of academia), truth is determined over decades. Research studies are designed and replicated by a community of scientists. Truth starts to emerge from a meta-analysis of years of research.
It’s exhilarating when our solutions start to work. It feels good when customers engage with what we build. But sadly, satisfying a customer need is not our only job. We need to remember that our goal is to satisfy customer needs while creating value for our business. We are constrained by driving our desired outcome. This is what allows us to create viable products, and viable products allow us to continue to serve our customers. So, when you find a compelling solution, remember to walk the lines of your opportunity solution tree. Desirability isn’t enough. Viability is the key to long-term ...
Unfortunately, it’s not enough to drive product outcomes. The connection between our product outcome and our business outcome is a theory that needs to be tested. As you build a history of driving a product outcome, you need to remember to evaluate if driving the product outcome is, in turn, driving the business outcome.
When you frame the conversation in the solution space, you are framing the conversation to be about your opinion about what to build versus your stakeholders’ opinion about what to build. If your stakeholders are more senior than you, odds are their opinion is going to win. This is why we have the dreaded HiPPO acronym (the Highest Paid Person’s Opinion) and the saying “The HiPPO always wins.” Many product trios complain about the HiPPO but miss the role they play in creating this situation.
There’s a cognitive bias that comes into play when we do this. It’s called the curse of knowledge. Once we know something (as we do in this situation, where we have a wealth of discovery work that supports our point of view), it’s hard for us to remember what it was like not to have that knowledge. In fact, our conclusions—our roadmaps, our backlogs, our release plans—start to seem obvious. We forget not only that they aren’t obvious to our stakeholders, but also that our stakeholders very likely have their own conclusions that seem obvious to them.
No matter how strong your discovery process is, there will still be times when your stakeholders swoop in and ask you to do things their way. If they are more senior than you in the corporate hierarchy, that’s their prerogative. What you can control is how you respond. I strongly recommend that you don’t turn the conversation into an ideological battle. In fact, if you ever catch yourself saying, “This is the way it’s supposed to be done,” take a deep breath, and walk away from the conversation. You aren’t going to win the ideological war in one conversation.
When you are asked to deliver a specific solution, work backward. Take the time to consider, “If our customers had this solution, what would it do for them?” If you are talking to customers regularly, ask them. Try to uncover the implied opportunity. Even if it’s a wild guess, starting to consider customer needs, pain points, and desires will help you deliver a better solution. You can apply the same question to your business to uncover the implied outcome, “If we shipped this feature, what value would it create for our business?” Refine your answer until you get to a clear metric—that’s your ...