Kindle Notes & Highlights
by Marty Cagan
Read between July 23 - August 12, 2018
Note also that the vision is not in any sense a spec. It's mainly a persuasive piece that might be in the form of a storyboard, a narrative such as a white paper, or a special type of prototype referred to as a visiontype.
It is not very hard to identify the important trends. What's hard is to help the organization understand how those trends can be leveraged by your products to solve customer problems in new and better ways.
There is no such thing as over‐communicating when it comes to explaining and selling the vision. Especially in larger organizations, there is simply no escaping the need for near‐constant evangelization. You'll find that people in all corners of the company will at random times get nervous or scared about something they see or hear. Quickly reassure them before their fear infects others.
Where the product vision describes the future you want to create, and the product strategy describes your path to achieving that vision, the product principles speak to the nature of the products you want to create.
If different functional organizations (such as design, engineering, or quality assurance) have larger objectives (such as responsive design, technical debt, and test automation), they should be discussed and prioritized at the leadership team level along with the other business objectives, and then incorporated into the relevant product team's objectives.
We need a single solution that works for many customers, and not a series of specials. To do this, we need to be able to test out many ideas, and we need to do this quickly and inexpensively.
It's understandable that many people might naturally view these two difficult goals as at odds with each other. We are in a big hurry to push something out to learn what works and what doesn't. Yet, we don't want to release something that's not ready for prime time and risk hurting our customers and damaging our brand.
The purpose of product discovery is to address these critical risks: Will the customer buy this, or choose to use it? (Value risk) Can the user figure out how to use it? (Usability risk) Can we build it? (Feasibility risk) Does this solution work for our business? (Business viability risk)
Customers don't know what's possible, and with technology products, none of us know what we really want until we actually see it.
We need to validate the feasibility of our ideas during discovery, not after. If the first time your developers see an idea is at sprint planning, you have failed. We need to ensure the feasibility before we decide to build, not after.
The first is to ensure the team is all on the same page in terms of clarity of purpose and alignment. In particular, we need to agree on the business objective we're focused on and the specific problem we are intending to solve for our customers.
The second purpose is to identify the big risks that will need to be tackled during the discovery work.
We must also consider value risk—do the customers want this particular problem solved and is our proposed solution good enough to get people to switch from what they have now?
But one of the most important lessons in our industry is to fall in love with the problem, not the solution.
In this section, I describe two of my favorite discovery‐planning techniques. One is simple (story maps), and the other is fairly complicated (customer discovery program), but they are both remarkably powerful and effective.
Another must‐read book for product managers: User Story Mapping: Discover the Whole Story, Build the Right Product, by Jeff Patton (O'Reilly Media, 2014).
The reason I love the customer discovery program technique so much is because it is designed to produce these reference customers.
But I will also say that if you do this technique, I consider it the single best leading indicator of future product success.
There are four main variations of this technique for four different situations: building products for businesses; building platform products (e.g., public APIs); building customer‐enabling tools used by employees of your company; and building products for consumers.
The basic driver behind this technique is that, with a significant new product, the most common objection is that prospective customers want to see that other companies, like themselves, are already successfully using the product.
“How do we generate the types of ideas that are likely to truly help us solve the hard business problems that our leaders have asked us to focus on right now?”
Are your customers who you think they are? Do they really have the problems you think they have? How does the customer solve this problem today? What would be required for them to switch?
It's critical to learn the answers to these key questions. However, I am a big fan of taking the opportunity of a customer interview to also try out some of our product ideas. We do that after we've learned the answers to these key questions.
A concierge test is a relatively new name to describe an old but effective technique. The idea is that we do the customer's job for them—manually and personally.
Historically, the two main approaches used by good teams to come up with product opportunities have been to (1) assess the market opportunities and pick potentially lucrative areas where significant pain exists, or (2) look at what the technology or data enables—what's just now possible—and match that up with the significant pain.
This third alternative is to allow, and even encourage, our customers to use our products to solve problems other than what we planned for and officially support.
If you find your customers using your product in ways you didn't predict, this is potentially very valuable information. Dig in a little and learn what problem they are trying to solve and why they believe your product might provide the right foundation.
The two main types of hack days are directed and undirected.
The goal is for the self‐organizing groups to explore their ideas and create some form of prototype that can be evaluated, and if appropriate, tested on actual users.
The second benefit is cultural. This is one of my favorite techniques for building a team of missionaries rather than mercenaries. The engineers, if they haven't already, are now diving much deeper into the business context and playing a much larger role in terms of innovation.
But there are in fact many very different forms of prototypes, each with different characteristics and each suited to testing different things. And, yes, some people get themselves into trouble trying to use the wrong type of prototype for the job at hand.
Feasibility Prototypes These are written by engineers to address technical feasibility risks during product discovery—before we decide whether something is feasible.
User prototypes are simulations.
The main purpose of a live‐data prototype is to collect actual data so we can prove something, or at least gather some evidence.
One of the key benefits of any form of prototype is that it forces you to think through a problem at a substantially deeper level than if you just talk about it or write something down.
Members of the product team and business partners can all experience the prototype to develop shared understanding.
The principle is that we create the right level of fidelity for its intended purpose, and we acknowledge that lower fidelity is faster and cheaper than higher fidelity, so we only do higher fidelity when we need to.
Whenever you hear stories of product teams that grossly underestimated the amount of work required to build and deliver something, this is usually the underlying reason.
A low‐fidelity user prototype doesn't look real—it is essentially an interactive wireframe. Many teams use these as a way to think through the product among themselves.
The big limitation of a user prototype is that it's not good for proving anything—like whether or not your product will sell.
The live‐data prototype is substantially smaller than the eventual product, and the bar is dramatically lower in terms of quality, performance, and functionality. It needs to run well enough to collect data for some very specific use cases, and that's about it.
So, if the live‐data tests go well, and you decide to move forward and productize, you will need to allow your engineers to take the time required to do the necessary delivery work. It is definitely not okay for the product manager to tell the engineers that this is “good enough.”
What's important is that actual users will use the live‐data prototype for real work, and this will generate real data (analytics) that we can compare to our current product—or to our expectations—to see if this new approach performs better.
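The highlight above is about evidence rather than implementation, but as an illustration (not from the book), comparing a live‐data prototype's analytics against the current product can be as simple as a two‐proportion z‐test on a key conversion metric. All numbers, names, and the choice of metric here are hypothetical:

```python
import math

def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the key task."""
    return conversions / visitors

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score: is the prototype's conversion rate
    meaningfully different from the current product's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical analytics from a limited live-data test
current = (120, 2000)   # 120 of 2,000 users completed the task
prototype = (95, 1000)  # 95 of 1,000 prototype users did

z = two_proportion_z(*current, *prototype)
print(f"current:   {conversion_rate(*current):.1%}")    # 6.0%
print(f"prototype: {conversion_rate(*prototype):.1%}")  # 9.5%
print(f"z-score:   {z:.2f}")  # |z| > 1.96 ~ significant at 95%
```

The point is only that a live‐data prototype, unlike a user prototype, produces numbers you can test against a baseline; the specific statistical method would depend on the metric and sample size.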
First, we will usually assess value. This is often the toughest—and most important—question to answer, and if the value isn't there, not much else matters.
When you first start the actual usability test, make sure to tell your subject that this is just a prototype, it's a very early product idea, and it's not real. Explain that she won't be hurting your feelings by giving her candid feedback, good or bad. You're testing the ideas in the prototype, you're not testing her. She can't pass or fail—only the prototype can pass or fail.
What matters is whether users can easily do the tasks they need to do. It really doesn't matter if the user thinks something on the page is ugly or should be moved or changed.
During the testing, the main skill you have to learn is to keep quiet.
There are three important cases you're looking for: (1) the user got through the task with no problem at all and no help; (2) the user struggled and moaned a bit, but he eventually got through it; or (3) he got so frustrated he gave up.
Act like a parrot. This helps in many ways. First, it helps avoid leading. If they're quiet and you really can't stand it because you're uncomfortable, tell them what they're doing.
As soon as you think you've identified an issue, just fix it in the prototype. There's no law that says you have to keep the test identical for all of your test subjects. That kind of thinking stems from misunderstanding the role this type of qualitative testing plays. We're not trying to prove anything here; we're just trying to learn quickly.