Kindle Notes & Highlights
by Eric Ries
If the company is making good progress toward the ideal, that means it's learning appropriately and using that learning effectively, in which case it makes sense to continue.
When a company pivots, it starts the process all over again, reestablishing a new baseline and then tuning the engine from there.
Once the baseline has been established, the startup can work toward the second learning milestone: tuning the engine. Every product development, marketing, or other initiative that a startup undertakes should be targeted at improving one of the drivers of its growth model.
This is an important rule: a good design is one that changes customer behavior for the better.
fact: if we’re not moving the drivers of our business model, we’re not making progress. That becomes a sure sign that it’s time to pivot.
Once our efforts were aligned with what customers really wanted, our experiments were much more likely to change their behavior for the better.
this is the sign of a successful pivot: the new experiments you run are overall more productive than the experiments you were running before.
poor quantitative results force us to declare failure and create the motivation, context, and space for more qualitative research.
the cycle repeats. Each time we repeat this simple rhythm: establish the baseline, tune the engine, and make a decision to pivot or persevere.
these tools for product improvement do not work the same way for startups. If you are building the wrong thing, optimizing the product or its marketing will not yield significant results.
A startup has to measure progress against a high bar: evidence that a sustainable business can be built around its products or services.
Even worse, the team had no clear sense of whether any of the changes they were making mattered to customers.
Learning milestones prevent this negative spiral by emphasizing a more likely possibility: the company is executing—with discipline!—a plan that does not make sense.
unasked and unanswered were other lurking questions: Did the company have a working engine of growth? Was this early success related to the daily work of the product development team?
None of its current initiatives were having any impact. But this was obscured because the company’s gross metrics were all “up and to the right.”
Companies of any size that have a working engine of growth can come to rely on the wrong kind of metrics.
call the traditional numbers used to judge startups “vanity metrics,” and innovation accounting requires us to avoid the temptation to use them.
Grockit is an excellent case study because its problems were not a matter of failure of execution or discipline.
cohort-based metrics, and instead of looking for cause-and-effect relationships after the fact, Grockit would launch each new feature as a true split-test experiment.
A split-test experiment is one in which different versions of a product are offered to customers at the same time. By observing the changes in behavior between the two groups, one can make inferences about the impact of the different variations.
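As a rough illustration (not from the book), a minimal split-test could be sketched in Python as below; the user IDs, group names, and conversion flags are all invented:

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically hash each user into 'control' or 'variant',
    so every user always sees the same version of the product."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "control" if bucket == 0 else "variant"

def conversion_rate(events, group):
    """Fraction of users in `group` whose target behavior occurred."""
    users = [e for e in events if e["group"] == group]
    hits = [e for e in users if e["converted"]]
    return len(hits) / len(users) if users else 0.0

# Invented event log: one record per customer in the experiment.
events = [
    {"user": u, "group": assign_variant(u), "converted": u.endswith("1")}
    for u in ("u1", "u2", "u3", "u4")
]

print("control:", conversion_rate(events, "control"))
print("variant:", conversion_rate(events, "variant"))
```

Because both groups run at the same time, any difference in behavior can be attributed to the variation rather than to seasonality or other after-the-fact confounders.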
kanban, or capacity constraint,
user stories were not considered complete until they led to validated learning. Thus, stories could be cataloged as being in one of four states of development: in the product backlog, actively being built, done (feature complete from a technical point of view), or in the process of being validated.
The kanban rule permitted only so many stories in each of the four states.
The only way to start work on new features is to investigate some of the stories that are done but haven’t been validated. That often requires nonengineering efforts: talking to customers, looking at split-test data, and the like.
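A minimal sketch of such a kanban rule, with invented state names and a hypothetical capacity of three stories per state: a story can only advance when the next state has room, and validated stories leave the board to free capacity.

```python
CAPACITY = 3  # hypothetical limit on stories per state

class KanbanBoard:
    STATES = ["backlog", "in_progress", "built", "validated"]

    def __init__(self):
        self.states = {s: [] for s in self.STATES}

    def add(self, story: str) -> bool:
        """New stories enter the backlog only if it has room."""
        return self._move(story, "backlog")

    def advance(self, story: str) -> bool:
        """Advance a story one state, but only if the next state has
        free capacity -- the kanban constraint. A 'built' story that
        cannot move means validation work must happen first."""
        for i, state in enumerate(self.STATES[:-1]):
            if story in self.states[state]:
                if self._move(story, self.STATES[i + 1]):
                    self.states[state].remove(story)
                    return True
                return False
        return False

    def complete_validation(self, story: str) -> None:
        """Validated stories leave the board, freeing capacity."""
        self.states["validated"].remove(story)

    def _move(self, story: str, state: str) -> bool:
        if len(self.states[state]) >= CAPACITY:
            return False
        self.states[state].append(story)
        return True
```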
As engineers look for ways to increase their productivity, they start to realize that if they include the validation exercise from the beginning, the whole team can be more productive.
Grockit had undertaken this extra effort because lazy registration was considered an industry best practice.
I encouraged the team to try a simple split-test. They took one cohort of customers and required that they register immediately, based on nothing more than Grockit's marketing materials. To their surprise, this cohort's behavior was exactly the same as that of the lazy registration group: they had the same rate of registration, activation, and subsequent retention. In other words, the extra effort of lazy registration was a complete waste.
this test suggested: customers were basing their decision about Grockit on something other than the product itself.
This suggested that improving Grockit's positioning and marketing might have a more significant impact on attracting new customers than would adding new features.
For a report to be considered actionable, it must demonstrate clear cause and effect.
When cause and effect is clearly understood, people are better able to learn from their actions.
First, make the reports as simple as possible so that everyone understands them.
easiest way to make reports comprehensible is to use tangible, concrete units.
In other words, the report deals with people and their actions, which are far more useful than piles of data points.
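For illustration only, a report in these "people and actions" terms might read like the sketch below; the cohort labels and counts are made up:

```python
# Invented numbers: two weekly cohorts of new customers, reported as
# "X% of the people who signed up did Y," not as cumulative totals.
cohorts = {
    "week of Mar 7":  {"signed_up": 1000, "activated": 310, "returned": 120},
    "week of Mar 14": {"signed_up": 1000, "activated": 340, "returned": 150},
}

for label, c in cohorts.items():
    n = c["signed_up"]
    print(f"{label}: {c['activated'] / n:.0%} activated, "
          f"{c['returned'] / n:.0%} came back the following week")
```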
Accessibility also refers to widespread access to the reports.
The reports were available on our website, accessible to anyone with an employee account.
see a simple one-page summary of the results.
those one-page summaries became the de facto standard for settling product arguments throughout the organization.
We must ensure that the data is credible to employees.
We need to be able to test the data by hand, in the messy real world, by talking to customers.
Managers need the ability to spot check the data with real customers.
those building reports must make sure the mechanisms that generate the reports are not too complex.
reports should be drawn directly from the master data.
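One way such a rule might be honored, sketched here with an invented event-log format: derive each report on demand from the raw per-customer records, so every number traces back to real people who can be spot-checked.

```python
# Invented master event log: one raw record per customer action.
master_log = [
    {"customer": "alice", "action": "registered"},
    {"customer": "alice", "action": "purchased"},
    {"customer": "bob",   "action": "registered"},
]

def report(action: str) -> list[str]:
    """Derive the report directly from the raw events: the output is a
    list of actual customers, so a manager can call them to verify it."""
    return sorted({e["customer"] for e in master_log if e["action"] == action})

print("registered:", report("registered"))
print("purchased:", report("purchased"))
```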
The other 95 percent is the gritty work that is measured by innovation accounting: product prioritization decisions, deciding which customers to target or listen to, and having the courage to subject a grand vision to constant testing and feedback.
We all must face this fundamental test: deciding when to pivot and when to persevere.
are we making sufficient progress to believe that our original strategic hypothesis is correct, or do we need to make a major change? That change is called a pivot: a structured course correction designed to test a new fundamental hypothesis about the product, strategy, and engine of growth.