Kindle Notes & Highlights
by Eric Ries
Read between December 14, 2020 - March 11, 2021
A split-test experiment is one in which different versions of a product are offered to customers at the same time. By observing the changes in behavior between the two groups, one can make inferences about the impact of the different variations.
Split testing often uncovers surprising things. For example, many features that make the product better in the eyes of engineers and designers have no impact on customer behavior.
Split testing also helps teams refine their understanding of what customers want and don’t want.
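As a minimal sketch of the mechanics, here is how such a split test might be wired up in Python. The assignment rule, metric, and numbers are hypothetical illustrations, not from the book:

    import hashlib

    def assign_variant(user_id: str, experiment: str) -> str:
        # Hash the user id with the experiment name so each customer
        # lands in the same group on every visit, with no stored state.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return "treatment" if int(digest, 16) % 2 == 0 else "control"

    def conversion_rate(conversions: int, visitors: int) -> float:
        return conversions / visitors if visitors else 0.0

    # Hypothetical tallies for two groups offered different versions
    # at the same time; the comparison is behavior, not opinion.
    control = conversion_rate(conversions=120, visitors=2400)    # 5.0%
    treatment = conversion_rate(conversions=118, visitors=2390)  # ~4.9%
    print(f"control={control:.1%} treatment={treatment:.1%}")

A near-identical treatment rate, as here, is exactly the "no impact on customer behavior" result described above.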
Under the new system, user stories were not considered complete until they led to validated learning.
Thus, stories could be cataloged as being in one of four states of development: in the product backlog, actively being built, done (feature complete from a technical point of view), or in the process of being validated.
Validated was defined as “knowing whether the story was a good idea to have been ...
This validation usually would come in the form of a split test showing a change in customer behavior but also might include...
If the validation fails and it turns out the story is a bad idea, the relevant feature is removed from the product (see the chart on this page).
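Read as a whole, this story lifecycle is a small state machine. A hedged Python sketch, with the state names and the strictly forward transition rule as my own assumptions:

    from enum import Enum, auto

    class StoryState(Enum):
        BACKLOG = auto()      # in the product backlog
        IN_PROGRESS = auto()  # actively being built
        DONE = auto()         # feature complete, technically speaking
        VALIDATED = auto()    # a split test confirmed the hypothesis

    # A story counts as complete only at VALIDATED; a failed
    # validation removes the feature rather than advancing the story.
    NEXT = {
        StoryState.BACKLOG: StoryState.IN_PROGRESS,
        StoryState.IN_PROGRESS: StoryState.DONE,
        StoryState.DONE: StoryState.VALIDATED,
    }

    def advance(state: StoryState) -> StoryState:
        if state not in NEXT:
            raise ValueError(f"{state.name} is terminal")
        return NEXT[state]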
I have implemented this system with several teams, and the initial result is always frustrating: each bucket fills up, starting with the “validated” bucket and moving on to the “done” bucket, until it’s not possible to start any more work.
Teams that are used to measuring their productivity narrowly, by the number of stories they are delivering, feel stuck.
Pretty soon everyone gets the hang of it. This progress occurs in fits and starts at first. Engineering may finish a big batch of work, followed by extensive testing and validation. As engineers look for ways to increase their productivity, they start to realize that if they include the validation exercise from the beginning, the whole team can be more productive.
The same logic applies to a story that an engineer doesn’t understand. Under the old system, he or she would just build it and find out later what it was for. In the new system, that behavior is clearly counterproductive: without a clear hypothesis, how can a story ever be validated?
Most important, teams working in this system begin to measure their productivity according to validated learning, not in terms of the production of new features.
They took one cohort of customers and required that they register immediately, based on nothing more than Grockit’s marketing materials. To their surprise, this cohort’s behavior was exactly the same as that of the lazy registration group: they had the same rate of registration, activation, and subsequent retention. In other words, the extra effort of lazy registration was a complete waste even though it was considered an industry best practice.
Even more important than reducing waste was the insight that this test suggested: customers were basing their decision about Grockit on something other than their use of the product.
For a report to be considered actionable, it must demonstrate clear cause and effect. Otherwise, it is a vanity metric.
Vanity metrics wreak havoc because they prey on a weakness of the human mind. In my experience, when the numbers go up, people think the improvement was caused by their actions, by whatever they were working on at the time.
Unfortunately, when the numbers go down, it results in a very different reaction: now it’s somebody else’s fault.
Thus, most team members or departments live in a world where their department is constantly making things better, only to have their hard work sabotaged by other departments that just don’t get it.
Actionable metrics are the antidote to this problem. When cause and effect is clearly understood, people are better able to learn from their actions.
Departments too often spend their energy learning how to use data to get what they want rather than as genuine feedback to guide their future actions.
There is an antidote to this misuse of data. First, make the reports as simple as possible so that everyone understands them.
The easiest way to make reports comprehensible is to use tangible...
This is why cohort-based reports are the gold standard of learning metrics: they turn complex actions into people-based reports.
It is hard to visualize what it means if the number of website hits goes down from 250,000 in one month to 200,000 the next month, but most people understand immediately what it means to lose 50,000 customers.
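To make that concrete, a small Python sketch of a cohort-based report; the activity log and monthly granularity are invented for the example:

    # Hypothetical activity log: (user_id, signup_month, active_month).
    events = [
        ("u1", "2021-01", "2021-01"), ("u1", "2021-01", "2021-02"),
        ("u2", "2021-01", "2021-01"),
        ("u3", "2021-02", "2021-02"), ("u3", "2021-02", "2021-03"),
    ]

    def cohort_retention(events):
        # Group distinct users by signup month, then by month of activity.
        cohorts = {}
        for user, signup, active in events:
            cohorts.setdefault(signup, {}).setdefault(active, set()).add(user)
        # Retention: share of each signup cohort still active each month.
        report = {}
        for signup, months in sorted(cohorts.items()):
            base = len(months[signup])
            report[signup] = {m: len(u) / base for m, u in sorted(months.items())}
        return report

    for cohort, row in cohort_retention(events).items():
        print(cohort, {m: f"{r:.0%}" for m, r in row.items()})
    # 2021-01 {'2021-01': '100%', '2021-02': '50%'}
    # 2021-02 {'2021-02': '100%', '2021-03': '100%'}

Each row reads as people rather than hits: half the January cohort came back in February.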
Accessibility also refers to widespread access to the reports.
The reports were well laid out and easy to read, with each experiment and its results explained in plain English.
Each employee could log in to the system at any time, choose from a list of all current and past experiments, and see a simple one-page summary of the results.
Over time, those one-page summaries became the de facto standard for settling product arguments throughout the organization.
When informed that their pet project is a failure, most of us are tempted to blame the messenger, the data, the manager, the gods, or anything else we can think of.
That’s why the third A of good metrics, “auditable,” is so essential. We must ensure that the data is credible to employees.
More often, the lack of such supporting documentation is simply a matter of neglect.
The solution? First, remember that “Metrics are people, too.” We need to be able to test the data by hand, in the messy real world, by talking to customers.
Managers need the ability to spot check the data with real customers.
It also has a second benefit: systems that provide this level of auditability give managers and entrepreneurs the opportunity to gain insights into why customers...
Second, those building reports must make sure the mechanisms that generate the reports...
I have noticed that every time a team has one of its judgments or assumptions overturned as a result of a technical problem with the data, its confidence, morale, and discipline are undermined.
Only 5 percent of entrepreneurship is the big idea, the business model, the whiteboard strategizing, and the splitting up of the spoils.
The other 95 percent is the gritty work that is measured by innovation accounting: product prioritization decisions, deciding which customers to target or listen to, and having the courage to subject a grand vision to constant testing and feedback.
Every entrepreneur eventually faces an overriding challenge in developing a successful product: deciding when to pivot and when to persevere.
There is no way to remove the human element—vision, intuition, judgment—from the practice of entrepreneurship, nor would that be desirable.
David faced the difficult challenge of deciding whether to pivot or persevere. This is one of the hardest decisions entrepreneurs face.
David was still stuck in an age-old entrepreneurial trap. His metrics and product were improving, but not fast enough.
Seasoned entrepreneurs often speak of the runway that their startup has left: the amount of time remaining in which a startup must either achieve lift-off or fail.
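The usual arithmetic, offered here as my gloss rather than the book's, divides remaining cash by net monthly burn:

    def runway_months(cash_on_hand: float, monthly_burn: float) -> float:
        # Months left before the money runs out at the current burn rate.
        return cash_on_hand / monthly_burn

    print(runway_months(cash_on_hand=600_000, monthly_burn=50_000))  # 12.0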
First, vanity metrics can allow entrepreneurs to form false conclusions and live in their own private reality. This is particularly damaging to the decision to pivot because it robs teams of the belief that it is necessary to change.
Second, when an entrepreneur has an unclear hypothesis, it’s almost impossible to experience complete failure, and without failure there is usually no impetus to embark on the radical change a pivot requires.
Third, many entrepreneurs are afraid. Acknowledging failure can lead to dangerously low morale. Most entrepreneurs’ biggest fear is not that their vision will prove to be wrong. More terrifying is the thought that the vision might be deemed wrong without having been given a real chance to prove itself.
The team worked valiantly to find ways to improve the product, but none showed any particular promise. It was time for a pivot or persevere meeting.
This is also common with pivots; it is not necessary to throw out everything that came before and start over.

