The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses
A split-test experiment is one in which different versions of a product are offered to customers at the same time. By observing the changes in behavior between the two groups, one can make inferences about the impact of the different variations.
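The mechanics described here can be sketched in a few lines of Python. The hash-based assignment and both function names are illustrative assumptions, not anything specified in the book; the key properties are that each user is assigned to exactly one variant, the assignment is stable, and both variants run at the same time:

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant by hashing their ID,
    so the same user always sees the same version of the product."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rate(events):
    """events: (variant, converted) pairs observed over the same period.
    Comparing the per-variant rates is the basis for inferring the
    impact of each variation on customer behavior."""
    totals, hits = {}, {}
    for variant, converted in events:
        totals[variant] = totals.get(variant, 0) + 1
        hits[variant] = hits.get(variant, 0) + (1 if converted else 0)
    return {v: hits[v] / totals[v] for v in totals}
```

In practice one would also apply a significance test before acting on the difference between the two rates.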
Split testing often uncovers surprising things. For example, many features that make the product better in the eyes of engineers and designers have no impact on customer behavior.
Split testing also helps teams refine their understanding of what customers want and don’t want.
Under the new system, user stories were not considered complete until they led to validated learning.
Thus, stories could be cataloged as being in one of four states of development: in the product backlog, actively being built, done (feature complete from a technical point of view), or in the process of being validated.
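The four states read naturally as a small state machine. The sketch below is purely illustrative; the class names, the `advance`/`validate` methods, and the removal message are assumptions, not Ries's actual tooling. It captures the key rule from the text: "done" is not the final state, and a story that fails validation is removed rather than kept:

```python
from enum import Enum

class StoryState(Enum):
    BACKLOG = "in the product backlog"
    IN_PROGRESS = "actively being built"
    DONE = "feature complete from a technical point of view"
    VALIDATING = "in the process of being validated"

class Story:
    def __init__(self, title: str):
        self.title = title
        self.state = StoryState.BACKLOG

    def advance(self):
        """Move the story to the next state in the pipeline."""
        order = list(StoryState)
        i = order.index(self.state)
        if i < len(order) - 1:
            self.state = order[i + 1]

    def validate(self, split_test_passed: bool) -> str:
        """A story is complete only after validated learning; a failed
        validation means the feature is removed from the product."""
        assert self.state is StoryState.VALIDATING
        return "validated" if split_test_passed else "removed from product"
```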
Validated was defined as “knowing whether the story was a good idea to have been ...
This highlight has been truncated due to consecutive passage length restrictions.
This validation usually would come in the form of a split test showing a change in customer behavior but also might inclu...
If the validation fails and it turns out the story is a bad idea, the relevant feature is removed from the product (see the chart on this page).
I have implemented this system with several teams, and the initial result is always frustrating: each bucket fills up, starting with the “validated” bucket and moving on to the “done” bucket, until it’s not possible to start any more work.
Teams that are used to measuring their productivity narrowly, by the number of stories they are delivering, feel stuck.
Pretty soon everyone gets the hang of it. This progress occurs in fits and starts at first. Engineering may finish a big batch of work, followed by extensive testing and validation. As engineers look for ways to increase their productivity, they start to realize that if they include the validation exercise from the beginning, the whole team can be more productive.
The same logic applies to a story that an engineer doesn’t understand. Under the old system, he or she would just build it and find out later what it was for. In the new system, that behavior is clearly counterproductive: without a clear hypothesis, how can a story ever be validated?
Most important, teams working in this system begin to measure their productivity according to validated learning, not in terms of the production of new features.
They took one cohort of customers and required that they register immediately, based on nothing more than Grockit’s marketing materials. To their surprise, this cohort’s behavior was exactly the same as that of the lazy registration group: they had the same rate of registration, activation, and subsequent retention. In other words, the extra effort of lazy registration was a complete waste even though it was considered an industry best practice.
Even more important than reducing waste was the insight that this test suggested: customers were basing their decision about Grockit on something other than their use of the product.
For a report to be considered actionable, it must demonstrate clear cause and effect. Otherwise, it is a vanity metric.
Vanity metrics wreak havoc because they prey on a weakness of the human mind. In my experience, when the numbers go up, people think the improvement was caused by their actions, by whatever they were working on at the time.
Unfortunately, when the numbers go down, it results in a very different reaction: now it’s somebody else’s fault.
Thus, most team members or departments live in a world where their department is constantly making things better, only to have their hard work sabotaged by other departments that just don’t get it.
Actionable metrics are the antidote to this problem. When cause and effect is clearly understood, people are better able to learn from their actions.
Departments too often spend their energy learning how to use data to get what they want rather than as genuine feedback to guide their future actions.
There is an antidote to this misuse of data. First, make the reports as simple as possible so that everyone understands them.
The easiest way to make reports comprehensible is to use tangi...
This is why cohort-based reports are the gold standard of learning metrics: they turn complex actions into people-based reports.
It is hard to visualize what it means if the number of website hits goes down from 250,000 in one month to 200,000 the next month, but most people understand immediately what it means to lose 50,000 customers.
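A people-based cohort report of this kind can be sketched as a count, per signup cohort, of how many individual customers remained active in each subsequent month. The field names and the definition of "active" below are assumptions for illustration; the point is that the output counts people rather than aggregate hits:

```python
from collections import defaultdict

def cohort_retention(users):
    """users: dicts with a 'signup_month' string and an 'active_months' set.
    Returns {cohort: {month: number of that cohort's users active then}},
    turning raw activity events into a people-based report."""
    report = defaultdict(lambda: defaultdict(int))
    for u in users:
        cohort = u["signup_month"]
        for month in u["active_months"]:
            report[cohort][month] += 1
    return {c: dict(months) for c, months in report.items()}
```

Reading down a cohort's row shows exactly how many people stuck around month over month, which is far easier to act on than a single sitewide hit count.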
Accessibility also refers to widespread access to the reports.
The reports were well laid out and easy to read, with each experiment and its results explained in plain English.
Each employee could log in to the system at any time, choose from a list of all current and past experiments, and see a simple one-page summary of the results.
Over time, those one-page summaries became the de facto standard for settling product arguments throughout the organization.
When informed that their pet project is a failure, most of us are tempted to blame the messenger, the data, the manager, the gods, or anything else we can think of.
That’s why the third A of good metrics, “auditable,” is so essential. We must ensure that the data is credible to employees.
More often, the lack of such supporting documentation is simply a matter of neglect.
The solution? First, remember that “Metrics are people, too.” We need to be able to test the data by hand, in the messy real world, by talking to customers.
Managers need the ability to spot check the data with real customers.
It also has a second benefit: systems that provide this level of auditability give managers and entrepreneurs the opportunity to gain insights into why custo...
Second, those building reports must make sure the mechanisms that generate the rep...
I have noticed that every time a team has one of its judgments or assumptions overturned as a result of a technical problem with the data, its confidence, morale, and discipline are undermined.
Only 5 percent of entrepreneurship is the big idea, the business model, the whiteboard strategizing, and the splitting up of the spoils.
The other 95 percent is the gritty work that is measured by innovation accounting: product prioritization decisions, deciding which customers to target or listen to, and having the courage to subject a grand vision to constant testing and feedback.
We all must face this fundamental test: deciding when to pivot and when to persevere.
Every entrepreneur eventually faces an overriding challenge in developing a successful product: deciding when to pivot and when to persevere.
There is no way to remove the human element—vision, intuition, judgment—from the practice of entrepreneurship, nor would that be desirable.
David faced the difficult challenge of deciding whether to pivot or persevere. This is one of the hardest decisions entrepreneurs face.
David was still stuck in an age-old entrepreneurial trap. His metrics and product were improving, but not fast enough.
Seasoned entrepreneurs often speak of the runway that their startup has left: the amount of time remaining in which a startup must either achieve lift-off or fail.
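Runway is conventionally computed as remaining cash divided by net monthly burn. A trivial sketch (the function name is an assumption, not terminology from the book):

```python
def runway_months(cash_on_hand: float, monthly_burn: float) -> float:
    """Runway: remaining cash in the bank divided by net monthly burn.
    This is the number of months left to achieve lift-off or fail."""
    if monthly_burn <= 0:
        raise ValueError("monthly burn must be positive for a finite runway")
    return cash_on_hand / monthly_burn
```

Each pivot consumes some of this time, which is why measuring runway in remaining pivots rather than remaining months is a common reframing.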
First, vanity metrics can allow entrepreneurs to form false conclusions and live in their own private reality. This is particularly damaging to the decision to pivot because it robs teams of the belief that it is necessary to change.
Second, when an entrepreneur has an unclear hypothesis, it’s almost impossible to experience complete failure, and without failure there is usually no impetus to embark on the radical change a pivot requires.
Third, many entrepreneurs are afraid. Acknowledging failure can lead to dangerously low morale. Most entrepreneurs’ biggest fear is not that their vision will prove to be wrong. More terrifying is the thought that the vision might be deemed wrong without having been given a real chance to prove itself.
The team worked valiantly to find ways to improve the product, but none showed any particular promise. It was time for a pivot or persevere meeting.
This is also common with pivots; it is not necessary to throw out everything that came before and start over.