Kindle Notes & Highlights
by Eric Ries
Read between December 14, 2020 - March 11, 2021
Rather than lamenting them, use these advantages to experiment under the radar and then do a public marketing launch once the product has proved itself with real customers.
If an MVP fails, teams are liable to give up hope and abandon the project altogether. But this is a solvable problem.

FROM THE MVP TO INNOVATION ACCOUNTING

The solution to this dilemma is a commitment to iteration.
Successful entrepreneurs do not give up at the first sign of trouble, nor do they persevere the plane right into the ground. Instead, they possess a unique combination of perseverance and flexibility.
At the beginning, a startup is little more than a model on a piece of paper.
A startup’s job is to (1) rigorously measure where it is right now, confronting the hard truths that assessment reveals, and then (2) devise experiments to learn how to move the real numbers closer to the ideal reflected in the business plan.
Most products—even the ones that fail—do not have zero traction. Most products have some customers, some growth, and some positive results.
Unfortunately, standard accounting is not helpful in evaluating entrepreneurs. Startups are too unpredictable for forecasts and milestones to be accurate.
I asked the team a simple question that I make a habit of asking startups whenever we meet: are you making your product better? They always say yes.
Then I ask: how do you know? I invariably get this answer: well, we are in engineering and we made a number of changes last month, and our customers seem to like them, and our overall numbers are higher this month. We must be on the right track.
Most milestones are built the same way: hit a certain product milestone, maybe talk to a few customers, and see if the numbers go up. Unfortunately, this is not a good indicator of whether a startup is making progress. How do we know that the changes we’ve made are related to the results we’re seeing? More important, how do we know that we are drawing the right lessons from those changes?
Innovation accounting enables startups to prove objectively that they are learning how to grow a sustainable business.
Innovation accounting begins by turning the leap-of-faith assumptions discussed in Chapter 5 into a quantitative financial model.
Innovation accounting works in three steps: first, use a minimum viable product to establish real data on where the company is right now.
Without a clear-eyed picture of your current status—no matter how far from the goal you may be—you cannot begin to track your progress.
Second, startups must attempt to tune the engine from the baseline toward the ideal. This may take many attempts.
After the startup has made all the micro changes and product optimizations it can to move its baseline toward the ideal, the company reaches a decision point. That is the third step: pivot or persevere.
When a company pivots, it starts the process all over again, reestablishing a new baseline and then tuning the engine from there.
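To make this rhythm concrete, here is a minimal sketch in Python. Every name and number in it (the target rate, the progress threshold, the FakeMVP stub) is a hypothetical illustration of the baseline / tune / pivot-or-persevere loop, not anything specified by the book.

```python
import random

# A minimal sketch of the innovation accounting loop described above.
# Every name and threshold here is a hypothetical illustration.

TARGET = 0.10         # the "ideal" conversion rate implied by the business plan
MIN_PROGRESS = 0.005  # the minimum improvement a tuning cycle should show

class FakeMVP:
    """Stand-in for a real product; returns a noisy conversion rate."""
    def __init__(self, rate):
        self.rate = rate
    def run_experiment(self):
        # Each experiment nudges the rate a little, up or down.
        self.rate += random.uniform(-0.002, 0.008)
        return self.rate

def pivot_or_persevere(mvp, baseline, max_cycles=10):
    for _ in range(max_cycles):
        new_rate = mvp.run_experiment()           # step 2: tune the engine
        if new_rate >= TARGET:
            return "engine works: persevere"
        if new_rate - baseline < MIN_PROGRESS:
            return "tuning has stalled: pivot"    # step 3: decision point
        baseline = new_rate                       # real progress: keep going
    return "no sustainable engine found: pivot"

mvp = FakeMVP(rate=0.02)                          # step 1: baseline from the MVP
print(pivot_or_persevere(mvp, baseline=0.02))
```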
This is an old direct marketing technique in which customers are given the opportunity to preorder a product that has not yet been built. A smoke test measures only one thing: whether customers are interested in trying a product.
These MVPs provide the first example of a learning milestone. An MVP allows a startup to fill in real baseline data in its growth model—conversion rates, sign-up and trial rates, customer lifetime value, and so on.
When one is choosing among the many assumptions in a business plan, it makes sense to test the riskiest assumptions first. If you can’t find a way to mitigate these risks toward the ideal that is required for a sustainable business, there is no point in testing the others.
Once the baseline has been established, the startup can work toward the second learning milestone: tuning the engine.
Every product development, marketing, or other initiative that a startup undertakes should be targeted at improving one of the drivers of its growth model.
This is an important rule: a good design is one that changes customer behavior for the better.
Compare two startups. The first company sets out with a clear baseline metric, a hypothesis about what will improve that metric, and a set of experiments designed to test that hypothesis.
The second team sits around debating what would improve the product, implements several of those changes at once, and celebrates if there is any positive increase in any of the numbers. Which startup is more likely to be doing effective work and achieving lasting results?
Five dollars bought us a hundred clicks—every day. From a marketing point of view this was not very significant, but for learning it was priceless.
Every single day we were able to measure our product’s performance with a brand new set of customers. Also, each time we revised the product, we got a brand new report card on how we were doing the very next day.
Day in and day out we were performing random trials. Each day was a new experiment. Each day’s customers were independent of those of the day before.
To read the graph, you need to understand something called cohort analysis. This is one of the most important tools of startup analytics.
Instead of looking at cumulative totals or gross numbers such as total revenue and total number of customers, one looks at the performance of each group of customers that comes into contact with the product independently. Each group is called a cohort.
Customer flows govern the interaction of customers with a company’s products. They allow us to understand a business quantitatively and have much more predictive power than do traditional gross metrics.
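A minimal sketch of cohort analysis in Python may help. The field names and sample numbers below are invented for illustration; the point is only that each week's group of customers is evaluated on its own rather than folded into a cumulative total.

```python
from collections import defaultdict

# Hypothetical per-customer records; field names and values are invented.
signups = [
    {"cohort_week": "W10", "paid": False},
    {"cohort_week": "W10", "paid": True},
    {"cohort_week": "W11", "paid": False},
    {"cohort_week": "W11", "paid": False},
]

def cohort_conversion(events):
    """Report each weekly cohort's paid-conversion rate independently,
    instead of one cumulative gross number."""
    cohorts = defaultdict(lambda: {"total": 0, "paid": 0})
    for e in events:
        cohorts[e["cohort_week"]]["total"] += 1
        cohorts[e["cohort_week"]]["paid"] += e["paid"]
    return {w: c["paid"] / c["total"] for w, c in sorted(cohorts.items())}

print(cohort_conversion(signups))   # {'W10': 0.5, 'W11': 0.0}
```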
Once our efforts were aligned with what customers really wanted, our experiments were much more likely to change their behavior for the better.
This is the sign of a successful pivot: the new experiments you run are overall more productive than the experiments you were running before.
Each time we repeat this simple rhythm: establish the baseline, tune the engine, and make a decision to pivot or persevere.
However, these tools for product improvement do not work the same way for startups. If you are building the wrong thing, optimizing the product or its marketing will not yield significant results.
I’ve been called in many times to help a startup that feels that its engineering team “isn’t working hard enough.”
Cycle after cycle, the team is working hard, but the business is not seeing results. Managers trained in a traditional model draw the logical conclusion: our team is not working hard, not working effectively, or not working efficiently.
Learning milestones prevent this negative spiral by emphasizing a more likely possibility: the company is executing—with discipline!—a plan that does not make sense.
The innovation accounting framework makes it clear when the company is stuck and needs to change direction.
None of its current initiatives were having any impact. But this was obscured because the company’s gross metrics were all “up and to the right.”
As we’ll see in a moment, this is a common danger. Companies of any size that have a working engine of growth can come to rely on the wrong kind of metrics to guide their actions.
Energy invested in success theater is energy that could have been used to help build a sustainable business.
This graph shows the traditional gross metrics for IMVU so far: total registered users and total paying customers (the gross revenue graph looks almost the same). From this viewpoint, things look much more exciting. That’s why I call these vanity metrics: they give the rosiest possible picture.
Innovation accounting will not work if a startup is being misled by these kinds of vanity metrics: gross number of customers and so on.
The alternative is the kind of metrics we use to judge our business and our learning milestones, what I call actionable metrics.
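A toy calculation with made-up numbers shows the trap: cumulative totals rise every month even when each cohort behaves no better than the last, so the gross chart looks "up and to the right" while the actionable, per-cohort metric is flat.

```python
# Hypothetical monthly data: 1,000 new sign-ups per month, and a flat
# 5% of each cohort ever converts to paying. None of these numbers
# come from the book; they only illustrate the vanity-metric trap.
new_signups_per_month = 1000
cohort_rate = 0.05

total_users = 0
total_paying = 0
for month in range(1, 7):
    total_users += new_signups_per_month
    total_paying += int(new_signups_per_month * cohort_rate)
    # Vanity metrics: the cumulative totals always go up and to the right.
    # Actionable metric: per-cohort conversion is stuck at 5% every month.
    print(f"month {month}: total users={total_users:5d}, "
          f"total paying={total_paying:3d}, cohort conversion={cohort_rate:.0%}")
```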
How do you know that the prioritization decisions that Farb is making actually make sense?
However, because Grockit was using the wrong kinds of metrics, the startup was not genuinely improving.
Compared to a lot of startups, the Grockit team had a huge advantage: they were tremendously disciplined.
A disciplined team may apply the wrong methodology but can shift gears quickly once it discovers its error.
Grockit changed the metrics they used to evaluate success in two ways. Instead of looking at gross metrics, Grockit switched to cohort-based metrics, and instead of looking for cause-and-effect relationships after the fact, Grockit would launch each new feature as a true split-test experiment.
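As a concrete sketch of what launching a feature as a true split-test can look like in code: new users are randomly but deterministically assigned to a control or treatment group, and the same actionable metric is then compared across the two groups. The hashing scheme below is a common industry practice assumed here for illustration, not a description of Grockit's actual implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Bucket each user into control or treatment deterministically,
    so the same user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "treatment" if digest[0] % 2 else "control"

def conversion_rate(events, variant):
    """Compare the same actionable metric across the two groups."""
    group = [e for e in events if e["variant"] == variant]
    return sum(e["converted"] for e in group) / max(len(group), 1)

# Hypothetical usage with invented users, feature name, and outcomes.
events = [
    {"variant": assign_variant(uid, "new-feature"), "converted": c}
    for uid, c in [("u1", True), ("u2", False), ("u3", True), ("u4", False)]
]
print(conversion_rate(events, "control"), conversion_rate(events, "treatment"))
```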

