Kindle Notes & Highlights
by Clayton
Read between March 12 - March 19, 2023
Theories built on categories of circumstances become easy for companies to employ, because managers live and work in ...
This highlight has been truncated due to consecutive passage length restrictions.
In our studies, we have observed that industry-based or product/ service-based categorization schemes almost never constitute a useful foundation for reliable theory. The Innovator’s Dilemma, for example, described how the same mechanism that enabled entrant companies to up-end the leading established firms in disk drives and computers also toppled the leading companies in mechanical excavators, steel, retailing, motorcycles, accounting software, and motor controls.
We can trust a theory only when its statement of what actions will lead to success describes how this will vary as a company’s circumstances change.
This is a major reason why the outcomes of innovation efforts have seemed quite random: Shoddy categorization has led to one-size-fits-all recommendations that in turn have led to the wrong results in many circumstances.
We often admire the intuition that successful entrepreneurs seem to have for building growth businesses. When they exercise their intuition about what actions will lead to the desired results, they really are employing theories that give them a sense of the right thing to do in various circumstances. These theories were not there at birth: They were learned through a set of experiences and mentors earlier in life.
We hope to help managers who are trying to create new-growth businesses use the best research we have been able to assemble to learn how to match their actions to the circumstances in order to get the results they need.
As our readers use these ways of thinking over and over, we hope that the thought processes inherent in these theories can become part of their intuition as well.
The Innovator’s Dilemma summarized a theory that explains how, under certain circumstances, the mechanism of profit-maximizing resource allocation causes well-run companies to get killed.
The Innovator’s Solution, in contrast, summarizes a set of theories that can guide managers who need to grow new businesses with predictable success—to become the disruptors rather than the disruptees—and ultimately kill the well-run, established competitors. To succeed predictably, disruptors must be good theorists. As they shape their growth business to be disruptive, they must align every critical process and decision to fit the disruptive circumstance.
On average over their long histories, of course, faster-growing firms yield higher returns. However, the faster-growing firm will have produced higher returns than the slower-growing firm only for investors in the past. If markets discount efficiently, then the investors who reap above-average returns are those who were fortunate enough to have bought shares in the past when the future growth rate had not been fully discounted into the price of the stock. Those who bought when the future growth potential already had been discounted into the share price would not receive an above-market ...
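The discounting logic in this passage can be made concrete with a small sketch. This is not from the book; it uses the standard constant-growth (Gordon) dividend model with purely illustrative numbers to show that a stock fairly priced for its expected growth returns only the discount rate, no matter how fast the firm grows.

```python
# Illustrative sketch (not from the book): under the constant-growth
# (Gordon) model, a stock whose price already reflects its expected
# growth rate g returns exactly the discount rate r, whatever g is.

def fair_price(next_dividend, r, g):
    """Price that fully discounts expected growth: P = D1 / (r - g)."""
    return next_dividend / (r - g)

def one_year_return(next_dividend, r, g):
    """Buy at today's fair price, collect D1, sell at next year's fair price."""
    p0 = fair_price(next_dividend, r, g)
    p1 = fair_price(next_dividend * (1 + g), r, g)  # dividends grow at g
    return (next_dividend + p1) / p0 - 1

# A fast grower (8%/year) and a slow grower (2%/year), both priced efficiently:
fast = one_year_return(1.00, r=0.10, g=0.08)
slow = one_year_return(1.00, r=0.10, g=0.02)
print(round(fast, 6), round(slow, 6))  # both equal r = 0.10
```

Both investors earn 10 percent: the fast grower's future growth was already in its purchase price, which is exactly the book's point about who captures above-market returns.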
First, when a company is growing, there are increased opportunities for employees to be promoted into new management positions that are opening up above them. Hence, the potential for growth in managerial responsibility and capability is much greater in a growing firm than in a stagnant one.
When growth slows, managers sense that their possibilities for advancement will be constrained not by their personal talent and performance, but rather by how many years must pass before the more senior managers above them retire. When this happens, many of the most capable employees tend to leave the company, affecting the company’s ability to regenerate growth.
As a result, growing firms typically have a technology edge over slow-growth competitors. But that advantage is not rooted so much in the visionary wisdom of the managers as it is in the difference in the circumstances of growth versus no growth.
Quinn suggests that the first step that corporate executives need to take in building new businesses is to “let a thousand flowers bloom,” then tend the most promising and let the rest wither. In this view, the key to successful innovation lies in choosing the right flowers to tend—and that decision must rely on complex intuitive feelings, calibrated by experience.
More recent work by Tom Peters (Thriving on Chaos: Handbook for a Management Revolution [New York: Knopf/Random House, 1987]) urges innovating managers to “fail fast”—to pursue new business ideas on a small scale and in a way that generates quick feedback about whether an idea is viable. Advocates of this approach urge corporate executives not to punish failures because it is only through repeated attempts that successful new businesses will emerge.
Evolutionary theory posits that whether a mutant organism thrives or dies depends on its fit with the “selection environment”—the conditions within which it must compete against other organisms for the resources required to thrive. Hence, believing that good and bad innovations pop up randomly, these researchers advise corporate executives to focus on creating a “selection environment” in which viable new business ideas are culled from the bad as quickly as possible.
They can be very helpful, given the present state of understanding, because if the processes that create innovations were indeed random, then a context within which managers could accelerate the creation and testing of ideas would indeed help. But if the process is not intrinsically random, as we assert, then addressing only the context is treating the symptom, not the source of the problem.
All of these approaches create an “infinite regress.” By bringing the market “inside,” we have simply backed up the problem: How can managers decide which ideas will be developed to the point at which they can be subjected to the selection pressures of their internal market? Bringing the market still deeper inside simply creates the same conundrum. Ultimately, innovators must judge what they will work on and how they will do it—and what they should consider when making those decisions is what is in the black box. The acceptance of randomness in innovation, then, is not a ...
Robert Burgelman and Leonard Sayles, Inside Corporate Innovation (New York: Free Press, 1986);
Robert Burgelman, Strategy Is Destiny (New York: Free Press, 2002).
Clayton M. Christensen and Scott D. Anthony, “What’s the BIG Idea?” Case 9-602-105 (Boston: Har...
We have consciously chosen phrases such as “increase the probability of success” because business building is unlikely ever to become perfectly predictable, for at least three reasons. The first lies in the nature of competitive marketplaces. Companies whose ...
Every company therefore has an interest in behaving in deeply...
A second reason is the computational challenge associated with any system with a large nu...
A third reason is suggested by complexity theory, which holds that even fully determined systems that do not outstrip our computational abilities can still generate deeply random outcomes.
Assessing the extent to which the outcomes of innovation can be predicted, and the significance of any residual uncertainty or unpredictability, remains a profound theoretical challenge with important practical implications.
Many happenings in the natural world seemed very random and unfathomably complex to the ancients and to early scientists. Research that adhered carefully to the scientific method brought the predictability upon which so much progress has been built. Even when our most advanced theories have convinced scientists that the world is not deterministic, at least the phenomena are predictably random.
Peter Senge calls theories mental models (see Peter Senge, The Fifth Discipline [New York: Bantam Doubleday Dell, 1990]). We considered using the term model in this book, but opted instead to use the term theory. We have done this to be provocative, to inspire practitioners to value something that is indeed of value.
What we are saying is that the success of a theory should be measured by the accuracy with which it can predict outcomes across the entire range of situations in which managers find themselves. Consequently, we are not seeking “truth” in any absolute, Platonic sense; our standard is practicality and usefulness.
If we enable managers to achieve the results they seek, then we will have been successful.
Measuring the success of theories based on their usefulness is a respected tradition in the philosophy of science, articulated most full...
This is a serious deficiency of much management research. Econometricians call this practice “sampling on the dependent variable.” Many writers, and many who think of themselves as serious academics, are so eager to prove the worth of their theories that they studiously avoid the discovery of anomalies.
In more formal academic research, it is done by labeling data points that don’t fit the model “outliers” and finding a justification for excluding them from the statistical analysis. Both practices seriously limit the usefulness of what is written. It is precisely the discovery of phenomena that the existing theory cannot explain that enables researchers to build better theory upon a better classification scheme. We need to do anomaly-seeking research, not anomaly-avoiding research.
“Might you ever want to outsource something that is your core competence, and do internally something that is not your core competence?” Asking questions like this almost always improves the validity of the original theory. This opportunity to improve our understanding often exists even for very well done, highly regarded pieces of research.
An important conclusion in Jim Collins’s extraordinary book Good to Great (New York: HarperBusiness, 2001) is that the executives of these successful companies weren’t charismatic, flashy men and women. They were humble people who respected the opinions of others. A good opportunity to extend the validity of Collins’s research is to ask a question such as, “Are there circumstances in which you actually don’t want a humble, noncharismatic CEO?” We suspect that there are—and defining the different circumstances in which charisma and humility are virtues and vices could do a great service to ...
Getting the categories right is the foundation for bringing predictability to an endeavor.
Unfortunately, many of those engaged in management research seem anxious not to spotlight instances that their theory did not accurately predict. They engage in anomaly-avoiding, rather than anomaly-seeking, research and as a result contribute to the perpetuation of unpredictability. Hence, we lay much responsibility for the perceived unpredictability of business building at the feet of the very people whose business it is to study and write about these problems.
Although they name their key concept “grounded theory,” the book really is about categorization, because that process is so central to the building of valid theory. Their term “substantive theory” is similar to our term “attribute-based categories.” They describe how a knowledge-building community of researchers ultimately succeeds in transforming their understanding into “formal theory,” which we term “circumstance-based categories.”
Managers need to know if a theory applies in their situation, if they are to trust it. A very useful book on this topic is Robert K. Yin’s Case Study Research: Design and Methods (Beverly Hills, CA: Sage Publications, 1984). Building on Yin’s concept, we would say that the breadth of applicability of a theory, which Yin calls its external validity, is established by the soundness of its categorization scheme.
Applying any theory to industry after industry cannot prove its applicability because it will always leave managers wondering if there is something different about their current circumstances that renders the theory untrustworthy. A theory can confidently be employed in prediction only when the categories that define its contingencies are clear.
Managers have long sought ways to predict the outcome of competitive fights. Some have looked at the attributes of the companies involved, predicting that larger companies with more resources to throw at a problem will beat the smaller competitors.
It’s interesting how often the CEOs of large, resource-rich companies base their strategies upon this theory, despite repeated evidence that the level of resources committed often bears little relationship to the outcome.
When innovations are incremental, the established, leading firms in an industry are likely to reinforce their dominance; however, compared with entrants, they will be conservative and ineffective in exploiting breakthrough innovation.
In sustaining circumstances—when the race entails making better products that can be sold for more money to attractive customers—we found that incumbents almost always prevail.
In disruptive circumstances—when the challenge is to commercialize a simpler, more convenient product that sells for less money and appeals to a new or unattractive customer set—the entrants are likely to beat the incumbents. This is the phenomenon that so frequently defeats successful companies. It implies, of course, that the best way for upstarts to attack established competitors is to disrupt them.
Few technologies or business ideas are intrinsically sustaining or disruptive in character. Rather, their disruptive impact must be molded into strategy as managers shape the idea into a plan and then implement it. Successful new-growth builders know—either intuitively or explicitly—t...
Second, in every market there is a distinctly different trajectory of improvement that innovating companies provide as they introduce new and improved products. This pace of technological progress almost always outstrips the ability of customers in any given tier of the market to use it, as the more steeply sloping solid lines in figure 2-1 suggest.
Thus, a company whose products are squarely positioned on mainstream customers’ current needs today will probably overshoot what those same customers are able to utilize in the future. This happens because companies keep striving to make better products that they can sell for higher profit margins to not-yet-satisfied customers in more demanding tiers of the market.
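The two trajectories of figure 2-1 can be sketched numerically. The growth rates below are hypothetical, chosen only to illustrate the mechanism: when supplied performance improves faster than a customer tier can absorb it, a product that starts out "not good enough" eventually overshoots that tier's needs.

```python
# Hypothetical numbers, for illustration only: product performance
# improves 25%/year, while what a mainstream customer tier can absorb
# grows 8%/year. A product that starts below the tier's needs
# eventually overshoots them, as figure 2-1 depicts.

def first_overshoot_year(perf0, perf_rate, need0, need_rate, horizon=30):
    """Return the first year in which supplied performance exceeds usable need."""
    perf, need = perf0, need0
    for year in range(horizon + 1):
        if perf > need:
            return year
        perf *= 1 + perf_rate   # steeper trajectory of technological progress
        need *= 1 + need_rate   # shallower trajectory of customer absorption
    return None  # no overshoot within the horizon

# Starts "not good enough" (60 vs. 100), like the 1983 word processor:
year = first_overshoot_year(perf0=60, perf_rate=0.25, need0=100, need_rate=0.08)
print(year)  # -> 4: overshoots the mainstream tier in year 4, on these assumptions
```

The absolute numbers are meaningless; what matters is the difference in slopes, which guarantees the crossover the text describes.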
To visualize this, think back to 1983 when people first started using personal computers for word processing. Typists often had to stop their fingers to let the Intel 286 chip inside catch up. As depicted at the left side of figure 2-1, the technology was not good enough. But today’s processors offer much more speed than mainstream customers can use—although there are still...
The third critical element of the model is the distinction between sustaining and disruptive innovation. A sustaining innovation targets demanding, high-end customers with better performance than what was previously available. Some sustaining innovations are the incremental year-by-year improvements that all good companies grind out. Other sustaining innovations are breakthrough, leapfrog-beyond-the-competition products. It doesn’t matter how technologically difficult the innovation is, however: The established competitors almost always win the battles of sustaining technology. Because this ...

