Kindle Notes & Highlights
A system is a set of things—people, cells, molecules, or whatever—interconnected in such a way that they produce their own pattern of behavior over time.
According to the competitive exclusion principle, if a reinforcing feedback loop rewards the winner of a competition with the means to win further competitions, the result will be the elimination of all but a few competitors.
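A toy "success to the successful" run can make this concrete. The sketch below is my own construction, not the book's, and every number in it is illustrative: two competitors split a fixed market, and the current leader converts a fraction of the laggard's share each period, which enlarges the lead and speeds the next conversion until one competitor is all but gone.

```python
# Toy competitive-exclusion run (illustrative parameters, not from the book).
# Whoever leads converts part of the laggard's share each period; the growing
# lead feeds back into faster conversion.

share_a, share_b = 0.55, 0.45   # starting market shares
advantage = 0.3                 # how strongly winning feeds further winning

for period in range(30):
    transfer = advantage * (share_a - share_b) * min(share_a, share_b)
    share_a += transfer
    share_b -= transfer

print(f"after 30 periods: A = {share_a:.3f}, B = {share_b:.3f}")
```

Run as written, A ends with nearly the whole market and B with almost none, even though the starting gap was small.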
A diverse system with multiple pathways and redundancies is more stable and less vulnerable to external shock than a uniform system with little diversity.
The behavior of a system cannot be known just by knowing the elements of which the system is made.
A system is an interconnected set of elements that is coherently organized in a way that achieves something. If you look at that definition closely for a minute, you can see that a system must consist of three kinds of things: elements, interconnections, and a function or purpose.
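As a purely illustrative sketch of my own (the football-team values echo the book's own example), the three kinds of things can be written down as a tiny data structure:

```python
# Illustrative only: the three ingredients every system must have.

from dataclasses import dataclass

@dataclass
class System:
    elements: set          # the parts: people, cells, molecules, ...
    interconnections: set  # the relationships that hold the parts together
    purpose: str           # what the system's behavior achieves

football_team = System(
    elements={"players", "coach", "field", "ball"},
    interconnections={"rules of the game", "coach's strategy", "players' communications"},
    purpose="to win games",
)
print(football_team.purpose)
```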
Purposes are deduced from behavior, not from rhetoric or stated goals.
An important function of almost every system is to ensure its own perpetuation.
Keeping sub-purposes and overall system purposes in harmony is an essential function of successful systems.
A change in purpose changes a system profoundly, even if every element and interconnection remains the same.
A stock can be increased by decreasing its outflow rate as well as by increasing its inflow rate. There’s more than one way to fill a bathtub!
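A one-stock sketch (mine, not from the book) makes the bathtub point concrete: the stock changes by (inflow − outflow) each step, so plugging the drain raises the water level just as surely as opening the faucet wider.

```python
# Minimal single-stock simulation: d(stock)/dt = inflow - outflow.

def simulate(stock, inflow, outflow, steps=10, dt=1.0):
    """Euler-step a single stock and return its trajectory."""
    history = [stock]
    for _ in range(steps):
        stock += (inflow - outflow) * dt
        history.append(stock)
    return history

# Two ways to raise the level of the same bathtub:
print(simulate(stock=10, inflow=4, outflow=2))  # turn up the inflow
print(simulate(stock=10, inflow=2, outflow=0))  # shut off the outflow
```

Both runs climb by the same amount per step and end at the same level.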
This behavior is an example of shifting dominance of feedback loops. Dominance is an important concept in systems thinking. When one loop dominates another, it has a stronger impact on behavior. Because systems often have several competing feedback loops operating simultaneously, those loops that dominate the system will determine the behavior.
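Shifting dominance shows up clearly in S-shaped growth. The sketch below is my own construction with made-up parameters: a reinforcing birth loop competes with a balancing crowding loop, growth accelerates while the reinforcing loop dominates (population below half the carrying capacity), and decelerates once the balancing loop takes over.

```python
# Logistic growth as two competing feedback loops (illustrative parameters).

population = 10.0
capacity = 1000.0       # carrying capacity enforced by the balancing loop
birth_fraction = 0.5    # strength of the reinforcing loop

for step in range(25):
    births = birth_fraction * population                              # reinforcing loop
    crowding = birth_fraction * population * (population / capacity)  # balancing loop
    population += births - crowding
    # Below half the carrying capacity, growth is still accelerating:
    dominant = "reinforcing" if population < capacity / 2 else "balancing"
    print(f"step {step:2d}  population {population:7.1f}  dominant loop: {dominant}")
```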
Whenever you are confronted with a scenario (and you are, every time you hear about an economic prediction, a corporate budget, a weather forecast, future climate change, a stockbroker saying what is going to happen to a particular holding), there are questions you need to ask that will help you decide how good a representation of reality the underlying model is.
• Are the driving factors likely to unfold this way? (What are birth rate and death rate likely to do?)
• If they did, would the system react this way? (Do birth and death rates really cause the population stock to behave as we think it will?)
The first question can’t be answered factually. It’s a guess about the future, and the future is inherently uncertain.
Dynamic systems studies usually are not designed to predict what will happen. Rather, they’re designed to explore what would happen, if a number of driving factors unfold in a range of different ways.
The second question—whether the system really will react this way—is more scientific. It’s a question about how good the model is. Does it capture the inherent dynamics of the system? Regardless of whether you think the driving factors will do that, would the system behave like that if they did?
Model utility depends not on whether its driving scenarios are realistic (since no one can know that for sure), but on whether it responds with a realistic pattern of behavior.
Everything we think we know about the world is a model. Every word and every language is a model. All maps and statistics, books and databases, equations and computer programs are models. So are the ways I picture the world in my head—my mental models. None of these is or ever will be the real world.
Everything we think we know about the world is a model. Our models do have a strong congruence with the world. Our models fall far short of representing the real world fully.
Like the tip of an iceberg rising above the water, events are the most visible aspect of a larger complex—but not always the most important.
When a systems thinker encounters a problem, the first thing he or she does is look for data, time graphs, the history of the system.
The company may hire salespeople, for example, who are so good that they generate orders faster than the factory can produce. Delivery delays increase and customers are lost, because production capacity is the most limiting factor. So the managers expand the capital stock of production plants. New people are hired in a hurry and trained too little. Quality suffers and customers are lost because labor skill is the most limiting factor. So management invests in worker training. Quality improves, new orders pour in, and the order-fulfillment and record-keeping system clogs. And so forth.
The ability to self-organize is the strongest form of system resilience. A system that can evolve can survive almost any change, by changing itself.
The higher the leverage point, the more the system will resist changing it—that’s why societies often rub out truly enlightened beings.

