Kindle Notes & Highlights
New discoveries, however, suggest that just a few simple organizing principles can lead to wildly diverse self-organizing structures.
(It is because of fractal geometry that the average human lung has enough surface area to cover a tennis court.)
Systems often have the property of self-organization—the ability to structure themselves, to create new structure, to learn, diversify, and complexify. Even complex forms of self-organization may arise from relatively simple organizing rules—or may not.
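As a sketch of that point (not an example from the book), here is one simple local rule that generates intricate, non-repeating structure. The specific rule, "Rule 30" from the elementary cellular automata, and the grid size are my illustrative choices:

```python
# A minimal sketch (not from the book): a one-line local rule, applied
# uniformly, produces complex self-organizing structure. "Rule 30" and
# the grid dimensions are illustrative choices, nothing Meadows specifies.

WIDTH, STEPS = 64, 24
row = [0] * WIDTH
row[WIDTH // 2] = 1  # a single seed cell

def rule30(left, center, right):
    # Each cell's next state depends only on its three-cell neighborhood.
    return left ^ (center | right)

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [rule30(row[i - 1], row[i], row[(i + 1) % WIDTH])
           for i in range(WIDTH)]
```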
In the process of creating new structures and increasing complexity, one thing that a self-organizing system often generates is hierarchy.
If subsystems can largely take care of themselves, regulate themselves, maintain themselves, and yet serve the needs of the larger system, while the larger system coordinates and enhances the functioning of the subsystems, a stable, resilient, and efficient structure results.
Complex systems can evolve from simple systems only if there are stable intermediate forms. The resulting complex forms will naturally be hierarchic.
Hierarchies are brilliant systems inventions, not only because they give a system stability and resilience, but also because they reduce the amount of information that any part of the system has to keep track of.
In hierarchical systems relationships within each subsystem are denser and stronger than relationships between subsystems.
If these differential information links within and between each level of the hierarchy are designed right, feedback delays are minimized. No level is overwhelmed with information.
Hierarchical systems are partially decomposable. They can be taken apart and the subsystems with their especially dense information links can function, at least partially, as systems in their own right. When hierarchies break down, they usually split along their subsystem boundaries.
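A bit of back-of-envelope arithmetic, with numbers I have chosen for illustration, shows how hierarchy cuts the information any one part must track:

```python
# Back-of-envelope arithmetic (my numbers, not the book's): compare a flat
# network of 100 members, where everyone tracks everyone, with the same
# 100 split into 10 teams of 10, each with a coordinator.
N, TEAM = 100, 10
flat = N - 1                                # everyone tracks all 99 others
member = TEAM - 1                           # a team member tracks 9 teammates
coordinator = (TEAM - 1) + (N // TEAM - 1)  # 9 teammates + 9 other coordinators
print(f"flat: {flat}, team member: {member}, coordinator: {coordinator}")
# flat: 99, team member: 9, coordinator: 18
```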
What you need to think about may change over time, as self-organizing systems evolve new degrees of hierarchy and integration.
Hierarchies evolve from the lowest level up—from the pieces to the whole, from cell to organ to organism, from individual to team, from actual production to management of production.
The original purpose of a hierarchy is always to help its originating subsystems do their jobs better. This is something, unfortunately, that both the higher and the lower levels of a greatly articulated hierarchy can easily forget.
When a subsystem’s goals dominate at the expense of the total system’s goals, the resulting behavior is called suboptimization.
To be a highly functional system, hierarchy must balance the welfare, freedoms, and responsibilities of the subsystems and total system—there must be enough central control to achieve coordination toward the large-system goal, and enough autonomy to keep all subsystems flourishing, functioning, and self-organizing.
We often draw illogical conclusions from accurate assumptions, or logical conclusions from inaccurate assumptions. Most of us, for instance, are surprised by the amount of growth an exponential process can generate. Few of us can intuit how to damp oscillations in a complex system.
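A quick worked example, with numbers of my own choosing, shows why exponential growth defeats intuition: a modest-sounding 7% annual rate doubles a quantity roughly every ten years.

```python
# Illustrative arithmetic (numbers mine): 7% annual growth feels modest,
# yet it doubles the quantity about every decade.
value, rate = 100.0, 0.07
for year in range(31):
    if year % 10 == 0:
        print(f"year {year:2d}: {value:7.1f}")
    value *= 1 + rate
# year  0:   100.0
# year 10:   196.7
# year 20:   387.0
# year 30:   761.2
```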
Everything we think we know about the world is a model. Our models do have a strong congruence with the world. Our models fall far short of representing the real world fully.
Systems fool us by presenting themselves—or we fool ourselves by seeing the world—as a series of events. The daily news tells of elections, battles, political agreements, disasters, stock market booms or busts.
It’s endlessly engrossing to take in the world as a series of events, and constantly surprising, because that way of seeing the world has almost no predictive or explanatory value.
We are less likely to be surprised if we can see how events accumulate into dynamic patterns of behavior.
The behavior of a system is its performance over time—its growth, stagnation, decline, oscillation, randomness, or evolution.
When a systems thinker encounters a problem, the first thing he or she does is look for data, time graphs, the history of the system. That’s because long-term behavior provides clues to the underlying system structure.
System structure is the source of system behavior. System behavior reveals itself as a series of events over time.
Economists follow the behavior of flows, because that’s where the interesting variations and most rapid changes in systems show up.
But without seeing how stocks affect their related flows through feedback processes, one cannot understand the dynamics of economic systems or the reasons for their behavior.
There’s no reason to expect any flow to bear a stable relationship to any other flow. Flows go up and down, on and off, in all sorts of combinations, in response to stocks, not to other flows.
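A minimal stock-and-flow sketch, my own construction rather than a model from the book, makes the point concrete: the inflow here responds to the stock (the gap below a target), not to the outflow, so the two flows need not move together at all.

```python
# A toy stock-and-flow loop (my construction): restocking is driven by the
# gap between the stock and a target level, while the outflow shifts on its
# own. The flows bear no stable relationship to each other.
stock, target = 20.0, 50.0
for t in range(10):
    outflow = 8.0 if t < 5 else 3.0            # demand changes arbitrarily
    inflow = max(0.0, 0.5 * (target - stock))  # inflow responds to the stock
    stock += inflow - outflow
    print(f"t={t}: stock={stock:5.1f}  in={inflow:4.1f}  out={outflow:4.1f}")
```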
A nonlinear relationship is one in which the cause does not produce a proportional effect. The relationship between cause and effect can only be drawn with curves or wiggles, not with a straight line.
As the flow of traffic on a highway increases, car speed is affected only slightly over a large range of car density. Eventually, however, small further increases in density produce a rapid drop-off in speed.
Nonlinearities are important not only because they confound our expectations about the relationship between action and response. They are even more important because they change the relative strengths of feedback loops. They can flip a system from one mode of behavior to another.
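In the spirit of the traffic example, here is an invented nonlinear response (the curve and its numbers are mine): speed is nearly flat over a wide range of density, then collapses past a threshold.

```python
# An invented nonlinear cause-effect curve: near-flat response over a wide
# range of traffic density, then a rapid drop-off past the knee.
def speed(density):  # density in cars/km, speed in km/h
    if density <= 60:
        return 100.0 - 0.1 * density              # gentle, near-linear region
    return max(5.0, 94.0 - 2.5 * (density - 60))  # steep collapse afterward

for d in (10, 30, 50, 60, 70, 80, 90):
    print(f"{d:3d} cars/km -> {speed(d):5.1f} km/h")
# doubling density from 30 to 60 costs 3 km/h; from 60 to 90 it costs 75
```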
Side-effects no more deserve the adjective “side” than does the “principal” effect. It is hard to think in terms of systems, and we eagerly warp our language to protect ourselves from the necessity of doing so.
Clouds stand for the beginnings and ends of flows. They are stocks—sources and sinks—that are being ignored at the moment for the purposes of simplifying the present discussion.
Landfills fill up with a suddenness that has been surprising for people whose mental models picture garbage as going “away,” into some sort of a cloud. Sources of raw materials—mines, wells, and oil fields—can be exhausted with surprising suddenness too.
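A toy run, with invented numbers, shows where that suddenness comes from when extraction grows exponentially: the source looks ample for decades, then is gone fast.

```python
# A finite source drained by exponentially growing extraction (numbers
# invented): at the initial rate the source would last 100 years.
stock, extraction, year = 1000.0, 10.0, 0
while stock > 0:
    stock -= min(extraction, stock)
    extraction *= 1.07  # extraction grows 7% a year
    year += 1
    if year % 10 == 0 or stock == 0:
        print(f"year {year:2d}: {stock:7.1f} left")
# the source is exhausted in about 31 years, not 100
```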
There is no single, legitimate boundary to draw around a system. We have to invent boundaries for clarity and sanity; and boundaries can produce problems when we forget that we’ve artificially created them.
The right boundary for thinking about a problem rarely coincides with the boundary of an academic discipline, or with a political boundary.
We like to think about one or at most a few things at a time. And we don’t like, especially when our own plans and desires are involved, to think about limits.
At any given time, the input that is most important to a system is the one that is most limiting.
Economics evolved in a time when labor and capital were the most common limiting factors to production. Therefore, most economic production functions keep track only of these two factors (and sometimes technology).
Insight comes not only from recognizing which factor is limiting, but from seeing that growth itself depletes or enhances limits and therefore changes what is limiting.
Whenever one factor ceases to be limiting, growth occurs, and the growth itself changes the relative scarcity of factors until another becomes limiting.
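This is often called Liebig's law of the minimum. A toy encoding of my own shows output capped by the scarcest input, with the identity of that input shifting as the system grows:

```python
# A sketch of the limiting-factor idea (my toy encoding): output is set by
# the scarcest input, and growth itself shifts which input that is.
def limiting(factors):
    return min(factors.items(), key=lambda kv: kv[1])

factors = {"labor": 120.0, "capital": 80.0, "land": 200.0}
for step in range(5):
    name, level = limiting(factors)
    print(f"step {step}: output capped at {level:5.1f} by {name}")
    factors[name] *= 1.5    # invest in whatever is scarce...
    factors["land"] -= 20   # ...while growth steadily depletes land
```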
For any physical entity in a finite environment, perpetual growth is impossible. Ultimately, the choice is not to grow forever but to decide what limits to live within.
There always will be limits to growth. They can be self-imposed. If they aren’t, they will be system-imposed.
I realize with fright that my impatience for the re-establishment of democracy had something almost communist in it; or, more generally, something rationalist. I had wanted to make history move ahead in the same way that a child pulls on a plant to make it grow more quickly.
Jay Forrester used to tell us, when we were modeling a construction or processing delay, to ask everyone in the system how long they thought the delay was, make our best guess, and then multiply by three.
Delays are ubiquitous in systems. Every stock is a delay. Most flows have delays—shipping delays, perception delays, processing delays, maturation delays.
The world peeps, squawks, bangs, and thunders at many frequencies all at once. What is a significant delay depends—usually—on which set of frequencies you’re trying to understand.
When there are long delays in feedback loops, some sort of foresight is essential. To act only when a problem becomes obvious is to miss an important opportunity to solve the problem.
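A toy balancing loop of my own construction shows why: each correction is decided on current information but takes several steps to arrive, so the stock overshoots the target and oscillates instead of settling smoothly.

```python
# A balancing feedback loop with a delay (my construction): corrections
# aimed at a target arrive DELAY steps late, producing oscillation.
from collections import deque

target, stock, DELAY = 100.0, 60.0, 4
pipeline = deque([0.0] * DELAY)  # corrections already in transit

for t in range(20):
    pipeline.append(0.5 * (target - stock))  # decided on today's stock...
    stock += pipeline.popleft()              # ...felt DELAY steps later
    print(f"t={t:2d}: stock={stock:6.1f}")
```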
By pursuing his own interest he frequently promotes that of society more effectually than when he really intends to promote it. —Adam Smith, 18th-century political economist
Unfortunately, the world presents us with multiple examples of people acting rationally in their short-term best interests and producing aggregate results that no one likes.
Bounded rationality means that people make quite reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system.
We are not omniscient, rational optimizers, says Herbert Simon. Rather, we are blundering “satisficers,” attempting to meet (satisfy) our needs well enough (sufficiently) before moving on to the next decision.
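Satisficing reduces to a simple decision rule. This toy encoding is mine, not Simon's: accept the first option that clears an aspiration level instead of searching everything for the optimum.

```python
# Satisficing as a toy decision rule (my encoding): take the first option
# that is good enough, rather than exhaustively seeking the best.
def satisfice(options, good_enough):
    for name, value in options:          # options in the order encountered
        if value >= good_enough:
            return name                  # good enough: stop searching
    return max(options, key=lambda o: o[1])[0]  # none qualified: best seen

apartments = [("A", 6), ("B", 8), ("C", 9)]
print(satisfice(apartments, good_enough=7))  # -> "B", though "C" scores higher
```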