But the intervention can become a system trap. A corrective feedback process within the system is doing a poor (or even so-so) job of maintaining the state of the system. A well-meaning and efficient intervenor watches the struggle and steps in to take some of the load.
The trap is formed if the intervention, whether by active destruction or simple neglect, undermines the original capacity of the system to maintain itself. If that capability atrophies, then more of the intervention is needed to achieve the desired effect.
Addiction is finding a quick and dirty solution to the symptom of the problem, which prevents or distracts one from the harder and longer-term task of solving the real problem. Addictive policies are insidious, because they are so easy to sell, so simple to fall for.
Withdrawal means finally confronting the real (and usually much deteriorated) state of the system and taking the actions that the addiction allowed one to put off.
The problem can be avoided up front by intervening in such a way as to strengthen the ability of the system to shoulder its own burdens.
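A minimal sketch of this trap in code, with every rate and starting value invented for illustration (none are from the book): an intervenor shoulders part of the corrective load, the system's own capability atrophies with disuse, and the problem ends up no better than where it started once the help is propping everything up.

```python
# Toy model of "shifting the burden to the intervenor."
# All rates and starting values are illustrative assumptions.

def simulate(intervene: bool, steps: int = 60) -> None:
    problem = 10.0      # the symptom the system must keep correcting
    capability = 1.0    # the system's own corrective capacity
    for _ in range(steps):
        own_effort = 0.1 * capability * problem
        help_given = 0.08 * problem if intervene else 0.0
        problem += 1.0 - own_effort - help_given  # 1.0/step of new load
        # Capability grows with use and atrophies when help substitutes.
        atrophy = 0.05 if intervene else 0.0
        capability = max(0.1, capability + 0.02 * own_effort - atrophy)
    print(f"intervene={intervene}: problem={problem:5.1f}, "
          f"capability={capability:.2f}")

simulate(intervene=False)  # capability builds, problem shrinks
simulate(intervene=True)   # quick relief, then dependence
```

In this sketch, withdrawal would mean deleting help_given after capability has already atrophied: the problem then grows against a weakened system, which is exactly the deteriorated state the highlight above describes.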
Wherever there are rules, there is likely to be rule beating. Rule beating means evasive action to get around the intent of a system’s rules—abiding by the letter, but not the spirit, of the law. Rule beating becomes a problem only when it leads a system into large distortions, unnatural behaviors that would make no sense at all in the absence of the rules.
That is a warning about needing to design the law with the whole system, including its self-organizing evasive possibilities, in mind.
There are two generic responses to rule beating. One is to try to stamp out the self-organizing response by strengthening the rules or their enforcement—usually giving rise to still greater system distortion. That’s the way further into the trap.
The way out of the trap, the opportunity, is to understand rule beating as useful feedback, and to revise, improve, rescind, or better explain the rules.
If the goal is defined badly, if it doesn’t measure what it’s supposed to measure, if it doesn’t reflect the real welfare of the system, then the system can’t possibly produce a desirable result.
These examples confuse effort with result, one of the most common mistakes in designing systems around the wrong goal.
GNP is a measure of throughput—flows of stuff made and purchased in a year—rather than capital stocks, the houses and cars and computers and stereos that are the source of real wealth and real pleasure.
But governments around the world respond to a signal of faltering GNP by taking numerous actions to keep it growing. Many of those actions are simply wasteful, stimulating inefficient production of things no one particularly wants.
You have the problem of wrong goals when you find something stupid happening “because it’s the rule.” You have the problem of rule beating when you find something stupid happening because it’s the way around the rule. Both of these system perversions can be going on at the same time with regard to the same rule.
The average manager can define the current problem very cogently, identify the system structure that leads to the problem, and guess with great accuracy where to look for leverage points—places in the system where a small change could lead to a large shift in behavior.
Although people deeply involved in a system often know intuitively where to find leverage points, more often than not they push the change in the wrong direction.
I have come up with no quick or easy formulas for finding leverage points in complex and dynamic systems.
And I know from bitter experience that, because they are so counterintuitive, when I do discover a system’s leverage points, hardly anybody will believe me.
Putting different hands on the faucets may change the rate at which the faucets turn, but if they’re the same old faucets, plumbed into the same old system, turned according to the same old information and goals and rules, the system behavior isn’t going to change much.
Numbers, the sizes of flows, are dead last on my list of powerful interventions. Diddling with the details, arranging the deck chairs on the Titanic. Probably 90—no 95, no 99 percent—of our attention goes to parameters, but there’s not a lot of leverage in them.
If the system is chronically stagnant, parameter changes rarely kick-start it. If it’s wildly variable, they usually don’t stabilize it. If it’s growing out of control, they don’t slow it down.
Parameters become leverage points when they go into ranges that kick off one of the items higher on this list. Interest rates, for example, or birth rates, control the gains around reinforcing feedback loops. System goals are parameters that can make big differences.
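A quick illustration (my numbers, not the book's) of why a parameter matters most when it sets the gain of a reinforcing loop: an interest rate looks like a mere number, but it multiplies a stock that feeds back into its own growth.

```python
# An interest rate is "just a parameter," but it is the gain on a
# reinforcing loop: balance -> interest -> larger balance -> more interest.
def compound(balance: float, rate: float, years: int) -> float:
    for _ in range(years):
        balance += rate * balance  # growth proportional to the stock itself
    return balance

for rate in (0.03, 0.05, 0.07):
    print(f"rate {rate:.0%}: 1,000 grows to "
          f"{compound(1000.0, rate, 30):,.0f} in 30 years")
```

Moving the rate from 3 to 7 percent roughly triples the 30-year outcome; the same four-point change in a parameter sitting outside any feedback loop would barely register.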
You hear about catastrophic river floods much more often than catastrophic lake floods, because stocks that are big, relative to their flows, are more stable than small ones.
You can often stabilize a system by increasing the capacity of a buffer. But if a buffer is too big, the system gets inflexible. It reacts too slowly.
There’s leverage, sometimes magical, in changing the size of buffers. But buffers are usually physical entities, not easy to change.
The plumbing structure, the stocks and flows and their physical arrangement, can have an enormous effect on how the system operates.
But often physical rebuilding is the slowest and most expensive kind of change to make in a system. Some stock-and-flow structures are just plain unchangeable.
Physical structure is crucial in a system, but is rarely a leverage point, because changing it is rarely quick or simple. The leverage point is in proper design in the first place.
Delays in feedback loops are critical determinants of system behavior. They are common causes of oscillations.
A system just can’t respond to short-term changes when it has long-term delays. That’s why a massive central-planning system, such as the Soviet Union or General Motors, necessarily functions poorly.
Delays that are too short cause overreaction, “chasing your tail,” oscillations amplified by the jumpiness of the response. Delays that are too long cause damped, sustained, or exploding oscillations, depending on how much too long.
I would list delay length as a high leverage point, except for the fact that delays are not often easily changeable. Things take as long as they take.
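A sketch of delay-driven oscillation, with the gain, goal, and delay values all mine: a balancing loop steers a stock toward a goal of 100, but acts on information that is `delay` steps stale.

```python
# A balancing loop correcting a stock toward a goal of 100, acting on
# old news about the stock's level. Gain and goal are invented.
def residual_swing(delay: int, gain: float = 0.4, steps: int = 30) -> float:
    goal, stock = 100.0, 0.0
    history = [stock] * (delay + 1)
    for _ in range(steps):
        perceived = history[-(delay + 1)]    # stale information
        stock += gain * (goal - perceived)   # corrective flow
        history.append(stock)
    return max(abs(s - goal) for s in history[-10:])

for delay in (0, 2, 5):
    print(f"delay={delay}: swing around goal in last 10 steps = "
          f"{residual_swing(delay):.1f}")
```

With no delay the stock settles; a modest delay leaves it oscillating around the goal; a long delay makes the oscillation grow, which is the damped, sustained, or exploding progression the highlight describes.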
Now we’re beginning to move from the physical part of the system to the information and control parts, where more leverage can be found.
A complex system usually has numerous balancing feedback loops it can bring into play, so it can self-correct under different conditions and impacts.
One of the big mistakes we make is to strip away these “emergency” response mechanisms because they aren’t often used and they appear to be costly. In the short term, we see no effect from doing this. In the long term, we drastically narrow the range of conditions over which the system can survive.
The strength of a balancing loop—its ability to keep its appointed stock at or near its goal—depends on the combination of all its parameters and links—the accuracy and rapidity of monitoring, the quickness and power of response, the directness and size of corrective flows. Sometimes there are leverage points here.
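A sketch of loop strength, with the drain and gains invented: a stock leaks at a constant rate while a balancing loop pushes it back toward its goal. The stock settles where correction equals drain, so the steady-state error works out to drain divided by gain, and a more powerful response holds the stock proportionally closer.

```python
# How the power of the corrective response sets how close the stock
# stays to its goal under a constant external drain. Numbers invented.
def steady_state_error(gain: float, drain: float = 5.0) -> float:
    goal, stock = 100.0, 100.0
    for _ in range(200):
        stock += gain * (goal - stock) - drain  # correction minus drain
    return goal - stock                         # settles at drain / gain

for gain in (0.1, 0.5, 1.0):
    print(f"gain={gain}: stock holds {steady_state_error(gain):.0f} below goal")
```

The error scales as drain over gain, which is also why a feedback must be strengthened in step with the impact it is designed to correct.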
Companies and governments are fatally attracted to the price leverage point, but too often determinedly push it in the wrong direction with subsidies, taxes, and other forms of confusion.
These modifications weaken the feedback power of market signals by twisting information in their favor. The real leverage here is to keep them from doing it.
The strength of a balancing feedback loop is important relative to the impact it is designed to correct. If the impact increases in strength, the feedbacks have to be strengthened too.
The power of big industry calls for the power of big government to hold it in check; a global economy makes global regulations necessary.
A balancing feedback loop is self-correcting; a reinforcing feedback loop is self-reinforcing. The more it works, the more it gains power to work some more, driving system behavior in one direction.
Reinforcing feedback loops are sources of growth, explosion, erosion, and collapse in systems. A system with an unchecked reinforcing loop ultimately will destroy itself. That’s why there are so few of them. Usually a balancing loop will kick in sooner or later.
Reducing the gain around a reinforcing loop—slowing the growth—is usually a more powerful leverage point in systems than strengthening balancing loops, and far more preferable than letting the reinforcing loop run.
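One toy comparison of the two interventions, not a general theorem, with all gains invented: against a stock growing at 10 percent per step, taking 0.05 off the reinforcing gain ends lower than adding a balancing loop of the same 0.05 strength, because countering growth still leaves the engine running.

```python
# Two interventions on runaway growth: weaken the reinforcing gain,
# or push back with a balancing loop. All gains are illustrative.
def run(r_gain: float, b_gain: float, steps: int = 30) -> float:
    stock = 10.0
    for _ in range(steps):
        stock += r_gain * stock            # reinforcing: growth ∝ stock
        stock -= b_gain * (stock - 10.0)   # balancing: pull back toward 10
    return stock

print(f"unchecked growth (r=0.10):           {run(0.10, 0.00):6.0f}")
print(f"add balancing loop (r=0.10, b=0.05): {run(0.10, 0.05):6.0f}")
print(f"halve reinforcing gain (r=0.05):     {run(0.05, 0.00):6.0f}")
```

Over this horizon the halved gain wins even though both interventions are nominally the same size; the balancing loop spends its effort fighting growth that the weakened loop simply never generates.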
There are many reinforcing feedback loops in society that reward the winners of a competition with the resources to win even bigger next time—the “success to the successful” trap.
Antipoverty programs are weak balancing loops that try to counter these strong reinforcing ones. It would be much more effective to weaken the reinforcing loops. That’s what progressive income tax, inheritance tax, and universal high-quality public education programs are meant to do.
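A sketch of that reinforcing loop, with the rates and head start invented: the leader earns a slightly better return on what it already holds, so a 10 percent edge compounds into dominance; equalizing the returns is the loop-weakening fix the highlight points to.

```python
# "Success to the successful": the larger player earns a better
# return on what it already has. Rates and head start are invented.
def race(equalize: bool) -> float:
    a, b = 110.0, 100.0                              # a starts 10% ahead
    for _ in range(50):
        ra = 0.05 if equalize or a < b else 0.07     # leader's edge
        rb = 0.05 if equalize or b < a else 0.07
        a *= 1 + ra
        b *= 1 + rb
    return a / b

print(f"edge compounds:    a/b = {race(equalize=False):.2f}")
print(f"returns equalized: a/b = {race(equalize=True):.2f}")
```

The equalized run is what a loop-weakening policy aims at: it does not confiscate the lead, it just stops the lead from buying a faster growth rate.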
Missing information flows is one of the most common causes of system malfunction. Adding or restoring information can be a powerful intervention, usually much easier and cheaper than rebuilding physical infrastructure.
The rules of the system define its scope, its boundaries, its degrees of freedom.
Constitutions are the strongest examples of social rules. Physical laws such as the second law of thermodynamics are absolute rules, whether we understand them or not or like them or not. Laws, punishments, incentives, and informal social agreements are progressively weaker rules.
They are high leverage points. Power over the rules is real power.
If you want to understand the deepest malfunctions of systems, pay attention to the rules and to who has power over them.