At Los Alamos, in the lee of a great volcanic caldera, the clouds spill across the sky, in random formation, yes, but also not-random, standing in uniform spikes or rolling in regularly furrowed patterns like brain matter.
Clouds represented a side of nature that the mainstream of physics had passed by, a side that was at once fuzzy and detailed, structured and unpredictable. Feigenbaum thought about such things, quietly and unproductively.
Physiologists found a surprising order in the chaos that develops in the human heart, the prime cause of sudden, unexplained death. Ecologists explored the rise and fall of gypsy moth populations. Economists dug out old stock price data and tried a new kind of analysis.
When Mitchell Feigenbaum began thinking about chaos at Los Alamos, he was one of a handful of scattered scientists, mostly unknown to one another. A mathematician in Berkeley, California, had formed a small group dedicated to creating a new study of “dynamical systems.” A population biologist at Princeton University was about to publish an impassioned plea that all scientists should look at the surprisingly complex behavior lurking in some simple models. A geometer working for IBM was looking for a new word to describe a family of shapes—jagged, tangled, splintered, twisted, fractured—that he…
To some physicists chaos is a science of process rather than state, of becoming rather than being.
The first chaos theorists, the scientists who set the discipline in motion, shared certain sensibilities. They had an eye for pattern, especially pattern that appeared on different scales at the same time. They had a taste for randomness and complexity, for jagged edges and sudden leaps. Believers in chaos—and they sometimes call themselves believers, or converts, or evangelists—speculate about determinism and free will, about evolution, about the nature of conscious intelligence.
As one physicist put it: “Relativity eliminated the Newtonian illusion of absolute space and time; quantum theory eliminated the Newtonian dream of a controllable measurement process; and chaos eliminates the Laplacian fantasy of deterministic predictability.”
“We already know the physical laws that govern everything we experience in everyday life…. It is a tribute to how far we have come in theoretical physics that it now takes enormous machines and a great deal of money to perform an experiment whose results we cannot predict.”
Traditionally, when physicists saw complex results, they looked for complex causes. When they saw a random relationship between what goes into a system and what comes out, they assumed that they would have to build randomness into any realistic theory, by artificially adding noise or error. The modern study of chaos began with the creeping realization in the 1960s that quite simple mathematical equations could model systems every bit as violent as a waterfall.
Tiny differences in input could quickly become overwhelming differences in output—a phenomenon given the name “sensitive dependence on initial conditions.”
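A minimal sketch of that phenomenon, assuming nothing from the passage itself: the logistic map, a standard one-line equation from the chaos literature, sends two nearly identical starting values drifting apart until they bear no relation to each other. The map, the setting r = 4.0, and the starting values are conventional illustrations, not details from the book.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1 - x). r = 4.0 is the standard fully chaotic
# parameter; the starting values differ by one part in four million.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4000001
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
```

Within a few dozen iterations the gap grows from a millionth to order one; the two histories are effectively unrelated.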
The machine, a Royal McBee, was a thicket of wiring and vacuum tubes that occupied an ungainly portion of Lorenz’s office, made a surprising and irritating noise, and broke down every week or so. It had neither the speed nor the memory to manage a realistic simulation of the earth’s atmosphere and oceans. Yet Lorenz created a toy weather in 1960 that succeeded in mesmerizing his colleagues.
He was the god of this machine universe, free to choose the laws of nature as he pleased. After a certain amount of undivine trial and error, he chose twelve. They were numerical rules—equations that expressed the relationships between temperature and pressure, between pressure and wind speed.
Thanks to the determinism of physical law, further intervention would then be unnecessary. Those who made such models took for granted that, from present to future, the laws of motion provide a bridge of mathematical certainty. Understand the laws and you understand the universe. That was the philosophy behind modeling weather on a computer.
To most serious meteorologists, forecasting was less than science. It was a seat-of-the-pants business performed by technicians who needed some intuitive ability to read the next day’s weather in the instruments and the clouds. It was guesswork. At centers like M.I.T., meteorology favored problems that had solutions.
Not only did meteorologists scorn forecasting, but in the 1960s virtually all serious scientists mistrusted computers. These souped-up calculators hardly seemed like tools for theoretical science.
Weather forecasting had been waiting two centuries for a machine that could repeat thousands of calculations over and over again by brute force. Only a computer could cash in the Newtonian promise that the world unfolded along a deterministic path, rule-bound like the planets, predictable like eclipses and tides.
Astronomers did not achieve perfection and never would, not in a solar system tugged by the gravities of nine planets, scores of moons and thousands of asteroids, but calculations of planetary motion were so accurate that people forgot they were forecasts.
In these days of Einstein’s relativity and Heisenberg’s uncertainty, Laplace seems almost buffoon-like in his optimism, but much of modern science has pursued his dream. Implicitly, the mission of many twentieth-century scientists—biologists, neurologists, economists—has been to break their universes down into the simplest atoms that will obey scientific rules.
There was always one small compromise, so small that working scientists usually forgot it was there, lurking in a corner of their philosophies like an unpaid bill. Measurements could never be perfect.
Computers rely on the same assumption in guiding spacecraft: approximately accurate input gives approximately accurate output. Economic forecasters rely on this assumption, though their success is less apparent. So did the pioneers in global weather forecasting.
There had been no malfunction. The problem lay in the numbers he had typed. In the computer’s memory, six decimal places were stored: .506127. On the printout, to save space, just three appeared: .506. Lorenz had entered the shorter, rounded-off numbers, assuming that the difference—one part in a thousand—was inconsequential.
A small numerical error was like a small puff of wind—surely the small puffs faded or canceled each other out before they could change important, large-scale features of the weather. Yet in Lorenz’s particular system of equations, small errors proved catastrophic.
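A hedged sketch of that experiment. Lorenz’s 1960 model had twelve equations; as a stand-in, this uses the three-variable system he later published in “Deterministic Nonperiodic Flow,” with the canonical parameters from that paper. Applying his rounded-off number to one coordinate of the starting point, and the forward-Euler integrator with its step size, are illustrative choices.

```python
# Lorenz's rounding experiment, sketched with his later three-variable
# system (sigma=10, r=28, b=8/3, the parameters from the 1963 paper).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations.
    dx = sigma * (y - x)
    dy = r * x - y - x * z
    dz = x * y - b * z
    return x + dt * dx, y + dt * dy, z + dt * dz

full = (1.0, 1.0, 1.506127)    # six decimal places, as in the computer's memory
rounded = (1.0, 1.0, 1.506)    # three decimal places, as on the printout

for step in range(1, 3001):
    full = lorenz_step(*full)
    rounded = lorenz_step(*rounded)
    if step % 600 == 0:
        gap = abs(full[0] - rounded[0])
        print(f"t={step * 0.01:5.1f}  x_full={full[0]:8.3f}  "
              f"x_rounded={rounded[0]:8.3f}  gap={gap:.3f}")
```

Run long enough, the two columns stop resembling each other entirely, which is what Lorenz saw in his toy weather.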
Although his equations were gross parodies of the earth’s weather, he had a faith that they captured the essence of the real atmosphere. That first day, he decided that long-range weather forecasting must be doomed.
“The average person, seeing that we can predict tides pretty well a few months ahead, would say, why can’t we do the same thing with the atmosphere, it’s just a different fluid system, the laws are about as complicated. But I realized that any physical system that behaved nonperiodically would be unpredictable.”
The intellectual father of this popular notion was Von Neumann, who built his first computer with the precise intention, among other things, of controlling the weather. He surrounded himself with meteorologists and gave breathtaking talks about his plans to the general physics community.
Indeed, by the seventies and eighties, economic forecasting by computer bore a real resemblance to global weather forecasting. The models would churn through complicated, somewhat arbitrary webs of equations, meant to turn measurements of initial conditions—atmospheric pressure or money supply—into a simulation of future trends.
In practice, econometric models proved dismally blind to what the future would bring, but many people who should have known better acted as though they believed in the results.
Computer modeling had indeed succeeded in changing the weather business from an art to a science. The European Centre’s assessments suggested that the world saved billions of dollars each year from predictions that were statistically better than nothing. But beyond two or three days the world’s best forecasts were speculative, and beyond six or seven they were worthless.
Had he stopped with the Butterfly Effect, an image of predictability giving way to pure randomness, then Lorenz would have produced no more than a piece of very bad news. But Lorenz saw more than randomness embedded in his weather model. He saw a fine geometrical structure, order masquerading as randomness.
He turned his attention more and more to the mathematics of systems that never found a steady state, systems that almost repeated themselves but never quite succeeded.
Lorenz saw that there must be a link between the unwillingness of the weather to repeat itself and the inability of forecasters to predict it—a link between aperiodicity and unpredictability.
At first his computer tended to lock into repetitive cycles. But Lorenz tried different sorts of minor complications, and he finally succeeded when he put in an equation that varied the amount of heating from east to west, corresponding to the real-world variation between the way the sun warms the east coast of North America, for example, and the way it warms the Atlantic Ocean. The repetition disappeared.
The Butterfly Effect acquired a technical name: sensitive dependence on initial conditions. And sensitive dependence on initial conditions was not an altogether new notion.
Linear relationships can be captured with a straight line on a graph. Linear relationships are easy to think about: the more the merrier. Linear equations are solvable, which makes them suitable for textbooks. Linear systems have an important modular virtue: you can take them apart, and put them together again—the pieces add up. Nonlinear systems generally cannot be solved and cannot be added together.
Without friction a simple linear equation expresses the amount of energy you need to accelerate a hockey puck. With friction the relationship gets complicated, because the amount of energy changes depending on how fast the puck is already moving.
Nonlinearity means that the act of playing the game has a way of changing the rules. You cannot assign a constant importance to friction, because its importance depends on speed. Speed, in turn, depends on friction. That twisted changeability makes nonlinearity hard to calculate, but it also creates rich kinds of behavior that never occur in linear systems.
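A small illustration of that modular virtue and its failure, not drawn from the book: under a linear drag law, doubling the puck’s initial speed exactly doubles its speed at every later moment; under a speed-dependent (quadratic) drag law, that proportionality breaks down. The coefficient and time span are arbitrary.

```python
# Linear vs. nonlinear friction on a coasting puck, integrated by
# forward Euler. The linear law scales cleanly with the starting speed;
# the quadratic law does not, because the friction's effect depends on
# how fast the puck is already moving.

def coast(v0, drag, k=0.5, dt=0.01, t_end=2.0):
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * drag(v, k)
    return v

linear = lambda v, k: -k * v          # solvable: v(t) = v0 * exp(-k*t)
quadratic = lambda v, k: -k * v * v   # nonlinear: no simple scaling rule

for v0 in (1.0, 2.0):
    print(f"v0={v0}: linear -> {coast(v0, linear):.4f}, "
          f"quadratic -> {coast(v0, quadratic):.4f}")
# The linear results are exactly proportional to v0; the quadratic ones are not.
```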
If the coffee is just warm, its heat will dissipate without any hydrodynamic motion at all. The coffee remains in a steady state. But if it is hot enough, a convective overturning will bring hot coffee from the bottom of the cup up to the cooler surface. Convection in coffee becomes plainly visible when a little cream is dribbled into the cup.
Lorenz drily told a gathering of scientists, “We might have trouble forecasting the temperature of the coffee one minute in advance, but we should have little difficulty in forecasting it an hour ahead.” The equations of motion that govern a cooling cup of coffee must reflect the system’s destiny. They must be dissipative.
As the fluid underneath becomes hot, it expands. As it expands, it becomes less dense. As it becomes less dense, it becomes lighter, enough to overcome friction, and it pushes up toward the surface. In a carefully designed box, a cylindrical roll develops, with the hot fluid rising around one side and cool fluid sinking down around the other.
Although the Lorenz system did not fully model convection, it did turn out to have exact analogues in real systems. For example, his equations precisely describe an old-fashioned electrical dynamo, the ancestor of modern generators, where current flows through a disc that rotates through a magnetic field.
Water pours in from the top at a steady rate. If the flow of water in the waterwheel is slow, the top bucket never fills up enough to overcome friction, and the wheel never starts turning. (Similarly, in a fluid, if the heat is too low to overcome viscosity, it will not set the fluid in motion.) If the flow is faster, the weight of the top bucket sets the wheel in motion (left). The waterwheel can settle into a rotation that continues at a steady rate (center). But if the flow is faster still (right), the spin can become chaotic, because of nonlinear effects built into the system. As buckets…
THE LORENZ ATTRACTOR (on facing page). This magical image, resembling an owl’s mask or butterfly’s wings, became an emblem for the early explorers of chaos. It revealed the fine structure hidden within a disorderly stream of data. Traditionally, the changing values of any one variable could be displayed in a so-called time series (top). To show the changing relationships among three variables required a different technique. At any instant in time, the three variables fix the location of a point in three-dimensional space; as the system changes, the motion of the point represents the continuously changing variables.
Because the system never exactly repeats itself, the trajectory never intersects itself. Instead it loops around and around forever. Motion on the attractor is abstract, but it conveys the flavor of the motion of the real system.
To make a picture from the data, Lorenz used each set of three numbers as coordinates to specify the location of a point in three-dimensional space. Thus the sequence of numbers produced a sequence of points tracing a continuous path, a record of the system’s behavior. Such a path might lead to one place and stop, meaning that the system had settled down to a steady state, where the variables for speed and temperature were no longer changing.
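A sketch of that procedure under stated assumptions: the three-variable Lorenz system with its canonical parameters stands in for the data, a simple Euler integrator generates the triples, and matplotlib (one plotting choice, not anything specified in the book) draws the resulting path.

```python
# Trace the three Lorenz variables as a path in three-dimensional space:
# each (x, y, z) triple becomes the coordinates of one point.
import matplotlib.pyplot as plt

def lorenz_step(state, dt=0.01, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (r * x - y - x * z),
            z + dt * (x * y - b * z))

state = (1.0, 1.0, 1.0)
path = []
for _ in range(10000):
    state = lorenz_step(state)
    path.append(state)

xs, ys, zs = zip(*path)
ax = plt.figure().add_subplot(projection="3d")
ax.plot(xs, ys, zs, linewidth=0.4)  # loops around both lobes, never crossing itself
plt.show()
```

Because this system settles onto neither a fixed point nor a repeating cycle, the traced path keeps looping without ever closing, producing the double-spiral shape described below.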
In the thousands of articles that made up the technical literature of chaos, few were cited more often than “Deterministic Nonperiodic Flow.” For years, no single object would inspire more illustrations, even motion pictures, than the mysterious curve depicted at the end, the double spiral that became known as the Lorenz attractor.
Few laymen realized how tightly compartmentalized the scientific community had become, a battleship with bulkheads sealed against leaks. Biologists had enough to read without keeping up with the mathematics literature—for that matter, molecular biologists had enough to read without keeping up with population biology.
But Lorenz was a meteorologist, and no one thought to look for chaos on page 130 of volume 20 of the Journal of the Atmospheric Sciences.
At high speed the subjects sailed smoothly along. Nothing could have been simpler. They didn’t see the anomalies at all. Shown a red six of spades, they would sing out either “six of hearts” or “six of spades.” But when the cards were displayed for longer intervals, the subjects started to hesitate.
In Kuhn’s scheme, normal science consists largely of mopping up operations. Experimentalists carry out modified versions of experiments that have been carried out many times before. Theorists add a brick here, reshape a cornice there, in a wall of theory. It could hardly be otherwise.
Central to Kuhn’s ideas is the vision of normal science as solving problems, the kinds of problems that students learn the first time they open their textbooks. Such problems define an accepted style of achievement that carries most scientists through graduate school, through their thesis work, and through the writing of journal articles that makes up the body of academic careers.