Kindle Notes & Highlights
Complex systems are more spontaneous, more disorderly, more alive than that. At the same time, however, their peculiar dynamism is also a far cry from the weirdly unpredictable gyrations known as chaos.
And yet chaos by itself doesn’t explain the structure, the coherence, the self-organizing cohesiveness of complex systems. Instead, all these complex systems have somehow acquired the ability to bring order and chaos into a special kind of balance. This balance point—often called the edge of chaos—is where the components of a system never quite lock into place, and yet never quite dissolve into turbulence, either. The edge of chaos is where life has enough stability to sustain itself and enough creativity to deserve the name of life. The edge of chaos is where new ideas and innovative genotypes...
The edge of chaos is the constantly shifting battle zone between stagnation and anarchy, the one place where a complex system can be spontaneous, adaptive, and alive.
It didn’t take very long for Arthur to realize that, when it came to real-world complexities, the elegant equations and the fancy mathematics he’d spent so much time on in school were no more than tools—and limited tools at that. The crucial skill was insight, the ability to see connections.
The economists of the 1930s and 1940s were long on insight, but they were often a trifle weak on logic. And even when they weren’t, you’d still find that they came to very different conclusions on the same problem: it turns out they were arguing from different, unstated assumptions.
“I realized that I had been terribly unsophisticated about biology,” he says. “When you’re trained the way I was, in mathematics and engineering and economics, you tend to view science as something that only applies when you can use theorems and mathematics. But when it came to looking out the window at the domain of life, of organisms, of nature, I had this view that, somehow, science stops short.”
A living cell—although much too complicated to analyze mathematically—is a self-organizing system that survives by taking in energy in the form of food and excreting energy in the form of heat and waste.
In fact, wrote Prigogine in one article, it’s conceivable that the economy is a self-organizing system, in which market structures are spontaneously organized by such things as the demand for labor and the demand for goods and services.
Prigogine’s central point was that self-organization depends upon self-reinforcement: a tendency for small effects to become magnified when conditions are right, instead of dying away.
Positive feedback seemed to be the sine qua non of change, of surprise, of life itself.
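The self-reinforcement idea can be sketched in a few lines of Python (the gain values and numbers here are my illustration, not the text's): a perturbation fed through a gain above 1 is magnified step by step, while the same perturbation under a gain below 1 dies away.

```python
# Toy sketch of positive vs. negative feedback (illustrative numbers).

def amplify(x0, gain, steps):
    """Apply a multiplicative feedback `gain` to x0 for `steps` rounds."""
    x = x0
    for _ in range(steps):
        x *= gain
    return x

tiny = 1e-6                       # a microscopically small initial effect
grown = amplify(tiny, 1.5, 40)    # gain > 1: the effect becomes macroscopic
faded = amplify(tiny, 0.5, 40)    # gain < 1: the effect dies away
print(grown > 1.0, faded < tiny)
```

The same tiny seed ends up macroscopic or negligible depending only on whether the feedback reinforces or damps it.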
Arrow to be an affable, open-minded man who loved nothing better than a good debate, and who could still be your friend after tearing your arguments to shreds. No, it was just that—well, talking to Arrow was like talking to the pope.
Everything affects everything else, and you have to understand that whole web of connections.
Otherwise, they’re going to hear something like, “Joe, this is a great idea—too bad it’s not our department.” And everybody has to get papers accepted for publication in established scholarly journals—which are almost invariably going to restrict themselves to papers in a recognized specialty.
More and more over the past decade, he’d begun to sense that the old reductionist approaches were reaching a dead end, and that even some of the hard-core physical scientists were getting fed up with mathematical abstractions that ignored the real complexities of the world. They seemed to be half-consciously groping for a new approach—and in the process, he thought, they were cutting across the traditional boundaries in a way they hadn’t done in years. Maybe centuries.
In part because of their computer simulations, and in part because of new mathematical insights, physicists had begun to realize by the early 1980s that a lot of messy, complicated systems could be described by a powerful theory known as “nonlinear dynamics.” And in the process, they had been forced to face up to a disconcerting fact: the whole really can be greater than the sum of its parts.
Now, for most people that fact sounds pretty obvious. It was disconcerting for the physicists only because they had spent the past 300 years having a love affair with linear systems—in which the whole is precisely equal to the sum of its parts. In fairness, they had had plenty of reason to feel this way. If a system is precisely equal to the sum of its parts, then each component is free to do its own thing regardless of what’s happening elsewhere.
(The name “linear” refers to the fact that if you plot such an equation on graph paper, the plot is a straight line.) Besides, an awful lot of nature does seem to work that way. Sound is a linear system, which is why we can hear an oboe...
And the mathematical expression of that property—to the extent that such systems can be described by mathematics at all—is a nonlinear equation: one whose graph is curvy. Nonlinear equations are notoriously difficult to solve by hand, which is why scientists tried to avoid them for so long. But that is precisely where computers came in.
As soon as scientists started playing with these machines back in the 1950s and 1960s, they realized that a computer couldn’t care less about linear versus nonlinear. It would just grind out the solution either way. And as they started to take advantage of that fact, applying that computer power to more and more kinds of nonlinear equations, they began to find strange, wonderful behaviors that their experience with linear systems had never prepared them for.
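What "just grind out the solution" looks like can be sketched with a simple Euler integration of a nonlinear equation; the logistic growth equation below is my illustrative choice, not one from the text.

```python
# Euler-stepping the nonlinear logistic equation dx/dt = r*x*(1 - x).
# The right-hand side is curvy (nonlinear), but the computer doesn't care:
# it just repeats the same arithmetic step either way.

def euler_logistic(x0, r, dt, steps):
    x = x0
    for _ in range(steps):
        x += dt * r * x * (1 - x)
    return x

# Starting near zero, the numerical solution climbs an S-curve and
# settles at the fixed point x = 1.
print(euler_logistic(0.01, 1.0, 0.01, 2000))
```

No closed-form cleverness is needed; the machine simply marches the equation forward in small time steps.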
The passage of a water wave down a shallow canal, for example, turned out to have profound connections to certain subtle dynamics in quantum field theory: both were examples of isolated, self-sustaining pulses of energy called solitons. The Great Red Spot on Jupiter may be another such soliton. A swirl...
indeed, the self-organized motion in a simmering pot of soup turned out to be governed by dynamics very similar to the nonlinear formation of other kinds of patterns, such as the stripes of a zebra or the spots on a butterfly’s wings.
But most startling of all was the nonlinear phenomenon known as chaos. In the everyday world of human affairs, no one is surprised to learn that a tiny event over here can have an enormous effect over there.
For want of a nail, the shoe was lost, et cetera. But when the physicists started paying serious attention to nonlinear systems in their own domain, they began to realize...
the message was the same: everything is connected, and often with incredible sensitivity. Tiny perturbations won’t always remain tiny. Under the right circumstances, the slightest uncertainty can grow until the system’s future becomes utterly unpredictable—or, in a word, chaotic.
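That sensitivity is easy to demonstrate with the logistic map, a standard textbook chaotic system (my example, not one named in the text): perturb the starting point by one part in ten billion, and after a few dozen iterations the two trajectories bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> 4x(1 - x), a classic chaotic iteration.

def logistic_orbit(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = logistic_orbit(0.3, 50)
b = logistic_orbit(0.3 + 1e-10, 50)   # a one-part-in-ten-billion perturbation
print(abs(a - b))                      # the tiny difference has blown up
```

The perturbation roughly doubles each iteration, so by step 50 the initial uncertainty has swallowed the whole unit interval: the system's future is, for practical purposes, unpredictable.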
Also known as cognitive science, this was a hot area and getting hotter. When done properly, it combined the talents of neuroscientists studying the detailed wiring of the brain, cognitive psychologists studying the second-by-second process of high-level thinking and reasoning, artificial intelligence researchers trying to model those thinking processes in a computer—even linguists studying the structure of human languages and anthropologists studying human culture. Now that, Rota and Metropolis told Cowan, was an interdisciplinary topic worthy of his institute.
Professor Murray Gell-Mann of Caltech, the fifty-five-year-old enfant terrible of particle physics. Gell-Mann had called up Cowan about a week before the August 17 meeting, saying that Pines had told him about the institute idea. Gell-Mann thought it was fantastic. He’d been wanting to do something like this all his life, he said. He wanted to tackle problems like the rise and fall of ancient civilizations and the long-term sustainability of our own civilization—problems that would transcend the disciplinary boundaries in a big way. He’d had no success whatsoever getting anything started at...
So, of course, when Pines nominated him to spearhead the effort, he said, “Yes.” Cowan had already given it some thought, since Pines had talked to him about the nomination beforehand. And what had finally persuaded him was the same thing that had always lured him into management positions at Los Alamos: “Management was stuff that other people could do—but I always felt that maybe they were doing it wrong.” Besides, nobody else was exactly frothing at the mouth to step forward.
It was that Holland’s whole way of looking at things had a unity, a clarity, a rightness that made you slap your forehead and say, “Of course! Why didn’t I think of that?” Holland’s ideas produced a shock of recognition, the kind that made more ideas start exploding in your own brain.
“Sentence by sentence,” says Arthur, “Holland was answering all kinds of questions I’d been asking myself for years: What is adaptation? What is emergence? And many more questions that I never realized I’d been asking.”
“Evolution, Games, and Learning,”
Within months they were talking about the institute’s program being not just complex systems, but complex adaptive systems. And Holland’s personal intellectual agenda—to understand the intertwining processes of emergence and adaptation—essentially became the agenda of the institute as a whole. He was accordingly given star billing at one of the institute’s first attempts at a large-scale meeting, the Complex Adaptive Systems workshop organized in August 1986 by Jack Cowan and Stanford biologist Marc Feldman.
But think about what that means in terms of chess. In the mathematical theory of games there is a theorem telling you that any finite, two-person, zero-sum game—such as chess—has an optimal solution. That is, there is a way of choosing moves that guarantees each player, black and white, will do at least as well as he would with any other choice of moves.
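The theorem can be seen in miniature by exhaustively searching a toy game; this subtract-1-or-2 game is a hypothetical stand-in for chess, whose game tree is far too vast to search this way.

```python
# Minimax in miniature: from a pile of n objects, each player in turn
# removes 1 or 2; whoever takes the last object wins. Exhaustive search
# over the (small) game tree finds the optimal strategy.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n):
    """True if the player to move from a pile of n can force a win."""
    if n == 0:
        return False   # no move left: the previous player already won
    # A position is winning if some legal move leaves the opponent
    # in a losing position.
    return any(not wins(n - take) for take in (1, 2) if take <= n)

# The optimal solution emerges: piles divisible by 3 are lost for
# the player to move.
print([n for n in range(10) if not wins(n)])  # [0, 3, 6, 9]
```

For chess the same backward-induction argument applies in principle; it is only the astronomical size of the tree that keeps the "optimal solution" forever out of reach.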
He learned to play checkers in first grade from his mother, who was also an expert bridge player. Everyone in the family was a passionate sailor, and both Holland and his mother frequently competed in regattas.
His father was a first-class gymnast—Holland himself spent several years at that in junior high school—and an avid outdoorsman. Holland’s family was always playing something. Bridge, golf, croquet, checkers, chess, Go—you name it.
At the time, of course, nobody knew to call this sort of thing “artificial intelligence” or “cognitive science.” But even so, the very act of programming computers—itself a totally new kind of endeavor—was forcing people to think much more carefully than ever before about what it meant to solve a problem. A computer was the ultimate Martian: you had to tell it everything: What are the data? How are they transformed? What are the steps to get from here to there? Those questions, in turn, led very quickly to issues that had bedeviled philosophers for centuries: What is knowledge? How is it...
Communication Sciences program was the kind of environment where such questions could thrive. What is emergence? And what is thinking? How does it work? What are its laws? What does it really mean for a system to adapt?
But now Holland was beginning to realize just how prescient Samuel’s focus on games had really been. This game analogy seemed to be true of any adaptive system. In economics the payoff is in money, in politics the payoff is in votes, and on and on. At some level, all these adaptive systems are fundamentally the same. And that meant, in turn, that all of them are fundamentally like checkers or chess: the space of possibilities is vast beyond imagining.
By 1962 he had put aside all his other research projects and was devoting himself to it essentially full time. In particular, he was determined to crack this problem of selection based on more than one gene—and not just because Fisher’s independent-gene assumption had bugged him more than anything else about that book.
According to the conventional wisdom, rule-based systems were so flexible that some form of centralized control was needed to prevent anarchy. With hundreds or thousands of rules watching a bulletin board crammed with messages, there was always the chance that several rules would suddenly hop up and start arguing over who got to post the next message.
So to prevent the computer equivalent of schizophrenia, most systems implemented elaborate “conflict resolution” strategies to make sure that only one rule could be active at a time.
Holland, however, saw such top-down conflict resolution as precisely the wrong way to go. Is the world such a simple and predictable place that you always know the best rule in advance? Hardly. And if the system has been told what to do in advance, then it’s a fraud to call the thing artificial intelligence: the intelligence isn’t in the program but in the programmer. No, Holland wanted control to be learned.
“Competition and cooperation may seem antithetical,” he says, “but at some very deep level, they are two sides of the same coin.”
To Holland, the obvious answer was to implement a kind of Hebbian reinforcement. Whenever the agent does something right and gets a positive feedback from the environment, it should strengthen the classifiers responsible. Whenever it does something wrong, it should likewise weaken the classifiers responsible. And either way, it should ignore the classifiers that were irrelevant.
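Under some simplifying assumptions (named rules with numeric strengths, a single scalar feedback signal), the reinforcement scheme might be sketched like this. It is a toy of my own construction, not Holland's full bucket-brigade algorithm.

```python
# Toy Hebbian-style reinforcement for classifier strengths:
# rules that fired before a reward get stronger, rules that fired
# before a penalty get weaker, idle rules are left alone.

def reinforce(strengths, fired, feedback, rate=0.1):
    """Update the strengths of the classifiers that fired."""
    for name in fired:
        strengths[name] += rate * feedback
    return strengths

rules = {"turn-left": 1.0, "turn-right": 1.0, "stop": 1.0}
reinforce(rules, fired=["turn-left"], feedback=+1.0)   # did something right
reinforce(rules, fired=["turn-right"], feedback=-1.0)  # did something wrong
print(rules)   # turn-left strengthened, turn-right weakened, stop untouched
```

Over many rounds, the useful classifiers accumulate strength and come to dominate the bidding, without any programmer ever specifying which rule is "best" in advance.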
At one time or another, Langton used every name for this phase transition that he could think of: the “transition to chaos,” the “boundary of chaos,” the “onset of chaos.” But the one that really captured the visceral feeling it gave him was “the edge of chaos.”
“It reminded me of the feelings I experienced when I learned to scuba dive in Puerto Rico,” he explains. “For most of our dives we were fairly close to shore, where the water was crystal clear and you could see the bottom perfectly about 60 feet down. However, one day our instructor took us to the edge of the continental shelf, where the 60-foot bottom gave way to an 80-degree slope that disappeared into the depths—I believe at that point the transition was to about 2000 feet. It made you realize that all the diving you had been doing, which had certainly seemed adventurous and daring, was...
“Well, life emerged in the oceans,” he adds, “so there you are at the edge, alive and appreciating that enormous fluid nursery. And that’s why ‘the edge of chaos’ carries for me a very similar feeling: because I believe life also originated at the edge of chaos. So here we are at the edge, alive a...
Langton, there’s even a theorem to that effect: the “undecidability theorem” proved by the British logician Alan Turing back in the 1930s. Paraphrased, the theorem essentially says that no matter how smart you think you are, there will always be algorithms that do things you can’t predict in advance. The only way to find out what they will do is to run them.
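A concrete taste of "the only way to find out is to run them" (my example, not Turing's): nobody knows a shortcut for predicting how many steps the famous Collatz procedure takes before it reaches 1; you simply iterate and see.

```python
# The Collatz iteration: halve even numbers, map odd n to 3n + 1.
# No known formula predicts the step count; running it is the only way.

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))   # 111 steps — discovered only by running it out
```

The rule is three lines long, yet its trajectory from 27 wanders up past 9000 before collapsing to 1, exactly the kind of behavior you could not have predicted without executing the algorithm.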
The essence of the problem is neatly captured by a scenario known as the Prisoners’ Dilemma, which was originally developed in the branch of mathematics called game theory. Two prisoners are being held in separate rooms, goes the story, and the police are interrogating both of them about a crime they committed jointly. Each prisoner has a choice: he can inform on his partner (“defect”) or else remain silent (“cooperate”—with his partner, not the police).
So what do the prisoners do—cooperate or defect? On the face of it, they ought to cooperate with each other and keep their mouths shut, because that way they both get the best result: freedom. But then they get to thinking. Prisoner A, being no fool, quickly realizes that there’s no way he can trust his partner not to turn state’s evidence and walk off with a fat reward, leaving him to pay for the privilege of sitting in a jail cell.
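The dilemma can be made concrete with one conventional payoff matrix (the numbers are illustrative; the text gives none): whatever the other prisoner does, defecting scores higher, yet mutual defection leaves both worse off than mutual cooperation.

```python
# A standard Prisoner's Dilemma payoff matrix.
# Entries are (A's payoff, B's payoff); higher is better.

PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),  # both stay silent
    ("cooperate", "defect"):    (0, 5),  # A sits in jail, B takes the reward
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # both inform, both do time
}

# Whatever B chooses, A does strictly better by defecting...
for b in ("cooperate", "defect"):
    a_coop = PAYOFF[("cooperate", b)][0]
    a_defect = PAYOFF[("defect", b)][0]
    print(f"B plays {b}: defecting beats cooperating? {a_defect > a_coop}")

# ...yet mutual defection (1, 1) is worse for both than mutual
# cooperation (3, 3). That tension is the dilemma.
```

Since the matrix is symmetric, prisoner B reasons identically, and two perfectly rational players talk themselves into the worst joint outcome.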
Emergence. First, says Farmer, this putative law would have to give a rigorous account of emergence: What does it really mean to say that the whole is greater than the sum of its parts? “It’s not magic,” he says. “But to us humans, with our crude little human brains, it feels like magic.” Flying boids (and real birds) adapt to the actions of their neighbors, thereby becoming a flock. Organisms cooperate and compete in a dance of coevolution, thereby becoming an exquisitely tuned ecosystem. Atoms search for a minimum energy state by forming chemical bonds with each other, thereby becoming the...

