Kindle Notes & Highlights
Read between August 2, 2018 - May 21, 2019
increasing returns: once a new technology starts opening up new niches for other goods and services, the people who fill those niches have every incentive to help that technology grow and prosper. Moreover, this process is a major driving force behind the phenomenon of lock-in: the more niches that spring up dependent on a given technology, the harder it is to change that technology—until something very much better comes along.
The dynamics of his genetic regulatory networks turned out to be a special case of what the physicists were calling "nonlinear dynamics."
stable cycles were like basins—or as the physicists put it, "attractors."
But suppose, thought Kauffman, just suppose that some of these smallish molecules floating around in the primordial soup were able to act as "catalysts"—submicroscopic matchmakers. Chemists see this sort of thing all the time: one molecule, the catalyst,
grabs two other molecules as they go tumbling by and brings them together, so that they can interact and fuse very quickly. Then the catalyst releases the newly wedded pair, grabs another pair, and so on. Chemists also know of a lot of catalyst molecules that act as chemical axe murderers, sidling up to one molecule after another and slicing them apart. Either way, catalysts are the backbone of the modern chemical industry. Gasoline, plastics, dyes, pharmaceuticals—almost none of it would be possible without catalysts.
Most obviously, they agreed, an autocatalytic set was a web of transformations among molecules in precisely the same way that an economy is a web of transformations among goods and services.
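The web-of-transformations idea can be sketched as a toy closure computation. This is only an illustration of Kauffman's concept, not his model; every molecule name and reaction below is made up.

```python
# A toy "autocatalytic set" closure: starting from a small food set of
# molecules, a reaction can fire only when its reactants AND its catalyst
# are already present; each firing may make a new molecule available,
# which can unlock further reactions -- a web of transformations.

def autocatalytic_closure(food, reactions):
    """reactions: list of (reactants, catalyst, product) tuples."""
    present = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, catalyst, product in reactions:
            if catalyst in present and set(reactants) <= present \
                    and product not in present:
                present.add(product)   # a new niche opens up
                changed = True
    return present

# Hypothetical molecules: 'a' and 'b' form the food set.
reactions = [
    (("a", "b"), "a", "ab"),     # 'a' catalyzes a + b -> ab
    (("ab", "a"), "ab", "aab"),  # 'ab' catalyzes its own elaboration
    (("aab", "b"), "c", "x"),    # never fires: catalyst 'c' is absent
]
print(sorted(autocatalytic_closure({"a", "b"}, reactions)))
```

The same loop reads equally well as an economy: goods and services already in the set make new goods producible, and each new good may enable still more.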
Obviously, a network analysis wouldn't help anybody predict precisely what new technologies are going to emerge next week. But it might help economists get statistical and structural measures of the process. When you introduce a new product, for example, how big an avalanche does it typically cause? How many other goods and services does it bring with it, and how many old ones go out? And how do you recognize when a good has become central to an economy, as opposed to being just another hula-hoop?
Kauffman asks. "Maybe you had to get to a critical diversity to then explode. Maybe it's because you've gone from algal mats to something that's a little more trophic and complex, so that there's an explosion of processes acting on processes to make new processes. It's the same thing as in an economy."
increasing returns,
Holland started by pointing out that the economy is an example par excellence of what the Santa Fe Institute had come to call "complex adaptive systems." In the natural world such systems included brains, immune systems, ecologies, cells, developing embryos, and ant colonies. In the human world they included cultural and social systems such as political parties or scientific communities. Once you learned how to recognize them, in fact, these systems were everywhere. But wherever you found them, said Holland, they all seemed to share certain crucial properties.
each of these systems is a network of many "agents" acting in parallel.
each agent finds itself in an environment produced by its interactions with the other agents in the system. It is constantly acting and reacting to what the other agents are doing. And because of that, essentially nothing in its environment is fixed.
the control of a complex adaptive system tends to be highly dispersed.
complex adaptive systems are constantly revising and rearranging their building blocks as they gain experience.
At some deep, fundamental level, said Holland, all these processes of learning, evolution, and adaptation are the same. And one of the fundamental mechanisms of adaptation in any given system is this revision and recombination of the building blocks.
all complex adaptive systems anticipate the future.
complex adaptive systems typically have many niches, each one of which can be exploited by an agent adapted to fill that niche.
Moreover, the very act of filling one niche opens up more niches—
And that, in turn, means that it's essentially meaningless to talk about a complex adaptive system being in equilibrium: the system can never get there. It is always unfolding, always in transition. In fact, if the system ever does reach equilibrium, it isn't just stable. It's dead. And by the same token, said Holland, there's no point in imagining that the agents in the system can ever "optimize" their fitness, or their utility, or whatever. The space of possibilities is too vast; they have no practical way of finding the optimum. The most they can ever do is to change and improve themselves …
Indeed, thought Holland, that's what this business of "emergence" was all about: building blocks at one level combining into new building blocks at a higher level. It seemed to be one of the fundamental organizing principles of the world. It certainly seemed to appear in every complex, adaptive system that you looked at.
As Holland thought about it, however, he became convinced that the most important reason lay deeper still, in the fact that a hierarchical, building-block structure utterly transforms a system's ability to learn, evolve, and adapt.
"So if I have a process that can discover building blocks," says Holland, "the combinatorics start working for me instead of against me. I can describe a great many complicated things with relatively few building blocks."
By the mid-1960s, in fact, Holland had proved what he called the schema theorem, the fundamental theorem of genetic algorithms: in the presence of reproduction, crossover, and mutation, almost any compact cluster of genes that provides above-average fitness will grow in the population exponentially. ("Schema" was his term for any specific pattern of genes.)
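The schema theorem is easy to watch in action. Below is a minimal genetic-algorithm sketch (not Holland's original code): fitness-proportional selection, one-point crossover, and point mutation on bit strings, tracking the frequency of one compact, above-average schema (strings beginning "11").

```python
import random
random.seed(0)

def fitness(bits):
    # toy fitness: count of ones (+1 keeps every string selectable)
    return sum(bits) + 1

def evolve(pop, generations=20, p_mut=0.01):
    n, length = len(pop), len(pop[0])
    for _ in range(generations):
        weights = [fitness(ind) for ind in pop]
        new_pop = []
        for _ in range(n):
            # reproduction: parents chosen in proportion to fitness
            a, b = random.choices(pop, weights=weights, k=2)
            cut = random.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            # point mutation: flip each bit with small probability
            child = [bit ^ (random.random() < p_mut) for bit in child]
            new_pop.append(child)
        pop = new_pop
    return pop

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(100)]
before = sum(ind[0] and ind[1] for ind in pop) / len(pop)
after_pop = evolve(pop)
after = sum(ind[0] and ind[1] for ind in after_pop) / len(after_pop)
print(f"schema 11******** frequency: {before:.2f} -> {after:.2f}")
```

Because the leading "11" pattern confers above-average fitness here, its frequency in the population should grow across generations, as the theorem predicts.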
But to Holland, the concept of prediction and models actually ran far deeper than conscious thought—or for that matter, far deeper than the existence of a brain. "All complex, adaptive systems—economies, minds, organisms—build models that allow them to anticipate the world,"
implicit model:
Human culture is an implicit model, a rich complex of myths and symbols that implicitly define a people's beliefs about their world and their rules for correct behavior.
How can any system, natural or artificial, learn enough about its universe to forecast future events?
Ultimately, says Holland, the answer has to be "no one." Because if there is a programmer lurking in the background—"the ghost in the machine"—then you haven't really explained anything. You've only pushed the mystery off someplace else. But fortunately, he says, there is an alternative: feedback from the environment. This was Darwin's great insight, that an agent can improve its internal models without any paranormal guidance whatsoever. It simply has to try the models out, see how well their predictions work in the real world, and—if it survives the experience—adjust the models to do better …
The definitive failure of white progressivism is denying feedback if its source is a marginalized population.
But in cognition, the process is essentially the same: the agents are individual minds, the feedback comes from teachers and direct experience, and the improvement is called learning.
an adaptive agent has to be able to take advantage of what its world is trying to tell it.
Holland had given his adaptive agent one form of learning. But there was another form still missing. It was the difference between exploitation and exploration.
As they later recounted in their 1986 book, Induction, all four of them had independently come to believe that such a theory had to be founded on the three basic principles that happened to be the same three that underlay Holland's classifier system: namely, that knowledge
can be expressed in terms of mental structures that behave very much like rules; that these rules are in competition, so that experience causes useful rules to grow stronger and unhelpful rules to grow weaker; and that plausible new rules are generated from combinations of old rules.
instead of assuming that your economic agents are perfectly rational, why not just model a bunch of them with Holland-style classifier systems and let them learn from experience like real economic agents?
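That proposal can be sketched in miniature. The agent below is not a full classifier system, just the core mechanism of one: a handful of competing rules with strengths that are reinforced or weakened by payoff from the environment. The three-action "market" is entirely made up for illustration.

```python
import random
random.seed(1)

class Agent:
    """A caricature of Holland-style rule competition: rules that earn
    payoff grow stronger, rules that fail grow weaker."""

    def __init__(self, actions):
        self.strength = {a: 1.0 for a in actions}  # all rules start equal

    def act(self):
        # rules bid in proportion to their current strength
        names = list(self.strength)
        return random.choices(names, weights=list(self.strength.values()))[0]

    def reward(self, action, payoff, rate=0.2):
        # move the winning rule's strength toward the payoff it earned
        self.strength[action] += rate * (payoff - self.strength[action])

agent = Agent(["buy", "sell", "hold"])
for step in range(50):
    action = agent.act()
    # hypothetical environment: only "buy" ever pays off
    payoff = 1.0 if action == "buy" else 0.0
    agent.reward(action, payoff)

best = max(agent.strength, key=agent.strength.get)
print(best)  # "buy" has outcompeted the other rules
```

The agent never deduces anything; it simply learns from experience which rule is useful, which is the contrast with perfectly rational agents that Arthur is drawing.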
Order → "Complexity" → Chaos

where "complexity" referred to the kind of eternally surprising dynamical behavior shown by the Class IV automata. "It immediately brought to mind some kind of phase transition," he says. Suppose you thought of the parameter lambda as being like temperature. Then the Class I and II rules that you found at low values of lambda would correspond to a solid like ice, where the water molecules are rigidly locked into a crystal lattice. The Class III rules that you found at high values of lambda would correspond to a vapor like steam, where the molecules are flying …
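For elementary cellular automata, lambda itself is simple to compute: the fraction of the eight neighborhood patterns whose output is the non-quiescent state. (For these small rule tables lambda is only a rough guide to Wolfram's classes, not a clean predictor.)

```python
# Langton's lambda for an elementary cellular automaton rule: the
# fraction of neighborhood patterns that map to the non-quiescent
# state (1). Low lambda tends toward frozen Class I/II behavior;
# high lambda toward Class III chaos; Class IV lives in between.

def langton_lambda(rule_number):
    table = [(rule_number >> i) & 1 for i in range(8)]  # 8 neighborhoods
    return sum(table) / 8

for rule in (0, 4, 110, 30):
    print(f"rule {rule:3d}: lambda = {langton_lambda(rule):.3f}")
```

Rule 0 (everything dies) gives lambda = 0.0; rule 30, a classic chaotic rule, gives 0.5; rule 110, the well-known Class IV rule, gives 0.625.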
Langton had learned that by increasing the temperature and pressure enough, you could go from steam to water without ever going through a phase transition at all; in general, gases and liquids are just two aspects of a single fluid phase of matter. So the distinction wasn't a fundamental one, and the resemblance of liquids to the Game of Life was only superficial.
First-order transitions are the kind we're all familiar with: sharp and precise. Raise the temperature of an ice cube past 32°F, for example, and the change from ice to water happens all at once. Basically, what's going on is that the molecules are forced to make an either-or choice between order and chaos. At temperatures below the transition, they are vibrating slowly enough that they can make the decision for crystalline order (ice). At temperatures above the transition, however, the molecules are vibrating so hard that the molecular bonds are breaking faster than they can reform, so they …
Second-order phase transitions are much less common in nature, Langton learned. (At least, they are at the temperatures and pressures humans are used to.) But they are much less abrupt, largely because the molecules in such a system don't have to make that either-or choice. They combine chaos and order. Above the transition temperature, for example, most of the molecules are tumbling over one another in a completely chaotic, fluid phase. Yet tumbling among them are myriads of submicroscopic islands of orderly, latticework solid, with molecules constantly dissolving and recrystallizing around …
Of course, if the temperature were taken all the way past the transition, the roles would reverse: the material would go from being a sea of fluid dotted with islands of solid, to being a continent of solid dotted with lakes of fluid. But right at the transition, the balance is perfect: the ordered structures fill a volume precisely equal to that of the chaotic fluid. Order an…
And right at the phase transition? In the material world, a given molecule might wind up in the ordered phase or the fluid phase; there would be no way to tell in advance, because order and chaos are so intimately intertwined at the molecular level. In the von Neumann universe, likewise, the Class IV rules might eventually produce a frozen configuration, or they might not. But either way, Langton says, the phase transition at the edge of chaos would correspond to what computer scientists call "undecidable" algorithms. These are the algorithms that might halt very quickly with certain inputs—
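A sketch of why simulation is the only general recourse: for a Class IV rule there is no shortcut that predicts, for every input, whether the dynamics will freeze. The best a bounded observer can do is run the system and watch. The rule (110, a known Class IV rule), the ring size, and the step budget below are all arbitrary choices for illustration.

```python
# Run an elementary CA on a ring and report whether it reached a fixed
# configuration within a step budget. A "False" is not a verdict that it
# never freezes -- only that the budget ran out, which is exactly the
# undecidability problem in miniature.

def step(cells, rule=110):
    n = len(cells)
    return tuple(
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                  + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def frozen_within(cells, budget):
    for _ in range(budget):
        nxt = step(cells)
        if nxt == cells:   # fixed point: the configuration froze
            return True
        cells = nxt
    return False           # budget exhausted: maybe later, maybe never

empty = tuple([0] * 16)           # all-quiescent: trivially frozen
seeded = tuple([0] * 15 + [1])    # one live cell: run it and see
print(frozen_within(empty, 100), frozen_within(seeded, 100))
```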
If the Santa Fe economists found the prospect exciting, however, they also found it vaguely disturbing. And the reason, says Arthur, was something that he didn't put his finger on until much later. "Economics, as it is usually practiced, operates in a purely deductive mode," he says. "Every economic situation is first translated into a mathematical exercise, which the economic agents are supposed to solve by rigorous, analytical reasoning. But then here were Holland, the neural net people, and the other machine-learning theorists. And they were all talking about agents that operate in an …
prediction isn't the essence of science. The essence is comprehension and explanation. And that's precisely what Santa Fe could hope to do with economics and other social sciences, he said: they could look for the analog of weather fronts—dynamical social phenomena they could understand and explain.
"That shift in viewpoint is very important," says Holland. Indeed, evolutionary biologists consider it so important that they've made up a special word for it: organisms in an ecosystem don't just evolve, they coevolve. Organisms don't change by climbing uphill to the highest peak of some abstract fitness landscape, the way biologists of R. A. Fisher's generation had it. (The fitness-maximizing organisms of classical population genetics actually look a lot like the utility-maximizing
agents of neoclassical economics.) Real organisms constantly circle and chase one another in an infinitely complex dance of coevolution.
At the head of that list was what the English biologist Richard Dawkins called the evolutionary arms race. This is where a plant, say, evolves ever tougher surfaces and ever more noxious chemical repellents to fend off hungry insects, even as the insects are evolving ever stronger jaws and ever more sophisticated chemical resistance mechanisms to press the attack. Also known as the Red Queen hypothesis, in honor of the Lewis Carroll character who told Alice that she had to run as fast as she could to stay in the same place, the evolutionary arms race seems to be a major impetus for …
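The Red Queen dynamic can be caricatured in a few lines. This is a deliberately crude toy, not a biological model: both traits ratchet steadily upward, yet the gap between them, which is what actually decides who eats whom, barely moves.

```python
import random
random.seed(2)

# Toy arms race: plant "toughness" vs. insect "jaw strength".
# Each generation both sides make small incremental improvements.
toughness, jaws = 1.0, 1.0
for generation in range(100):
    toughness += 0.1 * random.random()
    jaws += 0.1 * random.random()

print(f"toughness {toughness:.1f}, jaws {jaws:.1f}, "
      f"gap {toughness - jaws:+.2f}")
```

Absolute trait values climb several-fold, while the relative advantage hovers near zero: both sides run as fast as they can just to stay in place.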
Artificial life, he wrote, is essentially just the inverse of conventional biology. Instead of being an effort to understand life by analysis—dissecting living communities into species, organisms, organs, tissues, cells, organelles, membranes, and finally molecules—artificial life is an effort to understand life by synthesis: putting simple pieces together to generate lifelike behavior in man-made systems. Its credo is that life is not a property of matter per se, but the organization of that matter. Its operating principle is that the laws of life must be laws of dynamical form, independent …
But the answer lies with a second great insight, which could be heard at the workshop again and again: living systems are machines, all right, but machines with a very different kind of organization from the ones we're used to. Instead of being designed from the top down, the way a human engineer would do it, living systems always seem to emerge from the bottom up, from a population of much simpler systems.