Foxes evolve to catch rabbits, rabbits evolve to evade foxes; there are as many evolutions as species. But within each species, the blind idiot god is purely obsessed with inclusive genetic fitness. No quality is valued, not even survival, except insofar as it increases reproductive fitness. There’s no point in an organism with steel skin if it ends up having 1% less reproductive capacity.
I can’t predict my friend’s move even as we approach each individual intersection—let alone predict the whole sequence of moves in advance. Yet I can predict the result of my friend’s unpredictable actions: we will arrive at the airport.
Isn’t this a remarkable situation to be in, from a scientific perspective? I can predict the outcome of a process, without being able to predict any of the intermediate steps of the process.
I do need to know something about my friend. I must know that my friend wants me to make my flight. I must credit that my friend is a good enough planner to successfully drive me to the airport (if he wants to). These are properties of my friend’s initial state—properties which let me predict the final destination, though not any intermediate turns.
I must also credit that my friend knows enough about the city to drive successfully. This may be regarded as a relation between my friend and the city; hence, a property of both.
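This situation is easy to reproduce in miniature. Below is a minimal sketch (the grid, the goal, and the random tie-breaking are all illustrative assumptions of mine, not anything from the text): a walker whose individual steps are unpredictable, yet whose destination follows from its goal plus a minimal competence assumption.

```python
import random

def drive_to(goal, start=(0, 0)):
    """A walker that always moves closer to its goal, breaking ties
    between equally good moves at random: each individual "turn" is
    unpredictable, but the destination is not."""
    x, y = start
    path = [(x, y)]
    while (x, y) != goal:
        moves = []  # candidate moves that reduce Manhattan distance
        if x != goal[0]:
            moves.append((x + (1 if goal[0] > x else -1), y))
        if y != goal[1]:
            moves.append((x, y + (1 if goal[1] > y else -1)))
        x, y = random.choice(moves)  # the unpredictable intersection
        path.append((x, y))
    return path

airport = (5, 7)
run1, run2 = drive_to(airport), drive_to(airport)
# The two runs almost surely disagree about intermediate turns,
# yet both endpoints were predictable without simulating either run.
assert run1[-1] == run2[-1] == airport
```

Knowing only the walker's goal, and that each step reduces the remaining distance, pins down the endpoint without predicting a single intermediate turn.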
To write a culture that isn’t just like your own culture, you have to be able to see your own culture as a special case—not as a norm which all other cultures must take as their point of departure.
We evolved to predict Other Minds by putting ourselves in their shoes, asking what we would do in their situations; for that which was to be predicted, was similar to the predictor.
You might ask: Maybe the aliens do have a sense of humor, but you’re not telling funny enough jokes? This is roughly the equivalent of trying to speak English very loudly, and very slowly, in a foreign country, on the theory that those foreigners must have an inner ghost that can hear the meaning dripping from your words, inherent in your words, if only you can speak them loud enough to overcome whatever strange barrier stands in the way of your perfectly sensible English.
Don’t bother explaining why any intelligent mind powerful enough to build complex machines must inevitably have states analogous to emotions. Natural selection builds complex machines without itself having emotions. Now there’s a Real Alien for you—an optimization process that really Does Not Work Like You Do.
You can view both intelligence and natural selection as special cases of optimization: processes that hit, in a large search space, very small targets defined by implicit preferences. Natural selection prefers more efficient replicators. Human intelligences have more complex preferences. Neither evolution nor humans have consistent utility functions, so viewing them as “optimization processes” is understood to be an approximation.
The “improbability” here is improbability relative to a random selection from the design space, not improbability in an absolute sense—if you have an optimization process around, then “improbably” good designs become probable.
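A toy computation makes this concrete. In the sketch below (the bit-string design space, the target, and the step counts are arbitrary choices of mine), random selection from the design space essentially never reaches the target, while even a crude hill climber reaches it reliably: with an optimizer present, the "improbably" good design becomes probable.

```python
import random

N = 40  # a "design" is 40 bits; the design space holds 2**40 points

def fitness(design):
    return sum(design)  # the target, all ones, is a single point

def random_selection(samples):
    """Best design found by blind sampling from the design space."""
    return max(fitness([random.randint(0, 1) for _ in range(N)])
               for _ in range(samples))

def hill_climb(steps):
    """Crude optimizer: keep any single-bit flip that improves fitness."""
    design = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        i = random.randrange(N)
        candidate = design[:]
        candidate[i] ^= 1
        if fitness(candidate) > fitness(design):
            design = candidate
    return fitness(design)

print(random_selection(10_000))  # typically ~31 of 40; hitting 40 by
                                 # chance has probability 2**-40 per draw
print(hill_climb(10_000))        # typically 40 of 40, nearly every run
```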
If you consider biology in the absence of hominids, then on the object level we have things like dinosaurs and butterflies and cats. On the meta level we have things like sexual recombination and natural selection of asexual populations.
The object level, you will observe, is rather more complicated than the meta level. Natural selection is not an easy subject and it involves math. But if you look at the anatomy of a whole cat, the cat has dynamics imme...
Cats have brains, of course, which operate to learn over a lifetime; but at the end of the cat’s lifetime, that information is thrown away, so it does not accumulate. The cumulative effects of cat-brains upon the world as optimizers, therefore, are relatively small.
Or consider a bee brain, or a beaver brain. A bee builds hives, and a beaver builds dams; but they didn’t figure out how to build them from scratch. A beaver can’t figure out how to build a hive; a bee can’t figure out how to build a dam.
Compared to evolution, brains lacked both generality of optimization power (they could not produce the amazing range of artifacts produced by evolution) and cumulative optimization power (their products did not accumulate complexity over time). For more on this theme see Protein Reinforcement and DNA Consequentialism.
Very recently, certain animal brains have begun to exhibit both generality of optimization power (producing an amazingly wide range of artifacts, in time scales too short for natural selection to play any significant role) and cumulative optimization power (artifacts of increasing...
The wonder of evolution is not how well it works, but that it works at all without being optimized. This is how optimization bootstrapped itself into the universe—starting, as one would expect, from an extremely inefficient accidental optimization process. Which is not the accidental first replicator, mind you, but the accidental first process of natural selection. Distinguish the object level and the meta level!
Natural selection selects on genes, but generally speaking, the genes do not turn around and optimize natural selection. The invention of sexual recombination is an exception to this rule, and so is the invention of cells and DNA. And you can see both the power and the rarity of such events, by the fact that evolutionary biologists structure entire histories of life on Earth around them.
Human beings invent sciences and technologies, but we have not yet begun to rewrite the protected structure of the human brain itself. We have a prefrontal cortex and a temporal cortex and a cerebellum, just like the first inventors of agriculture. We haven’t started to genetically engineer ourselves.
The history of Earth up until now has been a history of optimizers spinning their wheels at a constant rate, generating a constant optimization pressure. And creating optimized products, not at a constant rate, but at an accelerating rate, because of how object-level innovations open up the pathway to other object-level innovations. But that acceleration is taking place with a protected meta level doing the actual optimizing.
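That picture, constant pressure at the meta level and compounding products at the object level, can be caricatured in a few lines. This is a deliberately crude model; the constant and the assumption that innovations open pathways in proportion to what already exists are mine, not the text's.

```python
# Crude model: a meta level of fixed power, an object level that compounds.
META_POWER = 0.001   # constant optimization pressure per step (assumed)
innovations = 1.0    # accumulated object-level products

for epoch in range(5):
    for _ in range(1000):
        # Each innovation opens pathways to more, so the gain per step
        # scales with what has already been built.
        innovations += META_POWER * innovations
    print(f"epoch {epoch}: {innovations:,.0f} innovations")
# Output roughly triples each epoch (a factor of about e) even though
# META_POWER, the meta level, never changes.
```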
“Oh, you can try to tell the AI to be Friendly, but if the AI can modify its own source code, it’ll just remove any constraints you try to place on it.” And where does that decision come from?
“But she still did it because she valued that choice above others—because of the feeling of importance she attached to that decision.”
Even our simple formalism illustrates a sharp distinction between expected utility, which is something that actions have; and utility, which is something that outcomes have.
The philosopher begins by arguing that all your Utilities must be over Outcomes consisting of your state of mind. If this were true, your intelligence would operate as an engine to steer the future into regions where you were happy. Future states would be distinguished only by your state of mind; you would be indifferent between any two futures in which you had the same state of mind. And you would, indeed, be rather unlikely to sacrifice your own life to save another.
When we object that people sometimes do sacrifice their lives, the philosopher’s reply shifts to discussing Expected Utilities over Actions: “The feeling of importance she attached to that decision.” This is a drastic jump that should make us leap out of our chairs in indignation. Trying to convert an Expected_Utility into a Utility would cause an outright error in our programming language. But in English it all sounds the same.
Outcomes don’t lead to Outcomes, only Actions lead to Outcomes.
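The formalism behind this is short enough to write out. Here is a minimal sketch (the outcome labels, probabilities, and numbers are illustrative assumptions): Utility is a type attached to Outcomes, Expected Utility a type attached to Actions, and the philosopher's conversion of one into the other is literally a type error.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Utility:           # something Outcomes have
    value: float

@dataclass(frozen=True)
class ExpectedUtility:   # something Actions have
    value: float

# Utilities over outcomes (illustrative numbers; note that neither
# outcome is a state of the agent's own mind).
utility = {
    "child lives, mother dies": Utility(-10.0),
    "child dies, mother lives": Utility(-100.0),
}

# An Action is a probability distribution over Outcomes.
def expected_utility(action: dict[str, float]) -> ExpectedUtility:
    return ExpectedUtility(
        sum(p * utility[outcome].value for outcome, p in action.items())
    )

sacrifice  = {"child lives, mother dies": 0.9, "child dies, mother lives": 0.1}
do_nothing = {"child dies, mother lives": 1.0}

print(expected_utility(sacrifice).value)   # -19.0: the sacrifice wins
print(expected_utility(do_nothing).value)  # -100.0

# The philosopher's move is rejected by a static checker such as mypy:
# u: Utility = expected_utility(sacrifice)  # error: incompatible types
```

Nothing here requires Outcomes to be states of the agent's mind, and the self-sacrifice comes out as the higher-scoring Action without any appeal to the feeling attached to the decision.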
In moral arguments, some disputes are about instrumental consequences, and some disputes are about terminal values. If your debating opponent says that banning guns will lead to lower crime, and you say that banning guns will lead to higher crime, then you agree about a superior instrumental value (crime is bad), but you disagree about which intermediate events lead to which consequences.
If you say that you want to ban guns in order to reduce crime, it may take a moment to realize that “reducing crime” isn’t a terminal value, it’s a superior instrumental value with links to terminal values for human lives and human happinesses. And then the one who advocates gun rights may have links to the superior instrumental value of “reducing crime” plus a link to a value for “freedom,” which might be a terminal value unto them, or another instrumental value . . .
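One way to keep such a dispute straight is to draw the links explicitly. In the sketch below (the graph, the weights, and the numbers are invented for illustration), terminal values carry utility directly, instrumental values inherit it through links, and the gun disagreement lives entirely in the sign of one link.

```python
# Terminal values carry utility directly (invented numbers).
terminal_utility = {"human lives": 100.0, "human happiness": 50.0,
                    "freedom": 80.0}

# Instrumental values inherit utility via links: value -> {parent: weight}.
links = {
    "reduce crime": {"human lives": 0.3, "human happiness": 0.2},
    "ban guns":     {"reduce crime": 0.5, "freedom": -0.2},
}

def derived_utility(value: str) -> float:
    if value in terminal_utility:
        return terminal_utility[value]
    return sum(w * derived_utility(parent)
               for parent, w in links[value].items())

print(derived_utility("ban guns"))  # 4.0 under these weights
# The gun-rights side flips the "ban guns" -> "reduce crime" weight to
# -0.5 and gets -36.0, with every terminal utility left untouched:
# a dispute about instrumental links, not about terminal values.
```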
The moral principle “Death is always a bad thing” is itself a leaky generalization.
If your tribe is faced with a resource squeeze, you could try hopping everywhere on one leg, or chewing off your own toes. These “solutions” obviously wouldn’t work and would incur large costs, as you can see upon examination—but in fact your brain is too efficient to waste time considering such poor solutions; it doesn’t generate them in the first place. Your brain, in its search for high-ranking solutions, flies directly to parts of the solution space like “Everyone in the tribe gets together, and agrees to have no more than one child per couple until the resource squeeze is past.”
Such a low-ranking solution as “Everyone have as many kids as possible, then cannibalize the girls” would not be generated in your search process.
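The difference between searching a whole solution space and generating only its high-ranking corner can be put in code. A minimal sketch, with invented candidates and scores: the exhaustive searcher must evaluate the absurd options; the brainlike generator never emits them at all.

```python
# Invented candidates and scores for a tribe facing a resource squeeze.
CANDIDATES = {
    "agree on one child per couple":   9,
    "ration the stored food":          7,
    "hop everywhere on one leg":     -50,
    "chew off your own toes":        -80,
}

def exhaustive_search():
    """Evaluates every point in the space, however absurd."""
    return max(CANDIDATES, key=CANDIDATES.get)

def brainlike_generator(floor=0):
    """Never emits low-ranking candidates, so they cost nothing to rule out."""
    return (c for c, s in CANDIDATES.items() if s >= floor)

print(exhaustive_search())                             # same winner,
print(max(brainlike_generator(), key=CANDIDATES.get))  # fewer candidates touched
```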
I would do it this way, therefore I infer that evolution will do it this way.
Politics was a feature of the ancestral environment; we are descended from those who argued most persuasively that the tribe’s interest—not just their own interest—required that their hated rival Uglak be executed. We certainly aren’t descended from Uglak, who failed to argue that his tribe’s moral code—not just his own obvious self-interest—required his survival.
Human arguments are not even commensurate with the internal structure of natural selection as an optimization process: human arguments play no causal role in the promotion of alleles, the way they play a causal role in human politics.
The actual consumers of knowledge are the children—who can’t pay, can’t vote, can’t sit on the committees. Their parents care for them, but don’t sit in the classes themselves; they can only hold politicians responsible according to surface images of “tough on education.” Politicians are too busy being re-elected to study all the data themselves; they have to rely on surface images of bureaucrats being busy and commissioning studies—it may not work to help any children, but it works to let politicians appear caring. Bureaucrats don’t expect to use textbooks themselves, so they don’t care if . . .
C. J. Cherryh said: “Your sword has no blade. It has only your intention. When that goes astray you have no weapon.”
When supposedly selfish people give altruistic arguments in favor of selfishness, or when supposedly altruistic people give selfish arguments in favor of altruism.
Suppose I find a barrel, sealed at the top, but with a hole large enough for a hand. I reach in and feel a small, curved object. I pull the object out, and it’s blue—a bluish egg. Next I reach in and feel something hard and flat, with edges—which, when I extract it, proves to be a red cube. I pull out 11 eggs and 8 cubes, and every egg is blue, and every cube is red. Now I reach in and I feel another egg-shaped object. Before I pull it out and look, I have to guess: What will it look like?
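The guess can be given a number. One standard way (my choice, not the text's) is Laplace's rule of succession: with a uniform prior, after 11 out of 11 eggs have come out blue, the probability that the next egg is blue is (11 + 1)/(11 + 2) ≈ 0.92.

```python
def laplace_rule(successes: int, trials: int) -> float:
    """Posterior probability of another success, assuming a uniform prior."""
    return (successes + 1) / (trials + 2)

# 11 eggs observed so far, all blue.
print(laplace_rule(11, 11))  # 12/13 ≈ 0.923: bet on blue
```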
You can’t capture in words all the details of the cognitive concept—as it exists in your mind—that . . .
A robin is bigger than a virus, and smaller than an aircraft carrier—that might be the “volume” dimension. Likewise a robin weighs more than a hydrogen atom, and less than a galaxy; that might be the “mass” dimension.
If you think that’s extravagant, quantum physicists use an infinite-dimensional configuration space, and a single point in that space describes the location of every particle in the universe.
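The geometry is easy to mock up in low dimensions. A minimal sketch (the four objects, the two log-scaled dimensions, and the rough magnitudes are my assumptions): each thing becomes a point, and comparisons like "bigger than a virus, smaller than an aircraft carrier" become coordinates and distances.

```python
import math

# Rough orders of magnitude: (log10 mass in kg, log10 volume in m^3).
things = {
    "hydrogen atom":    (-27.0, -31.0),
    "virus":            (-18.0, -21.0),
    "robin":            (-1.5, -4.5),
    "aircraft carrier": (8.0, 5.0),
}

def distance(a: str, b: str) -> float:
    """Euclidean distance in this two-dimensional slice of thingspace."""
    (m1, v1), (m2, v2) = things[a], things[b]
    return math.hypot(m1 - m2, v1 - v2)

# The robin sits between the virus and the carrier on both axes;
# distance makes the comparison quantitative.
print(distance("robin", "virus"))             # ≈ 23.3
print(distance("robin", "aircraft carrier"))  # ≈ 13.4
```

A full configuration space would add a dimension for every describable feature; these two are just the slice the passage names.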