Map and Territory (Rationality: From AI to Zombies Book 1)
6%
Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88.1 This is scope insensitivity or scope neglect : the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.
6%
The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.
7%
Very recently—in just the last few decades—the human species has acquired a great deal of new knowledge about human rationality. The most salient example would be the heuristics and biases program in experimental psychology. There is also the Bayesian systematization of probability theory and statistics; evolutionary psychology; social psychology. Experimental investigations of empirical human psychology; and theoretical probability theory to interpret what our experiments tell us; and evolutionary theory to explain the conclusions.
8%
We do have a native instinct for introspection. The inner eye isn’t sightless, though it sees blurrily, with systematic distortions. We need, then, to apply the science to our intuitions, to use the abstract knowledge to correct our mental movements and augment our metacognitive skills.
8%
The availability heuristic is judging the frequency or probability of an event by the ease with which examples of the event come to mind.
10%
The conjunction fallacy is when humans assign a higher probability to a proposition of the form “A and B” than to one of the propositions “A” or “B” in isolation, even though it is a theorem that conjunctions are never likelier than their conjuncts.
10%
A long series of cleverly designed experiments, which weeded out alternative hypotheses and nailed down the standard interpretation, confirmed that the conjunction fallacy occurs because we “substitute judgment of representativeness for judgment of probability.”
10%
Which is to say: Adding detail can make a scenario sound more plausible, even though the event necessarily becomes less probable.
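The theorem behind the fallacy can be checked numerically. A minimal sketch (the two events below are arbitrary illustrations, not from the text): estimate P(A), P(B), and P(A and B) by Monte Carlo and confirm the conjunction is never likelier than either conjunct.

```python
import random

# Hypothetical events over die rolls, purely for illustration:
# A = the roll is even; B = the roll is at least 3.
random.seed(0)
trials = 100_000
count_a = count_b = count_ab = 0
for _ in range(trials):
    roll = random.randint(1, 6)
    a = roll % 2 == 0
    b = roll >= 3
    count_a += a
    count_b += b
    count_ab += a and b

p_a, p_b, p_ab = count_a / trials, count_b / trials, count_ab / trials
# A conjunction is never more probable than its conjuncts.
print(f"P(A)={p_a:.3f}  P(B)={p_b:.3f}  P(A and B)={p_ab:.3f}")
```

However the events are chosen, the estimate of P(A and B) cannot exceed the estimate of P(A) or of P(B); adding detail only removes probability mass.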
11%
And he said, “Okay, but what does this have to do with—” And I said, “It is more probable that universes replicate for any reason, than that they replicate via black holes because advanced civilizations manufacture black holes because universes evolve to make them do it.” And he said, “Oh.”
11%
For it is written: If you can lighten your burden you must do so. There is no straw that lacks the power to break your back.
12%
Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.1
12%
Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”
12%
“True models usually produce better experimental predictions than false models” is a useful generalization, and it’s not one you can make without using a concept like “true” or “accurate.”
12%
Experimental psychologists use two gold standards: probability theory, and decision theory. Probability theory is the set of laws underlying rational belief. The mathematics of probability applies equally to “figuring out where your bookcase is” and “estimating how many hairs were on Julius Caesar’s head,” even though our evidence for the claim “Julius Caesar was bald” is likely to be more complicated and indirect than our evidence for the claim “there’s a bookcase in my room.” It’s all the same problem of how to process the evidence and observations to update one’s beliefs. Similarly, decision ...more
15%
Unlike most cognitive biases, we know a good debiasing heuristic for the planning fallacy. It won’t work for messes on the scale of the Denver International Airport, but it’ll work for a lot of personal planning, and even some small-scale organizational stuff. Just use an “outside view” instead of an “inside view.”
15%
The outside view is when you deliberately avoid thinking about the special, unique features of this project, and just ask how long it took to finish broadly similar projects in the past.
16%
So there is a fairly reliable way to fix the planning fallacy, if you’re doing something broadly similar to a reference class of previous projects. Just ask how long similar projects have taken in the past, without considering any of the special properties of this project. Better yet, ask an experienced outsider how long similar projects have taken.
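The outside-view procedure is mechanical enough to sketch in a few lines. The durations below are hypothetical illustration data, not from the text; the idea is just to forecast from the reference class and ignore this project's special features.

```python
import statistics

# Hypothetical durations (in weeks) of broadly similar past projects.
past_durations = [6, 9, 7, 14, 8, 11, 10]

# Outside view: the median of the reference class, with a rough
# pessimistic figure taken from the upper end of the sorted list.
outside_view = statistics.median(past_durations)
pessimistic = sorted(past_durations)[int(0.8 * (len(past_durations) - 1))]

print(f"Outside-view estimate: ~{outside_view} weeks "
      f"(pessimistic buffer: ~{pessimistic} weeks)")
```

Note what the sketch deliberately omits: any variable describing this project's unique features. That omission is the debiasing step.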
18%
P. C. Hodgell said: “That which can be destroyed by the truth should be.”
18%
When something terrible happens, I do not flee my sadness by searching for fake consolations and false silver linings. I visualize the past and future of humankind, the tens of billions of deaths over our history, the misery and fear, the search for answers, the trembling hands reaching upward out of so much blood, what we could become someday when we make the stars our cities, all that darkness and all that light—I know that I can never truly understand it, and I haven’t the words to say.
20%
If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.
21%
The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit. Do you believe that phlogiston is the cause of fire? Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a retropositional author? Then what do you expect to see because of that? No, not “alienated resublimation”; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you? It is even better to ask: what ...more
21%
When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don’t know what experiences are implied by Wulky Wilkinsen’s writing being retropositional, you can go on arguing forever. Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the ...more
27%
It finally occurred to me that this woman wasn’t trying to convince us or even convince herself. Her recitation of the creation story wasn’t about the creation of the world at all. Rather, by launching into a five-minute diatribe about the primordial cow, she was cheering for paganism, like holding up a banner at a football game. A banner saying Go Blues isn’t a statement of fact, or an attempt to persuade; it doesn’t have to be convincing—it’s a cheer.
28%
I have so far distinguished between belief as anticipation-controller, belief in belief, professing, and cheering. Of these, we might call anticipation-controlling beliefs “proper beliefs” and the other forms “improper beliefs.” A proper belief can be wrong or irrational, as when someone genuinely anticipates that prayer will cure their sick baby. But the other forms are arguably “not belief at all.” Yet another form of improper belief is belief as group identification—as a way of belonging. Robin Hanson uses the excellent metaphor of wearing unusual clothing, a group uniform like a priest’s ...more
29%
A similar dynamic, I believe, governs the occasions in international diplomacy where Great Powers sternly tell smaller groups to stop that fighting right now. It doesn’t matter to the Great Power who started it—who provoked, or who responded disproportionately to provocation—because the Great Power’s ongoing inconvenience is only a function of the ongoing conflict. Oh, can’t Israel and Hamas just get along? This I call “pretending to be Wise.” Of course there are many ways to try and signal wisdom. But trying to signal wisdom by refusing to make guesses—refusing to sum up evidence—refusing to ...more
29%
Paulo Freire said, “Washing one’s hands of the conflict between the powerful and the powerless means to side with the powerful, not...
This highlight has been truncated due to consecutive passage length restrictions.
30%
In sum, there’s a difference between: Passing neutral judgment; Declining to invest marginal resources; Pretending that either of the above is a mark of deep wisdom, maturity, and a superior vantage point; with the corresponding implication that the original sides occupy lower vantage points that are not importantly different from up there.
31%
I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.
31%
I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing: I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should ...more
34%
Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it’s why a claim that your beliefs are not contagious—that you believe for private reasons which are not transmissible—is so suspicious. If your beliefs are entangled with reality, they should be contagious among honest folk. If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality. You should apply a reflective correction, and ...more
37%
But from a Bayesian perspective, you need an amount of evidence roughly equivalent to the complexity of the hypothesis just to locate the hypothesis in theory-space. It’s not a question of justifying anything to anyone. If there’s a hundred million alternatives, you need at least 27 bits of evidence just to focus your attention uniquely on the correct answer.
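The 27-bit figure follows directly from the base-2 logarithm: singling out one answer among N alternatives requires log2(N) bits, and log2 of a hundred million rounds up to 27.

```python
import math

# Locating one hypothesis among 100,000,000 alternatives requires
# log2(1e8) ≈ 26.6 bits of evidence, i.e. at least 27 whole bits.
alternatives = 100_000_000
bits_needed = math.log2(alternatives)

print(f"log2({alternatives:,}) = {bits_needed:.2f} bits "
      f"-> round up to {math.ceil(bits_needed)} bits")
```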
37%
Now, how likely is it that Einstein would have exactly enough observational evidence to raise General Relativity to the level of his attention, but only justify assigning it a 55% probability? Suppose General Relativity is a 29.3-bit hypothesis. How likely is it that Einstein would stumble across exactly 29.5 bits of evidence in the course of his physics reading? Not likely! If Einstein had enough observational evidence to single out the correct equations of General Relativity in the first place, then he probably had enough evidence to be damn sure that General Relativity was true.
38%
The formalism of Solomonoff induction measures the “complexity of a description” by the length of the shortest computer program which produces that description as an output.
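The shortest-program length itself is uncomputable, but any real compressor gives an upper bound on description length, which is enough to illustrate the idea. A sketch (the strings here are arbitrary choices): a highly patterned string admits a short description, while a random one does not.

```python
import random
import zlib

# A patterned string versus an equally long patternless one.
regular = b"ab" * 500                                           # 1000 bytes
random.seed(0)
irregular = bytes(random.randrange(256) for _ in range(1000))   # 1000 bytes

# zlib's output length upper-bounds the description length:
# the patterned string compresses to a handful of bytes, the
# random string stays near its raw length.
print(len(zlib.compress(regular)), len(zlib.compress(irregular)))
```

This is only a bound from one particular compressor, not the Solomonoff measure itself, but the gap between the two numbers is the intuition the formalism captures.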
40%
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
41%
Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.
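“Equally good at explaining any outcome” can be made quantitative with log scores. A sketch with hypothetical numbers: a model that spreads its probability uniformly over all outcomes gains exactly zero bits over maximum ignorance, whatever happens; only a model that concentrates its mass can score above the baseline.

```python
import math

# Hypothetical outcome space and models, purely for illustration.
outcomes = ["rain", "sun", "snow", "fog"]
uninformative = {o: 0.25 for o in outcomes}     # explains everything equally
informative = {"rain": 0.7, "sun": 0.2, "snow": 0.05, "fog": 0.05}

observed = "rain"   # hypothetical observation
for name, model in [("uninformative", uninformative),
                    ("informative", informative)]:
    # Bits gained relative to a uniform (maximally ignorant) baseline.
    bits_vs_uniform = math.log2(model[observed]) - math.log2(1 / len(outcomes))
    print(f"{name}: {bits_vs_uniform:+.2f} bits vs. maximum ignorance")
```

The uniform model scores +0.00 bits no matter which outcome occurs; a model that could not have scored negatively (been confused by some outcome) cannot score positively either.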
46%
In hindsight bias, people who know the outcome of a situation believe the outcome should have been easy to predict in advance. Knowing the outcome, we reinterpret the situation in light of that outcome. Even when warned, we can’t de-interpret to empathize with someone who doesn’t know what we know. Closely related is the illusion of transparency : We always know what we mean by our words, and so we expect others to know it too. Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant. It’s hard to empathize with someone who ...more
47%
Homo sapiens’s environment of evolutionary adaptedness (a.k.a. EEA or “ancestral environment”) consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory. In a world like that, all background knowledge is universal knowledge. All information not strictly private is public, period. In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don’t have to explain to your fellow tribe members what an oasis is, or why it’s a good idea ...more
51%
The preview for the X-Men movie has a voice-over saying: “In every human being . . . there is the genetic code . . . for mutation.” Apparently you can acquire all sorts of neat abilities by mutation. The mutant Storm, for example, has the ability to throw lightning bolts. I beg you, dear reader, to consider the biological machinery necessary to generate electricity; the biological adaptations necessary to avoid being harmed by electricity; and the cognitive circuitry required for finely tuned control of lightning bolts. If we actually observed any organism acquiring these abilities in one ...more
58%
If you learn something about whether it’s raining, from some source other than observing the sidewalk to be wet, this will send a forward-message from Rain to Sidewalk Wet and raise our expectation of the sidewalk being wet. If you observe the sidewalk to be wet, this sends a backward-message to our belief that it is raining, and this message propagates from Rain to all neighboring nodes except the Sidewalk Wet node. We count each piece of evidence exactly once; no update message ever “bounces” back and forth. The exact algorithm may be found in Judea Pearl’s classic Probabilistic Reasoning in ...more
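The two-node network in the passage can be computed by hand. A sketch with hypothetical probabilities: the forward message is ordinary marginalization from Rain to Sidewalk Wet, and the backward message from observing the wet sidewalk is Bayes’ rule.

```python
# Hypothetical parameters for the Rain -> Sidewalk Wet network.
p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_dry = 0.1

# Forward message: prior belief about rain sets the expected wetness.
p_wet = p_rain * p_wet_given_rain + (1 - p_rain) * p_wet_given_dry

# Backward message: observing the wet sidewalk updates belief in rain.
p_rain_given_wet = p_rain * p_wet_given_rain / p_wet

print(f"P(wet) = {p_wet:.2f}")                      # -> 0.26
print(f"P(rain | sidewalk wet) = {p_rain_given_wet:.2f}")  # -> 0.69
```

With more nodes, Pearl’s algorithm propagates exactly these two kinds of messages along the edges, counting each observation once; the two-node case is just Bayes’ rule.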
60%
What distinguishes a semantic stopsign is failure to consider the obvious next question.
63%
In a comedy written by Molière, a physician explains the power of a soporific by saying that it contains a “dormitive potency.” Same principle. It is a failure of human psychology that, faced with a mysterious phenomenon, we more readily postulate mysterious inherent substances than complex underlying processes.
64%
Therefore I call theories such as vitalism mysterious answers to mysterious questions. These are the signs of mysterious answers to mysterious questions: First, the explanation acts as a curiosity-stopper rather than an anticipation-controller. Second, the hypothesis has no moving parts—the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to cause this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity. Third, those who proffer the explanation ...more
65%
A fun exercise is to eliminate the adjective “emergent” from any sentence in which it appears, and see if the sentence says anything different: Before:Human intelligence is an emergent product of neurons firing. After:Human intelligence is a product of neurons firing. Before:The behavior of the ant colony is the emergent outcome of the interactions of many individual ants. After:The behavior of the ant colony is the outcome of the interactions of many individual ants. Even better:A colony is made of ants. We can successfully predict some aspects of colony behavior using models that include ...more
66%
said, “Did you read ‘A Technical Explanation of Technical Explanation’?”1 “Yes,” said Marcello. “Okay,” I said. “Saying ‘complexity’ doesn’t concentrate your probability mass.”
71%
There is a habit of thought which I call the logical fallacy of generalization from fictional evidence. Journalists who, for example, talk about the Terminator movies in a report on AI, do not usually treat Terminator as a prophecy or fixed truth. But the movie is recalled—is available—as if it were an illustrative historical case. As if the journalist had seen it happen on some other planet, so that it might well happen here. There is an inverse error to generalizing from fictional evidence: failing to be sufficiently moved by historical evidence. The trouble with generalizing from fictional ...more
72%
To my former memory, the United States had always existed—there was never a time when there was no United States. I had not remembered, until that time, how the Roman Empire rose, and brought peace and order, and lasted through so many centuries, until I forgot that things had ever been otherwise; and yet the Empire fell, and barbarians overran my city, and the learning that I had possessed was lost. The modern world became more fragile to my eyes; it was not the first modern world. So many mistakes, made over and over and over again, because I did not remember making them, in every era I ...more