Kindle Notes & Highlights
To be able to look backwards and say that you’ve “failed” implies that you had goals. So what was it that I was trying to do?
When your method of learning about the world is biased, learning more may not help. Acquiring more data can even consistently worsen a biased prediction.
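A toy simulation can make this concrete (the population, the selection rule, and all the numbers below are invented for illustration): when the data-collection method itself is biased, gathering more data only narrows in on the wrong answer.

```python
import random

# Toy illustration (everything here is invented): the true average
# "satisfaction" in a population is 0, but our survey only ever reaches
# people whose satisfaction is above -0.5 -- a biased collection method.
# Gathering more responses does not fix this; the estimate just converges
# more tightly on the wrong value.

random.seed(1)

def biased_sample():
    """Draw from the true population, but silently drop low values."""
    while True:
        x = random.gauss(0, 1)   # true population mean is 0
        if x > -0.5:             # the selection effect in our method
            return x

for n in (100, 10_000, 1_000_000):
    estimate = sum(biased_sample() for _ in range(n)) / n
    print(n, round(estimate, 3))  # settles near ~0.51, never near 0
```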
Real-world rationality isn’t about ignoring your emotions and intuitions. For a human, rationality often means becoming more self-aware about your feelings, so you can factor them into your decisions.
Even when we correctly identify others’ biases, we have a special bias blind spot when it comes to our own flaws.
Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.
Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”
“Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function” is the kind of thought that depends on a concept of (instrumental) rationality,
whereas “It’s rational to eat vegetables” can probably be replaced with “It’s useful to eat vegetables” or “It’s in your interest to eat vegetables.”
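Here is a minimal sketch of what that first sentence cashes out to; the actions, outcomes, probabilities, and utilities are invented purely for illustration:

```python
# Minimal sketch of "maximize the probabilistic expectation of a coherent
# utility function." The actions, outcomes, probabilities, and utilities
# below are made up for illustration.

outcome_probs = {"rain": 0.3, "no rain": 0.7}

# Utility of each (action, outcome) pair.
utility = {
    ("carry umbrella", "rain"): 5,     # dry, slightly encumbered
    ("carry umbrella", "no rain"): 4,  # needlessly encumbered
    ("leave umbrella", "rain"): -10,   # soaked
    ("leave umbrella", "no rain"): 6,  # unencumbered and dry
}

def expected_utility(action):
    """Probability-weighted average of the utilities of each outcome."""
    return sum(p * utility[(action, outcome)]
               for outcome, p in outcome_probs.items())

actions = ["carry umbrella", "leave umbrella"]
best = max(actions, key=expected_utility)
print(best)  # "carry umbrella": expected utility 4.3 beats 1.2
```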
Experimental psychologists use two gold standards: probability theory, and decision theory.
Let “P(such-and-such)” stand for “the probability that such-and-such happens,” and P(A,B) for “the probability that both A and B happen.” Since it is a universal law of probability theory that P(A) ≥ P(A,B), the judgment that P(Bill plays jazz) is less than P(Bill plays jazz, Bill is an accountant) is labeled incorrect.
To keep it technical, you would say that this probability judgment is non-Bayesian. Beliefs and actions that are rational in this mathematically well-defined sense are called “Bayesian.”
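One way to see why P(A) ≥ P(A,B) must hold: the event A splits into the disjoint cases where B happens and where it does not, so

$$P(A) = P(A, B) + P(A, \neg B) \ge P(A, B),$$

since P(A, ¬B) cannot be negative. Judging "Bill plays jazz and is an accountant" as more probable than "Bill plays jazz" amounts to assigning negative probability to "Bill plays jazz and is not an accountant."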
This does not quite exhaust the problem of what is meant in practice by “rationality,” for two major reasons: First, the Bayesian formalisms in their full form are computationally intractable on most real-world problems. No one can actually calculate and obey the math, any more than you can predict the stock market by calculating the movements of quarks.
Second, sometimes the meaning of the math itself is called into question. The exact rules of probability theory are called into question by, e.g., anthropic problems in which the number of observers is uncertain. The exact rules of decision theory are called into question by, e.g., Newcomblike problems in which other agents may predict your decision before it happens.
So if you understand what concept I am generally getting at with this word “rationality,” and with the sub-terms “epistemic rationality” and “instrumental rationality,” we have communicated: we have accomplished everything there is to accomplish by talking about how to define “rationality.”
If you say, “It’s (epistemically) rational for me to believe X, but the truth is Y,” then you are probably using the word “rational” to mean something other than what I have in mind. (E.g., “rationality” should be consistent under reflection—“rationally” looking at the evidence, and “rationally” considering how your mind processes the evidence, shouldn’t lead to two different conclusions.)
Similarly, if you find yourself saying, “The (instrumentally) rational thing for me to do is X, but the right thing for me to do is Y,” then you are almost certainly using some other meaning for the word “rational” or the word “right.” I use the term “rationality” normatively, to pick out desirable patterns of thought.
For a good introduction to Newcomb’s Problem, see Holt.² More generally, you can find definitions and explanations for many of the terms in this book at the website wiki.lesswrong.com/wiki/RAZ_Glossary
2. Jim Holt, “Thinking Inside the Boxes,” Slate (2002), http://www.slate.com/articles/arts/egghead/2002/02/thinkinginside_the_boxes.html
A popular belief about “rationality” is that rationality opposes all emotion—that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can’t find any theorem of probability theory which proves that I should appear ice-cold and expressionless.
Becoming more rational—arriving at better estimates of how-the-world-is—can diminish feelings or intensify them.
If your motive is curiosity, you will assign priority to questions according to how the questions, themselves, tickle your personal aesthetic sense.
You get back feedback on which modes of thinking work, and which don’t. Pure curiosity is a wonderful thing, but it may not linger too long on verifying its answers, once the attractive mystery is gone.
Are there motives for seeking truth besides curiosity and pragmatism? The third reason that I can think of is morality: You believe that to seek the truth is noble and important and worthwhile. Though such an ideal also attaches an intrinsic value to truth, it’s a very different state of mind from curiosity. Being curious about what’s behind the curtain doesn’t feel the same as believing that you have a moral duty to look there.
With all that said, we seem to label as “biases” those obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by the shape of our own mental machinery.
Plato wasn’t “biased” because he was ignorant of General Relativity—he had no way to gather that information; his ignorance did not arise from the shape of his mental machinery.
But if Plato believed that philosophers would make better kings because he himself was a philosopher—and this belief, in turn, arose because of a universal adaptive political instinct for self-promotion—then that belief was a bias.
Biases may not be cheap to correct. They may not even be correctable. But where we look upon our own mental machinery and see a causal account of an identifiable class of errors; and when the problem seems to come from the evolved shape of the machinery, rather than from there being too little machinery, or bad specific content; then we call that a bias.
Failure-space is wide: infinite errors in infinite variety. It is difficult to describe so huge a space: “What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world.” Success-space is narrower, and therefore more can be said about it.
In the ancestral environment, much of what you knew, you experienced yourself; or you heard it directly from a fellow tribe-member who had seen it. There was usually at most one layer of selective reporting between you, and the event itself. With today’s Internet, you may see reports that have passed through the hands of six bloggers on the way to you—six successive filters. Compared to our ancestors, we live in a larger world, in which far more happens, and far less of it reaches us—a much stronger selection effect, which can create much larger availability biases.
19% of the planet lives on less than $1/day, and I doubt that one fifth of the blog posts you read are written by them.
Events that have never happened are not recalled, and hence deemed to have probability zero. When no flooding has recently occurred (and yet the probabilities are still fairly calculable), people refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value.
The conjunction fallacy is when humans rate the probability P(A,B) higher than the probability P(B), even though it is a theorem that P(A,B) ≤ P(B).
Which is to say: Adding detail can make a scenario SOUND MORE PLAUSIBLE, even though the event necessarily BECOMES LESS PROBABLE.
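A small counting exercise (with invented attributes and frequencies) shows why: the cases satisfying both conditions are a subset of the cases satisfying either condition alone, so the conjunction’s frequency can only be equal or smaller.

```python
import random

# Counting illustration with invented attributes and frequencies. The
# cases where Bill plays jazz AND is an accountant are a subset of the
# cases where Bill plays jazz, so the conjunction's frequency can never
# exceed either conjunct's frequency.

random.seed(0)
cases = [
    {"plays_jazz": random.random() < 0.2,
     "accountant": random.random() < 0.3}
    for _ in range(100_000)
]

p_jazz = sum(c["plays_jazz"] for c in cases) / len(cases)
p_jazz_and_acct = sum(c["plays_jazz"] and c["accountant"]
                      for c in cases) / len(cases)

print(p_jazz, p_jazz_and_acct)
assert p_jazz_and_acct <= p_jazz  # holds in every possible sample
```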
What could the forecasters have done to avoid the conjunction fallacy, without seeing the direct comparison, or even knowing that anyone was going to test them on the conjunction fallacy? It seems to me that they would need to notice the word “and.” They would need to be wary of it—not just wary, but ready to leap back from it.
When you want to get something done, you have to plan out where, when, how; figure out how much time and how much resource is required; visualize the steps from beginning to successful conclusion. All this is the “inside view,” and it doesn’t take into account unexpected delays and unforeseen catastrophes.
The outside view is when you deliberately avoid thinking about the special, unique features of this project, and just ask how long it took to finish broadly similar projects in the past. This is counterintuitive, since the inside view has so much more detail—there’s a temptation to think that a carefully tailored prediction, taking into account all available data, will give better results.
Buehler et al., reporting on a cross-cultural study, found that Japanese students expected to finish their essays ten days before deadline. They actually finished one day before deadline. Asked when they had previously completed similar tasks, they responded, “one day before deadline.”⁶ This is the power of the outside view over the inside view.
So there is a fairly reliable way to fix the planning fallacy, if you’re doing something broadly similar to a reference class of previous projects. Just ask how long similar projects have taken in the past, without considering any of the special properties of this project. Better yet, ask an experienced outsider how long similar projects have taken. You’ll get back an answer that sounds hideously long, and clearly reflects no understanding of the special reasons why this particular task will take less time. This answer is true. Deal with it.
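A back-of-the-envelope version of that correction, with hypothetical numbers: predict from the distribution of what broadly similar projects actually took, not from the detailed plan.

```python
from statistics import median

# Hypothetical completion times, in days, for broadly similar past
# projects -- the "reference class." The numbers are made up.
past_durations = [12, 18, 9, 30, 21, 15, 25]

inside_view_estimate = 7  # the optimistic, detail-driven plan

# Outside view: ignore this project's special features and predict from
# what similar projects actually took.
outside_view_estimate = median(past_durations)

print(inside_view_estimate, outside_view_estimate)  # 7 vs. 18
```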
As Keysar and Barr note, two days before Germany’s attack on Poland, Chamberlain sent a letter intended to make it clear that Britain would fight if any invasion occurred.⁷ The letter, phrased in polite diplomatese, was heard by Hitler as conciliatory—and the tanks rolled. Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think.
When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously. In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.
Combined with the illusion of transparency and self-anchoring, I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience—or even communicating with scientists from other disciplines.
When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.
A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself. If at any point you make a statement without obvious justification in arguments you’ve previously supported, the audience just thinks you’re crazy.
Here is the secret of deliberate rationality—this whole process is not magic, and you can understand it. You can understand how you see your shoelaces. You can think about which sort of thinking processes will create beliefs which mirror reality, and which thinking processes will not. Mice can see, but they can’t understand seeing. You can understand seeing, and because of that, you can do things that mice cannot do. Take a moment to marvel at this, for it is indeed marvelous.
Mice see, but they don’t know they have visual cortexes, so they can’t correct for optical illusions. A mouse lives in a mental world that includes cats, holes, cheese and mousetraps—but not mouse brains.
To ask which beliefs make you happy is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. I have nothing against happiness, but it should follow from your picture of the world, rather than tampering with the mental paintbrushes.
If you can see this—if you can see that hope is shifting your first-order thoughts by too large a degree—if you can understand your mind as a mapping engine that has flaws—then you can apply a reflective correction. The brain is a flawed lens through which to see reality. This is true of both mouse brains and human brains. But a human brain is a flawed lens that can understand its own flaws—its systematic errors, its biases—and apply second-order corrections to them. This, in practice, makes the lens far more powerful. Not perfect, but far more powerful.
If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”