Kindle Notes & Highlights
The contest yielded a clear-cut winner: people who had just listed twelve instances rated themselves as less assertive than people who had listed only six. Furthermore, participants who had been asked to list twelve cases in which they had not behaved assertively ended up thinking of themselves as quite assertive!
Self-ratings were dominated by the ease with which examples had come to mind. The experience of fluent retrieval of instances trumped the number retrieved.
As I have described it, the process that leads to judgment by availability appears to involve a complex chain of reasoning. The subjects have an experience of diminishing fluency as they produce instances. They evidently have expectations about the rate at which fluency decreases, and those expectations are wrong: the difficulty of coming up with new instances increases more rapidly than they expect. It is the unexpectedly low fluency that causes people who were asked for twelve instances to describe themselves as unassertive. When the surprise is eliminated, low fluency no longer influences the judgment.
System 2 can reset the expectations of System 1 on the fly, so that an event that would normally be surprising is now almost normal. Suppose you are told that the three-year-old boy who lives next door frequently wears a top hat in his stroller. You will be far less surprised when you actually see him with his top hat than you would have been without the warning.
The conclusion is that the ease with which instances come to mind is a System 1 heuristic, which is replaced by a focus on content when System 2 is more engaged. Multiple lines of evidence converge on the conclusion that people who let themselves be guided by System 1 are more strongly susceptible to availability biases than others who are in a state of higher vigilance. The following are some conditions in which people “go with the flow” and are affected more strongly by ease of retrieval than by the content they retrieved:
Speaking of Availability
“Because of the coincidence of two planes crashing last month, she now prefers to take the train. That’s silly. The risk hasn’t really changed; it is an availability bias.”
“He underestimates the risks of indoor pollution because there are few media stories on them. That’s an availability effect. He should look at the statistics.”
“She has been watching too many spy movies recently, so she’s seeing conspiracies everywhere.”
“The CEO has had several successes in a row, so failure doesn’t come easily to her mind. The availability bias is making her overconfident.”
The world in our heads is not a precise replica of reality; our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed.
The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as an answer to a much harder question (What do I think about it?).
“The emotional tail wags the rational dog.” The affect heuristic simplifies our lives by creating a world that is much tidier than reality. Good technologies have few costs in the imaginary world we inhabit, bad technologies have no benefits, and all decisions are easy. In the real world, of course, we often face painful tradeoffs between benefits and costs.
“Risk” does not exist “out there,” independent of our minds and culture, waiting to be measured. Human beings have invented the concept of “risk” to help them understand and cope with the dangers and uncertainties of life. Although these dangers are real, there is no such thing as “real risk” or “objective risk.”
Speaking of Availability Cascades
“She’s raving about an innovation that has large benefits and no costs. I suspect the affect heuristic.”
“This is an availability cascade: a nonevent that is inflated by the media and the public until it fills our TV screens and becomes all anyone is talking about.”
The question about probability (likelihood) was difficult, but the question about similarity was easier, and it was answered instead. This is a serious mistake, because judgments of similarity and probability are not constrained by the same logical rules. It is entirely acceptable for judgments of similarity to be unaffected by base rates and also by the possibility that the description was inaccurate, but anyone who ignores base rates and the quality of evidence in probability assessments will certainly make mistakes.
Judging probability by representativeness has important virtues: the intuitive impressions that it produces are often—indeed, usually—more accurate than chance guesses would be.
One sin of representativeness is an excessive willingness to predict the occurrence of unlikely (low base-rate) events.
The second sin of representativeness is insensitivity to the quality of evidence. Recall the rule of System 1: WYSIATI.
You surely understand in principle that worthless information should not be treated differently from a complete lack of information, but WYSIATI makes it very difficult to apply that principle. Unless you decide immediately to reject evidence (for example, by determining that you received it from a liar), your System 1 will automatically process the information available as if it were true.
The combination of WYSIATI and associative coherence tends to make us believe in the stories we spin for ourselves. The essential keys to disciplined Bayesian reasoning can be simply summarized: Anchor your judgment of the probability of an outcome on a plausible base rate. Question the diagnosticity of your evidence.
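The two keys lend themselves to a small worked example. The sketch below is my own illustration, not the book's: the 3% base rate and the likelihood ratios are invented, and the point is only that the posterior should stay anchored near the base rate unless the evidence is genuinely diagnostic.

```python
# A minimal numeric sketch (my own illustration, not from the book) of the two "keys":
# anchor on a plausible base rate, then ask how diagnostic the evidence really is.
# The base rate and likelihood ratios below are invented for illustration.

def posterior_probability(base_rate: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)  # convert odds back to a probability

base_rate = 0.03  # assumed share of cases that fit the outcome in question

# Strongly diagnostic evidence should move the judgment well away from the base rate...
print(round(posterior_probability(base_rate, likelihood_ratio=10.0), 3))  # ~0.236

# ...while weak or worthless evidence (ratio near 1) should leave it close to the base rate.
print(round(posterior_probability(base_rate, likelihood_ratio=1.2), 3))   # ~0.036
```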
Speaking of Representativeness
“The lawn is well trimmed, the receptionist looks competent, and the furniture is attractive, but this doesn’t mean it is a well-managed company. I hope the board does not go by representativeness.”
“This start-up looks as if it could not fail, but the base rate of success in the industry is extremely low. How do we know this case is different?”
“They keep making the same mistake: predicting rare events from weak evidence. When the evidence is weak, one should stick with the base rates.”
“I know this report is absolutely damning, and it may be based on solid evidence, but how sure are we? We must allow for that uncertainty in our thinking.”
When you specify a possible event in greater detail you can only lower its probability. The problem therefore sets up a conflict between the intuition of representativeness and the logic of probability.
The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting.
This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.
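The rule behind this is the conjunction rule of probability: the probability of A and B together can never exceed the probability of A alone. The toy numbers below are my own invention, used only to make the point concrete.

```python
# Toy illustration (all values invented) of the conjunction rule:
# a detailed scenario is a conjunction, and a conjunction can never be more
# probable than the vaguer event it elaborates.

p_crisis = 0.10                 # P(A): probability of some hypothetical crisis next year
p_trigger_given_crisis = 0.30   # P(B | A): a vivid triggering detail added to the story

p_detailed_scenario = p_crisis * p_trigger_given_crisis   # P(A and B)

assert p_detailed_scenario <= p_crisis   # holds no matter how persuasive the richer story sounds
print(p_crisis, round(p_detailed_scenario, 2))   # 0.1 0.03
```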
The solution to the puzzle appears to be that a question phrased as “how many?” makes you think of individuals, but the same question phrased as “what percentage?” does not.
The laziness of System 2 is an important fact of life, and the observation that representativeness can block the application of an obvious logical rule is also of some interest.
Speaking of Less is More
“They constructed a very complicated scenario and insisted on calling it highly probable. It is not—it is only a plausible story.”
“They added a cheap gift to the expensive product, and made the whole deal less attractive. Less is more in this case.”
“In most situations, a direct comparison makes people more careful and more logical. But not always. Sometimes intuition beats logic even when the correct answer stares you in the face.”
Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case. Causal base rates change your view of how the individual case came to be.
Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.
Taleb introduced the notion of a narrative fallacy to describe how flawed stories of the past shape our views of the world and our expectations for the future. Narrative fallacies arise inevitably from our continuous attempt to make sense of the world.
Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.
A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or of any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.
Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events.
This outcome bias makes it almost impossible to evaluate a decision properly—in terms of the beliefs that were reasonable when the decision was made.
Philip Tetlock, a psychologist at the University of Pennsylvania, explored these so-called expert predictions in a landmark twenty-year study, which he published in his 2005 book Expert Political Judgment: How Good Is It? How Can We Know? Tetlock has set the terms for any future discussion of this topic.
I quoted Herbert Simon’s definition of intuition in the introduction, but it will make more sense when I repeat it now: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”
The third lesson, which I call irrational perseverance: the folly we displayed that day in failing to abandon the project. Facing a choice, we gave up rationality rather than give up the enterprise.
In the competition with the inside view, the outside view doesn’t stand a chance.
Amos and I coined the term planning fallacy to describe plans and forecasts that are unrealistically close to best-case scenarios and could be improved by consulting the statistics of similar cases.
The prevalent tendency to underweight or ignore distributional information is perhaps the major source of error in forecasting. Planners should therefore make every effort to frame the forecasting problem so as to facilitate utilizing all the distributional information that is available.
The forecasting method that Flyvbjerg applies is similar to the practices recommended for overcoming base-rate neglect:
1. Identify an appropriate reference class (kitchen renovations, large railway projects, etc.).
2. Obtain the statistics of the reference class (in terms of cost per mile of railway, or of the percentage by which expenditures exceeded budget). Use the statistics to generate a baseline prediction.
3. Use specific information about the case to adjust the baseline prediction, if there are particular reasons to expect the optimistic bias to be more or less pronounced in this project than in others of the same type.
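The steps translate naturally into a little arithmetic. The sketch below is my own illustration, not Flyvbjerg's tooling; the overrun figures, the budget, and the adjustment are all invented to show how the baseline comes from the reference class rather than from the inside view.

```python
# A minimal sketch (my own illustration, not Flyvbjerg's method as published) of the steps:
# reference class -> class statistics -> baseline prediction -> case-specific adjustment.
# All numbers below are invented for illustration.

from statistics import median

# 1. Reference class: percentage cost overruns observed in comparable past projects.
reference_overruns = [0.45, 0.30, 0.80, 0.25, 0.60, 0.40]    # +45%, +30%, ...

# 2. Statistics of the reference class, used to generate a baseline prediction.
typical_overrun = median(reference_overruns)                  # 0.425
inside_view_estimate = 1_000_000                              # the team's own budget
baseline_forecast = inside_view_estimate * (1 + typical_overrun)

# 3. Adjust only if there is a specific reason to expect more or less optimism here.
case_adjustment = -0.05   # e.g. an unusually experienced contractor (assumed)
forecast = baseline_forecast * (1 + case_adjustment)

print(round(baseline_forecast), round(forecast))              # 1425000 1353750
```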
A well-run organization will reward planners for precise execution and penalize them for failing to anticipate difficulties, and for failing to allow for difficulties that they could not have anticipated—the unknown unknowns.
The planning fallacy is only one of the manifestations of a pervasive optimistic bias. Most of us view the world as more benign than it really is, our own attributes as more favorable than they truly are, and the goals we adopt as more achievable than they are likely to be. We also tend to exaggerate our ability to forecast the future, which fosters optimistic overconfidence.
The evidence suggests that an optimistic bias plays a role—sometimes the dominant role—whenever individuals or institutions voluntarily take on significant risks. More often than not, risk takers underestimate the odds they face, and do not invest sufficient effort to find out what the odds are.
The misguided acquisitions have been explained by a “hubris hypothesis”: the executives of the acquiring firm are simply less competent than they think they are.
We focus on our goal, anchor on our plan, and neglect relevant base rates, exposing ourselves to the planning fallacy. We focus on what we want to do and can do, neglecting the plans and skills of others. Both in explaining the past and in predicting the future, we focus on the causal role of skill and neglect the role of luck. We are therefore prone to an illusion of control. We focus on what we know and neglect what we do not know, which makes us overly confident in our beliefs.
The consequence of competition neglect is excess entry: more competitors enter the market than the market can profitably sustain, so their average outcome is a loss. The outcome is disappointing for the typical entrant in the market, but the effect on the economy as a whole could well be positive.
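As a back-of-the-envelope illustration of why excess entry leaves the average entrant with a loss, here is a toy calculation with invented numbers (not from the book): a fixed profit pool split among too many entrants does not cover each entrant's cost of competing.

```python
# Toy arithmetic (numbers invented) behind "excess entry":
# when more firms enter than the profit pool can sustain, the average entrant loses money.

market_profit_pool = 600_000   # total operating profit the market can support
entry_cost = 100_000           # sunk cost each entrant pays to compete
entrants = 10                  # more entrants than the pool can profitably sustain

average_share = market_profit_pool / entrants     # 60,000 per entrant
average_outcome = average_share - entry_cost      # -40,000: a loss for the typical entrant

print(average_outcome)   # -40000.0
```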
However, optimism is highly valued, socially and in the market; people and firms reward the providers of dangerously misleading information more than they reward truth tellers.
An unbiased appreciation of uncertainty is a cornerstone of rationality—but it is not what people and organizations want.
He labels his proposal the premortem. The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”
The premortem has two main advantages: it overcomes the groupthink that affects many teams once a decision appears to have been made, and it unleashes the imagination of knowledgeable individuals in a much-needed direction.
Theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws.