Kindle Notes & Highlights
Here are the directions for how to get there in four simple steps:
1. Start with an estimate of average GPA.
2. Determine the GPA that matches your impression of the evidence.
3. Estimate the correlation between your evidence and GPA.
4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
Step 1 gets you the baseline, the GPA you would have predicted if you were told nothing about Julie beyond the fact that she is a graduating senior. In the absence of information, you would have predicted the average. (This is similar to assigning the base-rate probability of business administration graduates when you are told nothing about Tom W.) Step 2 is your intuitive prediction, which matches your evaluation of the evidence. Step 3 moves you from the baseline toward your intuition, but the distance you are allowed to move depends on your estimate of the correlation. You end up, at step 4, with a prediction that is influenced by your intuition but is far more moderate.
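A minimal sketch of the four-step correction in Python; the class-average GPA of 3.1 and the evidence-matched GPA of 3.8 are invented for illustration, and only the .30 correlation comes from the passage:

```python
def corrected_prediction(baseline, intuitive_estimate, correlation):
    """Regress an intuitive prediction toward the baseline.

    Step 1: baseline (the average).
    Step 2: intuitive_estimate (the value that matches the evidence).
    Step 3: correlation (between the evidence and the outcome).
    Step 4: move that fraction of the distance from the baseline
            toward the intuitive estimate.
    """
    return baseline + correlation * (intuitive_estimate - baseline)

# Hypothetical numbers: average GPA 3.1, evidence-matched GPA 3.8,
# estimated correlation .30 (the figure used in the passage).
print(round(corrected_prediction(3.1, 3.8, 0.30), 2))  # 3.31
```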
Correcting your intuitive predictions is a task for System 2. Significant effort is required to find the relevant reference category, estimate the baseline prediction, and evaluate the quality of the evidence.
However, we are not all rational, and some of us may need the security of distorted estimates to avoid paralysis.
Extreme predictions and a willingness to predict rare events from weak evidence are both manifestations of System 1.
Regression is also a problem for System 2. The very idea of regression to the mean is alien and difficult to communicate and comprehend.
Matching predictions to the evidence is not only something we do intuitively; it also seems a reasonable thing to do. We will not learn to understand regression from experience. Even when a regression is identified, as we saw in the story of the flight instructors, it will be given a causal interpretation that is almost always wrong.
Nassim Taleb introduced the term narrative fallacy to describe how flawed stories of the past shape our views of the world and our expectations for the future.
When an unpredicted event occurs, we immediately adjust our view of the world to accommodate the surprise.
Learning from surprises is a reasonable thing to do, but it can have some dangerous consequences. A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or of any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.
Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events.
Hindsight is especially unkind to decision makers who act as agents for others—physicians, financial advisers, third-base coaches, CEOs, social workers, diplomats, politicians. We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious only after the fact.
Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.
The sense-making machinery of System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is.
The halo effect and outcome bias combine to explain the extraordinary appeal of books that seek to draw operational morals from systematic examination of successful businesses.
System 1 is designed to jump to conclusions from little evidence—and it is not designed to know the size of its jumps. Because of WYSIATI, only the evidence at hand counts. Because of confidence by coherence, the subjective confidence we have in our opinions reflects the coherence of the story that System 1 and System 2 have constructed. The amount of evidence and its quality do not count for much, because poor evidence can make a very good story.
The most striking part of the story is that our knowledge of the general rule—that we could not predict—had no effect on our confidence in individual cases.
Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it.
Our tendency to construct and believe coherent narratives of the past makes it difficult for us to accept the limits of our forecasting ability.
And we cannot suppress the powerful intuition that what makes sense in hindsight today was predictable yesterday. The illusion that we understand the past fosters overconfidence in our ability to predict the future.
Hedgehogs “know one big thing” and have a theory about the world; they account for particular events within a coherent framework, bristle with impatience toward those who don’t see things their way, and are confident in their forecasts. They are also especially reluctant to admit error. For hedgehogs, a failed prediction is almost always “off only on timing” or “very nearly right.”
Foxes, by contrast, are complex thinkers. They don’t believe that one big thing drives the march of history. Instead, foxes recognize that reality emerges from the interactions of many different agents and forces, including blind luck, often producing large and unpredictable outcomes.
One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions.
Complexity may work in the odd case, but more often than not it reduces validity.
Another reason for the inferiority of expert judgment is that humans are incorrigibly inconsistent in making summary judgments of complex information.
The research suggests a surprising conclusion: to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments.
The logic of multiple regression is unassailable: it finds the optimal formula for putting together a weighted combination of the predictors.
Formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling.
The important conclusion from this research is that an algorithm that is constructed on the back of an envelope is often good enough to compete with an optimally weighted formula, and certainly good enough to outdo expert judgment.
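A rough simulation of that contrast, using entirely made-up data (six synthetic predictors, a small training sample, heavy noise): it compares a least-squares-weighted composite with one that simply adds up the standardized predictors. On many random draws the equal-weight composite does comparably well out of sample; this is a sketch of the idea, not a replication of the research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: six predictors weakly related to a noisy outcome,
# standing in for a low-validity environment.
n, k = 200, 6
X = rng.normal(size=(n, k))
true_w = np.array([0.4, 0.3, 0.3, 0.2, 0.2, 0.1])
y = X @ true_w + rng.normal(scale=2.0, size=n)

# "Optimal" weights estimated by least squares on a small training sample.
train, test = slice(0, 40), slice(40, None)
w_fit, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# Equal weights: standardize each predictor and simply add them up.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

def validity(prediction, outcome):
    """Correlation between a composite prediction and the actual outcome."""
    return np.corrcoef(prediction, outcome)[0, 1]

print("fitted weights:", round(validity(X[test] @ w_fit, y[test]), 3))
print("equal weights :", round(validity(Xz[test].sum(axis=1), y[test]), 3))
```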
I had read Paul Meehl’s “little book,” which had appeared just a year earlier. I was convinced by his argument that simple, statistical rules are superior to intuitive “clinical” judgments.
I learned from this finding a lesson that I have never forgotten: intuition adds value even in the justly derided selection interview, but only after a disciplined collection of objective information and disciplined scoring of separate traits. I set a formula that gave the “close your eyes” evaluation the same weight as the sum of the six trait ratings.
Implementing interview procedures in the spirit of Meehl and Dawes requires relatively little effort but substantial discipline.
First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Don’t overdo it—six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of those questions for each trait and think about how you will score it, say on a 1–5 scale. You should have an idea of what you will call “very weak” or “very strong.”
To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. To evaluate each candidate, add up the six scores. Because you are in charge of the final decision, you should not do a “close your eyes.” Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better—try to resist your wish to invent broken legs to change the ranking.
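A minimal sketch of the scoring procedure above; the first three trait names come from the passage, while the remaining traits and all candidate ratings are invented for illustration:

```python
# Traits: the first three come from the passage, the rest are placeholders.
TRAITS = ["technical proficiency", "engaging personality", "reliability",
          "communication", "judgment", "initiative"]

def total_score(ratings):
    """Sum the six trait ratings, each scored 1-5, one trait at a time."""
    assert set(ratings) == set(TRAITS), "score every trait, skip none"
    assert all(1 <= r <= 5 for r in ratings.values()), "use the 1-5 scale"
    return sum(ratings.values())

# Invented ratings for two candidates.
candidates = {
    "A": dict(zip(TRAITS, [4, 3, 5, 4, 4, 3])),
    "B": dict(zip(TRAITS, [5, 5, 2, 3, 3, 4])),
}

# Commit to hiring the highest total, even if you "like" another candidate more.
ranking = sorted(candidates, key=lambda c: total_score(candidates[c]), reverse=True)
print(ranking)  # ['A', 'B']: A totals 23, B totals 22
```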
I have engaged in a few “adversarial collaborations,” in which scholars who disagree on the science agree to write a jointly authored paper on their differences, and sometimes conduct research together. In especially tense situations, the research is moderated by an arbiter.
“The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”
Emotional learning may be quick, but what we consider as “expertise” usually takes a long time to develop.
When do judgments reflect true expertise? When do they display an illusion of validity? The answer comes from the two basic conditions for acquiring a skill: an environment that is sufficiently regular to be predictable, and an opportunity to learn these regularities through prolonged practice.
The accurate intuitions that Gary Klein has described are due to highly valid cues that the expert’s System 1 has learned to use, even if System 2 has not learned to name them.
In contrast, stock pickers and political scientists who make long-term forecasts operate in a zero-validity environment. Their failures reflect the basic unpredictability of the events that they try to forecast.
Some environments are worse than irregular. Robin Hogarth described “wicked” environments, in which professionals are likely to learn the wrong lessons from experience.
Remember this rule: intuition cannot be trusted in the absence of stable regularities in the environment.
But skill is much more difficult to acquire by sheer experience because of the long delay between actions and their noticeable outcomes.
Expertise is not a single skill; it is a collection of skills, and the same professional may be highly expert in some of the tasks in her domain while remaining a novice in others.
Short-term anticipation and long-term forecasting are different tasks, and the therapist has had adequate opportunity to learn one but not the other. Similarly, a financial expert may have skills in many aspects of his trade but not in picking stocks, and an expert in the Middle East knows many things but not the future.
In a less regular, or low-validity, environment, the heuristics of judgment are invoked. System 1 is often able to produce quick answers to difficult questions by substitution, creating coherence where there is none. The question that is answered is not the one that was intended, but the answer is produced quickly and may be sufficiently plausible to pass the lax and lenient review of System 2.
“Does he really believe that the environment of start-ups is sufficiently regular to justify an intuition that goes against the base rates?”
vicissitudes.
Facing a choice, we gave up rationality rather than give up the enterprise.