Kindle Notes & Highlights
The financial benefits of self-employment are mediocre: given the same qualifications, people achieve higher average returns by selling their skills to employers than by setting out on their own.
The damage caused by overconfident CEOs is compounded when the business press anoints them as celebrities; the evidence indicates that prestigious press awards to the CEO are costly to stockholders.
The optimistic risk taking of entrepreneurs surely contributes to the economic dynamism of a capitalistic society, even if most risk takers end up disappointed.
It is tempting to explain entrepreneurial optimism by wishful thinking, but emotion is only part of the story. Cognitive biases play an important role, notably the System 1 feature WYSIATI.
We focus on our goal, anchor on our plan, and neglect relevant base rates, exposing ourselves to the planning fallacy. We focus on what we want to do and can do, neglecting the plans and skills of others. Both in explaining the past and in predicting the future, we focus on the causal role of skill and neglect the role of luck. We are therefore prone to an illusion of control. We focus on what we know and neglect what we do not know, which makes us overly confident in our beliefs.
People tend to be overly optimistic about their relative standing on any activity in which they do moderately well.
The truly bad news is that the CFOs did not appear to know that their forecasts were worthless.
Overconfidence is another manifestation of WYSIATI: when we estimate a quantity, we rely on information that comes to mind and construct a coherent story in which the estimate makes sense. Allowing for the information that does not come to mind—perhaps because one never knew it—is impossible.
As Nassim Taleb has argued, inadequate appreciation of the uncertainty of the environment inevitably leads economic agents to take risks they should avoid.
Overconfidence also appears to be endemic in medicine. A study of patients who died in the ICU compared autopsy results with the diagnosis that physicians had provided while the patients were still alive. Physicians also reported their confidence. The result: “clinicians who were ‘completely certain’ of the diagnosis antemortem were wrong 40% of the time.”
“Generally, it is considered a weakness and a sign of vulnerability for clinicians to appear unsure. Confidence is valued over uncertainty and there is a prevailing censure against disclosing uncertainty to patients.”
Dan Lovallo and I coined the phrase “bold forecasts and timid decisions” to describe the background of risk taking.
In essence, the optimistic style involves taking credit for successes but little blame for failures.
I have always believed that scientific research is another domain where a form of optimism is essential to success: I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.
Can overconfident optimism be overcome by training? I am not optimistic. There have been numerous attempts to train people to state confidence intervals that reflect the imprecision of their judgments, with only a few reports of modest success.
The premortem. The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”
The premortem has two main advantages: it overcomes the groupthink that affects many teams once a decision appears to have been made, and it unleashes the imagination of knowledgeable individuals in a much-needed direction.
The suppression of doubt contributes to overconfidence in a group where only supporters of the decision have a voice.
The premortem is not a panacea and does not provide complete protection against nasty surprises, but it goes some way toward reducing the damage of plans that are subject to the biases of WYSIATI and uncritical optimism.
“The agent of economic theory is rational, selfish, and his tastes do not change.”
Simple gambles (such as “40% chance to win $300”) are to students of decision making what the fruit fly is to geneticists. Choices between such gambles provide a simple model that shares important features with the more complex decisions that researchers actually aim to understand.
We were not trying to figure out the most rational or advantageous choice; we wanted to find the intuitive choice, the one that appeared immediately tempting.
Framing effects: the large changes of preferences that are sometimes caused by inconsequential variations in the wording of a choice problem.
During the first five years we spent looking at how people make decisions, we established a dozen facts about choices between risky options. Several of these facts were in flat contradiction to expected utility theory. Some had been observed before, a few were new. Then we constructed a theory that modified expected utility theory just enough to explain our collection of observations. That was prospect theory.
As in Fechner’s law, the psychological response to a change of wealth is inversely proportional to the initial amount of wealth, leading to the conclusion that utility is a logarithmic function of wealth.
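Stated as arithmetic, the Fechner-style claim is that equal proportional changes in wealth produce equal psychological increments, which is exactly the defining property of the logarithm. A quick numeric check (the dollar figures are arbitrary illustrations):

```python
import math

u = math.log  # utility as a logarithmic function of wealth

# If the response to a change dW is proportional to dW/W, then doubling
# wealth adds the same utility increment no matter the starting level:
step_from_1m = u(2_000_000) - u(1_000_000)
step_from_4m = u(8_000_000) - u(4_000_000)
# Both increments equal ln(2) ≈ 0.693.
```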
Bernoulli pointed out that people do not in fact evaluate gambles in this way.
One hundred years before Fechner, Bernoulli invented psychophysics to explain this aversion to risk. His idea was straightforward: people’s choices are based not on dollar values but on the psychological values of outcomes, their utilities.
The psychological value of a gamble is therefore not the weighted average of its possible dollar outcomes; it is the average of the utilities of these outcomes, each weighted by its probability.
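This weighting rule can be sketched in a few lines. The numbers below are hypothetical, and log utility is assumed only because Bernoulli proposed it; note how the gamble's utility falls short of the utility of its expected value:

```python
import math

def expected_utility(outcomes, utility=math.log):
    # outcomes: iterable of (probability, final_wealth) pairs
    return sum(p * utility(w) for p, w in outcomes)

# Gamble: equal chances of ending with wealth 1 or 9 (say, in millions),
# versus keeping a sure wealth of 5 (the gamble's expected value).
gamble = [(0.5, 1.0), (0.5, 9.0)]
eu_gamble = expected_utility(gamble)   # 0.5*ln(1) + 0.5*ln(9) ≈ 1.099
eu_sure = math.log(5.0)                # ln(5) ≈ 1.609

# The certainty equivalent of the gamble is exp(1.099) = 3, well below 5:
# a log-utility agent prefers the sure thing, i.e. is risk averse.
```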
Bernoulli’s insight was that a decision maker with diminishing marginal utility for wealth will be risk averse.
His utility function explained why poor people buy insurance and why richer people sell it to them.
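The insurance asymmetry follows directly from concavity. A minimal sketch with assumed log utility and hypothetical numbers:

```python
import math

u = math.log  # concave utility: diminishing marginal utility of wealth

wealth, loss, p = 10.0, 5.0, 0.10   # hypothetical household figures
premium = 0.6                        # above the expected loss of 0.5

eu_uninsured = (1 - p) * u(wealth) + p * u(wealth - loss)
eu_insured = u(wealth - premium)
# eu_insured > eu_uninsured: the household gladly pays an actuarially
# unfair premium, while a much wealthier insurer, nearly risk neutral
# at this scale, profits from collecting it.
```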
The longevity of the theory is all the more remarkable because it is seriously flawed. The errors of a theory are rarely found in what it asserts explicitly; they hide in what it ignores or tacitly assumes. For an example, take the following scenarios: Today Jack and Jill each have a wealth of 5 million. Yesterday, Jack had 1 million and Jill had 9 million. Are they equally happy? (Do they have the same utility?)
The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws.
As Daniel Gilbert observed, disbelieving is hard work, and System 2 is easily tired.
You know you have made a theoretical advance when you can no longer reconstruct why you failed for so long to see the obvious.
The four problems highlight the weakness of Bernoulli’s model. His theory is too simple and lacks a moving part. The missing variable is the reference point, the earlier state relative to which gains and losses are evaluated.
Evaluation is relative to a neutral reference point, which is sometimes referred to as an “adaptation level.”
A principle of diminishing sensitivity applies to both sensory dimensions and the evaluation of changes of wealth.
The third principle is loss aversion.
We concluded from many such observations that “losses loom larger than gains” and that people are loss averse.
In mixed gambles, where both a gain and a loss are possible, loss aversion causes extremely risk-averse choices. In bad choices, where a sure loss is compared to a larger loss that is merely probable, diminishing sensitivity causes risk seeking.
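Both patterns fall out of a simple value function over gains and losses. This is a sketch, not the full theory: the parameter values are commonly cited later estimates, and probability weighting is omitted:

```python
def v(x, alpha=0.88, lam=2.25):
    # Prospect-theory-style value function relative to a reference point;
    # alpha (diminishing sensitivity) and lam (loss aversion) are commonly
    # cited estimates, used here only for illustration.
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Mixed gamble: 50% win 100, 50% lose 100.
mixed = 0.5 * v(100) + 0.5 * v(-100)   # negative: the gamble is rejected

# Bad options: a sure loss of 100 versus a 50% chance to lose 200.
sure_loss = v(-100)
risky_loss = 0.5 * v(-200)
# Diminishing sensitivity makes 200 feel less than twice as bad as 100,
# so risky_loss > sure_loss: risk seeking in the domain of losses.
```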
Prospect theory and utility theory also fail to allow for regret.
The emotions of regret and disappointment are real, and decision makers surely anticipate these emotions when making their choices.
Scientists use theories as a bag of working tools, and they will not take on the burden of a heavier bag unless the new tools are very useful.
First, tastes are not fixed; they vary with the reference point. Second, the disadvantages of a change loom larger than its advantages, inducing a bias that favors the status quo.
Richard Thaler found many examples of what he called the endowment effect, especially for goods that are not regularly traded.
The values were unequal because of loss aversion: giving up the good is evaluated as a loss, which looms larger than the corresponding gain.
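The buying–selling gap can be sketched with the same kind of loss-averse value function. Everything numeric here is hypothetical, and it assumes that money spent as planned is coded as a forgone gain rather than a loss:

```python
def v(x, alpha=0.88, lam=2.25):
    # stylized value function; parameters are commonly cited estimates
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

use_value = 3.0   # hypothetical dollar value of a mug's use to its owner

# Owner: selling trades a loss of the mug against a gain of the price p.
# Indifference v(p) + v(-use_value) = 0 gives p = lam**(1/alpha) * use_value.
wta = 2.25 ** (1 / 0.88) * use_value   # willingness to accept ≈ 7.5

# Buyer: money surrendered in a planned purchase is treated as a forgone
# gain, not a loss (an assumption), so willingness to pay ≈ use_value = 3.
# Selling prices thus come out roughly double buying prices.
```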
The endowment effect is not universal.
These cases of routine trading are not essentially different from the exchange of a $5 bill for five singles. There is no loss aversion on either side of routine commercial exchanges.
Goods held “for exchange” show no endowment effect; it arises for goods held “for use.”