Kindle Notes & Highlights
The general property of noise just mentioned is essential for our purposes in this book, because many of our conclusions are drawn from judgments whose true answer is unknown or even unknowable.
was that guidelines were deeply unfair because they prohibited judges from taking adequate account of the particulars of the case.
judgment is difficult because the world is a complicated, uncertain place.
rampant injustice,
efforts at noise reduction often raise objections and run into serious difficulties. These issues must be addressed, too, or the fight against noise will fail.
They asked forty-two experienced investors in the firm to estimate the fair value of a stock (the price at which the investors would be indifferent to buying or selling). The investors based their analysis on a one-page description of the business; the data included simplified profit and loss, balance sheet, and cash flow statements for the past three years and projections for the next two. Median noise, measured in the same way as in the insurance company, was 41%.
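A noise measure of this kind can be sketched in a few lines. The sketch below is a hypothetical implementation, assuming the noise index is the median relative difference across every pair of judgments of the same case (the absolute difference divided by the pair's average); the estimates are invented for illustration:

```python
from itertools import combinations
from statistics import median

def noise_index(judgments):
    """Median relative difference across all pairs of judgments
    of the same case: |a - b| / mean(a, b)."""
    pairs = combinations(judgments, 2)
    return median(abs(a - b) / ((a + b) / 2) for a, b in pairs)

# Hypothetical stock valuations from five investors for the same company.
estimates = [44, 62, 18, 37, 50]
print(f"{noise_index(estimates):.0%}")  # → 42%
```

On this made-up sample the index lands near the 41% the audit reported: any two randomly chosen investors differ by roughly 40% of their average valuation.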
Certainly, positive and negative errors in a judgment about the same case will tend to cancel one another out, and we will discuss in detail how this property can be used to reduce noise. But noisy systems do not make multiple judgments of the same case. They make noisy judgments of different cases. If one insurance policy is overpriced and another is underpriced, pricing may on average look right, but the insurance company has made two costly errors.
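A toy calculation makes the point concrete. With two hypothetical pricing errors of opposite sign, the mean error is zero, yet the company still pays for both mistakes:

```python
# Two hypothetical pricing errors, in dollars: one policy overpriced,
# one underpriced. Errors cancel in the mean but not in the cost.
errors = [+500, -500]

mean_error = sum(errors) / len(errors)      # 0: looks right "on average"
total_cost = sum(abs(e) for e in errors)    # 1000: two costly mistakes
print(mean_error, total_cost)  # → 0.0 1000
```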
The noise audits suggested that respected professionals—and the organizations that employ them—maintained an illusion of agreement while in fact disagreeing in their daily professional judgments.
wherever there is judgment, there is noise, and more of it than you think.
Judgment can therefore be described as measurement in which the instrument is a human mind.
Although accuracy is the goal, perfection in achieving this goal is never achieved even in scientific measurement, much less in judgment. There is always some error, some of which is bias and some of which is noise.
bias is the difference, positive or negative, between the mean of your laps and ten seconds. Noise constitutes the variability of your results, analogous to the scatter of shots
The error equation is the intellectual foundation of this book.
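The error equation can be verified numerically. In the sketch below (using the lap-timing analogy above, with invented lap times), bias is the mean error relative to the ten-second target and noise is the population standard deviation of the results; mean squared error then equals bias squared plus noise squared:

```python
import statistics

def error_decomposition(judgments, true_value):
    """Decompose mean squared error into bias**2 + noise**2, where
    bias is the mean error and noise is the population standard
    deviation of the judgments."""
    errors = [j - true_value for j in judgments]
    mse = statistics.fmean(e * e for e in errors)
    bias = statistics.fmean(errors)
    noise = statistics.pstdev(judgments)
    return mse, bias**2 + noise**2

# Hypothetical lap times aiming at exactly 10 seconds.
laps = [9.8, 10.4, 10.1, 10.6, 9.9]
mse, decomposed = error_decomposition(laps, 10.0)
print(mse, decomposed)  # the two quantities are equal
```

This identity is why bias and noise contribute independently to overall error: shrinking either one reduces MSE by the square of the reduction, regardless of the other.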
Occasion noise is the variability among these unseen possibilities.
The explanations for group polarization are, in turn, similar to the explanations for cascade effects. Information plays a major role.
clinical judgment. You consider the information, perhaps engage in a quick computation, consult your intuition, and come up with a judgment. In fact, clinical judgment is the process that we have described simply as judgment in this book.
The use of multiple regression is an example of mechanical prediction.
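A minimal sketch of mechanical prediction via multiple regression, with hypothetical candidate ratings as predictors. The data and labels are invented; the point is that the fitted formula is then applied identically to every new case, with no case-by-case adjustment:

```python
import numpy as np

# Hypothetical predictor matrix: each row is a candidate, columns are
# ratings (e.g., technical skill, interview score).
X = np.array([[7.0, 2.0], [4.0, 8.0], [6.0, 5.0], [9.0, 4.0]])
y = np.array([6.2, 5.8, 6.0, 6.5])  # hypothetical performance outcomes

# Add an intercept column and fit by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(features):
    """Mechanical prediction: the same weights for every case."""
    return coef[0] + coef[1:] @ np.asarray(features)
```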
Meehl’s results strongly suggest that any satisfaction you felt with the quality of your judgment was an illusion: the illusion of validity.
The illusion of validity is found wherever predictive judgments are made, because of a common failure to distinguish between two stages of the prediction task: evaluating cases on the evidence available and predicting actual outcomes. You can often be quite confident in your assessment of which of two candidates looks better,
Meehl’s pattern contradicts the subjective experience of judgment, and most of us will trust our experience over a scholar’s claim.
replacing you with a model of you does two things: it eliminates your subtlety, and it eliminates your pattern noise.
The problem is that exceptionally original candidates are, by definition, exceptionally rare.
The advantages of true subtlety are quickly drowned in measurement error.
Their striking finding was that any linear model, when applied consistently to all cases, was likely to outdo human judges in predicting an outcome from the same information.
multiple regression minimizes error in the original data. The formula therefore adjusts itself to predict every random fluke in the data. If, for instance, the sample includes a few managers who have high technical skills and who also performed exceptionally well for unrelated reasons, the model will exaggerate the weight of technical skill.
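This overfitting mechanism can be demonstrated in a small hypothetical simulation. One predictor (think of it as technical skill) has a modest true weight; least squares, fitted on a small sample, necessarily matches that sample's flukes at least as well as the true weight does, and any such adjustment buys nothing out of sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one predictor with a modest true weight,
# observed on a small sample with noisy outcomes.
true_w = 0.3
x_train = rng.normal(size=20)
y_train = true_w * x_train + rng.normal(size=20)

# Least squares picks the weight that minimizes error *in this sample*,
# so it adjusts itself to the sample's random flukes.
w_hat = (x_train @ y_train) / (x_train @ x_train)

# In-sample, the fitted weight always looks at least as good as the truth.
mse_in_hat = np.mean((y_train - w_hat * x_train) ** 2)
mse_in_true = np.mean((y_train - true_w * x_train) ** 2)

# Out of sample, that apparent advantage evaporates.
x_test = rng.normal(size=10_000)
y_test = true_w * x_test + rng.normal(size=10_000)
mse_out_hat = np.mean((y_test - w_hat * x_test) ** 2)
mse_out_true = np.mean((y_test - true_w * x_test) ** 2)
```

The in-sample ordering is guaranteed by construction; the out-of-sample ordering is what cross-validation is designed to reveal.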
AI learning depends on the quality of the raw data: consider MSFT's racist bot, or Watson for hiring. It's not the algorithm, but the model. The classroom is no different.
implication of Dawes’s work deserves to be widely known: you can make valid statistical predictions without prior data about the outcome that you are trying to predict. All you need is a collection of predictors that you can trust to be correlated with the outcome.
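A Dawes-style "improper" equal-weight model can be sketched without any outcome data, assuming only that each predictor is believed to correlate positively with the outcome: standardize each predictor across the pool, then average the z-scores with equal weights. The candidate names, predictor labels, and scores below are all illustrative:

```python
import statistics

# Hypothetical predictor scores for four candidates; every predictor
# is assumed to correlate positively with job performance.
candidates = {
    "A": {"skill": 7, "diligence": 5, "experience": 3},
    "B": {"skill": 4, "diligence": 8, "experience": 6},
    "C": {"skill": 6, "diligence": 6, "experience": 5},
    "D": {"skill": 9, "diligence": 4, "experience": 2},
}
predictors = ["skill", "diligence", "experience"]

# Standardize each predictor across the pool. Note: no outcome data
# is used anywhere in building this model.
stats = {}
for p in predictors:
    vals = [c[p] for c in candidates.values()]
    stats[p] = (statistics.fmean(vals), statistics.pstdev(vals))

def equal_weight_score(case):
    """Average of z-scores, all predictors weighted equally."""
    return statistics.fmean(
        (case[p] - mean) / sd for p, (mean, sd) in stats.items()
    )

ranking = sorted(candidates,
                 key=lambda k: equal_weight_score(candidates[k]),
                 reverse=True)
```

Because the weights are fixed in advance, the model cannot chase flukes in any training sample, which is precisely why it often outperforms both human judges and overfitted regressions.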
The authors concluded that the resistance of clinicians can be explained by a combination of sociopsychological factors, including their “fear of technological unemployment,” “poor education,” and a “general dislike of computers.”
Resistance to algorithms, or algorithm aversion, does not always manifest itself in a blanket refusal to adopt new decision support tools. More often, people are willing to give an algorithm a chance but stop trusting it as soon as they see that it makes mistakes.
We expect machines to be perfect. If this expectation is violated, we discard them.
they trust their gut more than any amount of analysis.
Research in managerial decision making has shown that executives, especially the more senior and experienced ones, resort extensively to something variously called intuition, gut feel, or, simply, judgment
this sense of knowing without knowing why is actually the internal signal of judgment completion that we mentioned in chapter 4.
What makes the internal signal important—and misleading—is that it is construed not as a feeling but as a belief.
While both bias and noise contribute to prediction errors, the largest source of such errors is not the limit on how good predictive judgments are. It is the limit on how good they could be. This limit, which we call objective ignorance,
how often would your ex ante judgment and the ex post evaluations agree?
This intractable uncertainty includes everything that cannot be known at this time about the outcome that you are trying to predict.
Both intractable uncertainty (what cannot possibly be known) and imperfect information (what could be known but isn’t) make perfect prediction impossible. These unknowns are not problems of bias or noise in your judgment; they are objective characteristics of the task. This objective ignorance of important unknowns severely limits achievable accuracy. We take a terminological liberty here, replacing the commonly used uncertainty with ignorance. This term helps limit the risk of confusion between uncertainty, which is about the world and the future, and noise, which is variability in judgments.
objective ignorance accumulates steadily the further you look into the future. The limit on expert political judgment is set not by the cognitive limitation of forecasters but by their intractable objective ignorance of the future.
pundits should not be blamed for the failures of their distant predictions. They do, however, deserve some criticism for attempting an impossible task and for believing they can succeed in it.