Noise: A Flaw in Human Judgment
Read between July 23 - December 2, 2021
2%
The general property of noise just mentioned is essential for our purposes in this book, because many of our conclusions are drawn from judgments whose true answer is unknown or even unknowable.
5%
was that guidelines were deeply unfair because they prohibited judges from taking adequate account of the particulars of the case.
5%
judgment is difficult because the world is a complicated, uncertain place.
5%
rampant injustice,
5%
efforts at noise reduction often raise objections and run into serious difficulties. These issues must be addressed, too, or the fight against noise will fail.
6%
They asked forty-two experienced investors in the firm to estimate the fair value of a stock (the price at which the investors would be indifferent to buying or selling). The investors based their analysis on a one-page description of the business; the data included simplified profit and loss, balance sheet, and cash flow statements for the past three years and projections for the next two. Median noise, measured in the same way as in the insurance company, was 41%.
José Antonio Lopez
Harvard cases
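The "median noise" figure is a pairwise measure: for any two experts judging the same case, how far apart are they, relative to their average? A minimal sketch of that calculation in Python, using invented valuations (the 41% is the book's reported result, not something this toy data reproduces):

```python
import itertools
import statistics

# Hypothetical fair-value estimates (in $) from several investors for ONE stock.
# The numbers are invented for illustration.
estimates = [96, 62, 110, 78, 45, 88, 130]

# For every pair of judges: relative difference = |a - b| / mean(a, b).
pair_diffs = [
    abs(a - b) / ((a + b) / 2)
    for a, b in itertools.combinations(estimates, 2)
]

# The noise index is the median of those pairwise differences: the typical
# percentage gap between two randomly chosen experts judging the same case.
noise_index = statistics.median(pair_diffs)
print(f"Median relative difference: {noise_index:.0%}")
```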
7%
Certainly, positive and negative errors in a judgment about the same case will tend to cancel one another out, and we will discuss in detail how this property can be used to reduce noise. But noisy systems do not make multiple judgments of the same case. They make noisy judgments of different cases. If one insurance policy is overpriced and another is underpriced, pricing may on average look right, but the insurance company has made two costly errors.
José Antonio Lopez
Taguchi loss function
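A two-line arithmetic check of the point, with invented dollar errors: the mean error cancels to zero, but a squared-error (Taguchi-style) loss counts both mistakes.

```python
errors = [+500, -500]  # one policy overpriced, one underpriced, in $

print(sum(errors) / len(errors))                  # mean error: 0.0 -- "on average" looks right
print(sum(e ** 2 for e in errors) / len(errors))  # mean squared error: 250000.0 -- two costly errors
```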
7%
The noise audits suggested that respected professionals—and the organizations that employ them—maintained an illusion of agreement while in fact disagreeing in their daily professional judgments.
7%
noise is a consequence of the informal nature of judgment.
José Antonio Lopez
Not necessary
7%
wherever there is judgment, there is noise, and more of it than you think.
9%
Measurement, in everyday life as in science, is the act of using an instrument to assign a value on a scale to an object or event.
José Antonio Lopez
Measuring is comparing
9%
Judgment can therefore be described as measurement in which the instrument is a human mind.
9%
Although accuracy is the goal, perfection in achieving this goal is never achieved even in scientific measurement, much less in judgment. There is always some error, some of which is bias and some of which is noise.
9%
bias is the difference, positive or negative, between the mean of your laps and ten seconds. Noise constitutes the variability of your results, analogous to the scatter of shots
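In the stopwatch analogy, both quantities are simple statistics of the laps. A minimal sketch, with invented times and a 10-second target:

```python
import statistics

TARGET = 10.0  # the true value you are trying to hit, in seconds
laps = [10.4, 10.1, 10.6, 10.3, 10.5]  # invented lap times

bias = statistics.mean(laps) - TARGET  # systematic offset from the target
noise = statistics.stdev(laps)         # scatter of the laps around their own mean

print(f"bias  = {bias:+.2f} s")
print(f"noise = {noise:.2f} s")
```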
10%
Why did you choose, say, 65 rather than 61 or 69? Most likely, at some point, a number came to your mind.
José Antonio Lopez
Likert
10%
Verifiability does not change the experience of judgment. To some degree, you might perhaps
José Antonio Lopez
Acton MBA
11%
why making a judgment about a fictitious character like Gambardi feels very much the same as does making a judgment about the real
José Antonio Lopez
Mind doesn't know
11%
or at least not to disagree too much.
José Antonio Lopez
Deming
12%
System noise is inconsistency, and inconsistency damages the credibility of the system.
José Antonio Lopez
MP
14%
The error equation is the intellectual foundation of this book.
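The equation in question decomposes overall error into the two components above: MSE = Bias² + Noise². A quick numeric verification in Python, reusing the invented lap times (noise here is the population standard deviation, which makes the identity exact):

```python
import statistics

TRUE_VALUE = 10.0
judgments = [10.4, 10.1, 10.6, 10.3, 10.5]  # invented, as in the lap example

mse = statistics.mean((j - TRUE_VALUE) ** 2 for j in judgments)
bias = statistics.mean(judgments) - TRUE_VALUE
noise = statistics.pstdev(judgments)  # population SD, so the identity holds exactly

# Error equation: MSE = bias^2 + noise^2
assert abs(mse - (bias ** 2 + noise ** 2)) < 1e-9
print(f"MSE {mse:.4f} = bias^2 {bias ** 2:.4f} + noise^2 {noise ** 2:.4f}")
```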
17%
Occasion noise is the variability among these unseen possibilities.
23%
The explanations for group polarization are, in turn, similar to the explanations for cascade effects. Information plays a major role.
24%
clinical judgment. You consider the information, perhaps engage in a quick computation, consult your intuition, and come up with a judgment. In fact, clinical judgment is the process that we have described simply as judgment in this book.
24%
The use of multiple regression is an example of mechanical prediction.
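"Mechanical" here just means a fixed formula applied identically to every case. A minimal scikit-learn sketch, with invented cues and outcomes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented training data: each row is a candidate's scores on two cues
# (say, aptitude test and interview rating); y is later job performance.
X = np.array([[70, 3], [85, 4], [60, 2], [90, 5], [75, 3], [80, 4]])
y = np.array([62, 80, 55, 88, 70, 78])

model = LinearRegression().fit(X, y)

# The same weights are applied to every new case -- no mood, no occasion noise.
new_candidate = np.array([[82, 4]])
print(model.predict(new_candidate))
```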
25%
Meehl’s results strongly suggest that any satisfaction you felt with the quality of your judgment was an illusion: the illusion of validity.
25%
The illusion of validity is found wherever predictive judgments are made, because of a common failure to distinguish between two stages of the prediction task: evaluating cases on the evidence available and predicting actual outcomes. You can often be quite confident in your assessment of which of two candidates looks better,
25%
Meehl’s pattern contradicts the subjective experience of judgment, and most of us will trust our experience over a scholar’s claim.
26%
replacing you with a model of you does two things: it eliminates your subtlety, and it eliminates your pattern noise.
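A "model of you" is built by regressing your own past judgments, not the true outcomes, on the cues you saw; the fitted formula then judges in your place. A hedged sketch of that bootstrapping idea, with invented numbers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented cues for past cases, and YOUR ratings of those cases
# (note: the target is your own judgments, not actual outcomes).
cues = np.array([[70, 3], [85, 4], [60, 2], [90, 5], [75, 3]])
your_judgments = np.array([65, 82, 50, 90, 72])

model_of_you = LinearRegression().fit(cues, your_judgments)

# Applied to new cases, the model reproduces your average judgment policy,
# stripped of the case-by-case wobble (your pattern noise).
print(model_of_you.predict(np.array([[80, 4]])))
```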
26%
The problem is that exceptionally original candidates are, by definition, exceptionally rare.
26%
The performance evaluations that could confirm that “originals” end up as superstars are also imperfect.
José Antonio Lopez
Adam Grant's Originals
26%
The advantages of true subtlety are quickly drowned in measurement error.
26%
Their striking finding was that any linear model, when applied consistently to all cases, was likely to outdo human judges in predicting an outcome from the same information.
27%
multiple regression minimizes error in the original data. The formula therefore adjusts itself to predict every random fluke in the data. If, for instance, the sample includes a few managers who have high technical skills and who also performed exceptionally well for unrelated reasons, the model will exaggerate the weight of technical skill.
José Antonio Lopez
AI learning depends on the quality of the raw data. MSFT's racist bot. Watson for hiring. It's not the algorithm, but the model. The classroom is no different.
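The fluke-fitting the authors describe is overfitting, and it shows up as a gap between in-sample and out-of-sample fit. A small synthetic demonstration (data and split invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic data: 20 noisy predictors; only the first weakly matters.
n = 60
X = rng.normal(size=(n, 20))
y = 0.5 * X[:, 0] + rng.normal(size=n)

X_train, X_test = X[:30], X[30:]
y_train, y_test = y[:30], y[30:]

model = LinearRegression().fit(X_train, y_train)

# In-sample fit is flattered by random flukes; out-of-sample fit collapses.
print("in-sample R^2:     ", model.score(X_train, y_train))
print("out-of-sample R^2: ", model.score(X_test, y_test))
```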
27%
implication of Dawes’s work deserves to be widely known: you can make valid statistical predictions without prior data about the outcome that you are trying to predict. All you need is a collection of predictors that you can trust to be correlated with the outcome.
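Dawes called these "improper linear models": standardize every predictor, choose each sign by judgment, and weight them all equally, with no outcome data used for fitting. A minimal sketch of the recipe (the column names and signs are invented):

```python
import numpy as np

def equal_weight_score(X, signs):
    """Dawes-style improper model: z-score each predictor, apply the
    judged sign (+1/-1), and sum with equal weights -- no outcome data
    is used to fit anything."""
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    return (z * signs).sum(axis=1)

# Invented candidate data: columns = test score, experience (yrs), absences.
X = np.array([[70.0, 3, 5],
              [85.0, 4, 2],
              [60.0, 2, 8],
              [90.0, 5, 1]])
signs = np.array([+1, +1, -1])  # absences judged to predict the outcome negatively

print(equal_weight_score(X, signs))  # higher score = better predicted outcome
```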
28%
Using only two inputs, they were able to match the validity of an existing tool that uses 137 variables to assess a defendant’s risk level.
José Antonio Lopez
A difference makes a difference when it makes a difference
29%
The authors concluded that the resistance of clinicians can be explained by a combination of sociopsychological factors, including their “fear of technological unemployment,” “poor education,” and a “general dislike of computers.”
José Antonio Lopez
Back then, computing power didn't reach high levels of intelligence. Expert systems.
29%
Resistance to algorithms, or algorithm aversion, does not always manifest itself in a blanket refusal to adopt new decision support tools. More often, people are willing to give an algorithm a chance but stop trusting it as soon as they see that it makes mistakes.
29%
why bother with an algorithm you can’t trust?
José Antonio Lopez
Self-driving cars vs human drivers
29%
We expect machines to be perfect. If this expectation is violated, we discard them.
30%
they trust their gut more than any amount of analysis.
30%
Research in managerial decision making has shown that executives, especially the more senior and experienced ones, resort extensively to something variously called intuition, gut feel, or, simply, judgment
30%
this sense of knowing without knowing why is actually the internal signal of judgment completion that we mentioned in chapter 4.
30%
What makes the internal signal important—and misleading—is that it is construed not as a feeling but as a belief.
30%
While both bias and noise contribute to prediction errors, the largest source of such errors is not the limit on how good predictive judgments are. It is the limit on how good they could be. This limit, which we call objective ignorance,
30%
how often would your ex ante judgment and the ex post evaluations agree?
30%
This intractable uncertainty includes everything that cannot be known at this time about the outcome that you are trying to predict.
30%
Both intractable uncertainty (what cannot possibly be known) and imperfect information (what could be known but isn’t) make perfect prediction impossible. These unknowns are not problems of bias or noise in your judgment; they are objective characteristics of the task. This objective ignorance of important unknowns severely limits achievable accuracy. We take a terminological liberty here, replacing the commonly used uncertainty with ignorance. This term helps limit the risk of confusion between uncertainty, which is about the world and the future, and noise, which is variability in judgments …
José Antonio Lopez
Bounded rationality
31%
objective ignorance accumulates steadily the further you look into the future. The limit on expert political judgment is set not by the cognitive limitation of forecasters but by their intractable objective ignorance of the future.
31%
pundits should not be blamed for the failures of their distant predictions. They do, however, deserve some criticism for attempting an impossible task and for believing they can succeed in it.
31%
the obviousness of this fact is matched only by the regularity with which it is ignored, as the consistent findings about predictive overconfidence demonstrate.
José Antonio Lopez
It's not only forecasters, but people who need to believe. The mind can't handle unfinished stories.