Noise: A Flaw in Human Judgment
Read between July 31 and August 28, 2024
“Refugee Roulette.”
decision hygiene.
ukases
More generally, people who deal with organizations expect a system that reliably delivers consistent judgments.
They do not expect
system ...
naive realism,
true lesson, about the ubiquity of system noise, will never be learned.
noise is a consequence of the informal nature of judgment.
Our conclusion is simple: wherever there is judgment, there is noise, and more of it than you think.
“Wherever there is judgment, there is noise—and more of it than we think.”
We have defined noise as undesirable variability in judgments of the same problem.
There is no direct way to observe the presence of noise in singular decisions.
if we think counterfactually, we know for sure that noise is there.
our inability to observe variability would not make the decision less noisy.
From the perspective of noise reduction, a singular decision is a recurrent decision that happens only once.
very concept of judgment involves a reluctant acknowledgment that you can never be certain that a judgment is right.
allow for the possibility that reasonable and competent people might disagree.
Selective attention and selective recall are a source of variability across people.
illustrate two types of noise. The variability of judgments over successive trials with the stopwatch is noise within a single judge (yourself), whereas the variability of judgments of the Gambardi case is noise between different judges.
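To make the distinction concrete, here is a minimal sketch with invented numbers of how the two kinds of variability could be computed from repeated judgments of the same problem; the data and judge names are hypothetical, not from the book.

```python
# Hypothetical data: each judge assesses the same problem on several occasions.
# Within-person noise: how much one judge varies across occasions.
# Between-person noise: how much the judges' personal averages differ.
import statistics

judgments = {
    "judge_A": [62, 58, 65, 60],
    "judge_B": [75, 73, 78, 74],
    "judge_C": [55, 57, 52, 56],
}

within_person = statistics.mean(statistics.stdev(scores) for scores in judgments.values())
between_person = statistics.stdev(statistics.mean(scores) for scores in judgments.values())

print(f"within-person noise: {within_person:.1f}")
print(f"between-person noise: {between_person:.1f}")
```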
illustrates within-person reliability, and the second illustrates between-person reliability.
example of a nonverifiable predictive judgment, for two separate reasons: Gambardi is fictitious and the answer is probabilistic.
forecasts may be too long term for the professionals who make them to be brought to account—
fear of being exposed concentrates the mind.
This similarity is important to psychological research, much of which uses made-up problems.
We suggest this feeling is an internal signal of judgment completion, unrelated to any outside information.
The aim of judgment, as you experienced it, was the achievement of a coherent solution.
second way to evaluate judgments.
It consists in evaluating the process of judgment.
Another question that can be asked about the process of judgment is whether it conforms to the principles of logic or probability theory.
All the procedures we recommend in this book to reduce bias and noise aim to adopt the judgment process that would minimize error over an ensemble of similar cases.
Sentencing a felon is not a prediction. It is an evaluative judgment that seeks to match the sentence to the severity of the crime.
trade-offs are resolved by evaluative judgments.
agree that a level of disagreement that turns a judgment into a lottery is problematic.
People who are affected by evaluative judgments expect the values these judgments reflect to be those of the system, not of the individual judges.
System noise is inconsistency, and inconsistency damages the credibility of the system.
All we need to measure noise is multiple judgments of the same problem.
scatter in their forecasts is noise.
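As a minimal illustration of that point (forecast numbers invented): the scatter of several forecasts of the same quantity can be computed without ever knowing the true outcome.

```python
# Five forecasters, one problem. Noise is simply the spread of their answers;
# the true value never enters the calculation.
import statistics

forecasts = [11.2, 9.8, 13.5, 8.9, 12.1]

noise = statistics.stdev(forecasts)
print(f"average forecast: {statistics.mean(forecasts):.2f}")
print(f"noise (standard deviation): {noise:.2f}")
```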
decision requires both predictive and evaluative judgments.”
The different errors add up; they do not cancel out.
in professional judgments of all kinds, whenever accuracy is the goal, bias and noise play the same role in the calculation of overall error.
the measurement and reduction of noise should have the same high priority as the measurement and reduction of bias.
Bias is simply the average of errors,
reduce noise, too? How would the value of such an improvement compare with the value of reducing bias?
mean squared error (MSE)—is the average of the squares of the individual errors of measurement.
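In symbols (notation mine, not the book's), with e_i the error of the i-th judgment:

```latex
\[
\mathrm{MSE} \;=\; \frac{1}{n}\sum_{i=1}^{n} e_i^{2},
\qquad
\mathrm{Bias} \;=\; \bar{e} \;=\; \frac{1}{n}\sum_{i=1}^{n} e_i,
\qquad
\mathrm{Noise} \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(e_i-\bar{e}\bigr)^{2}},
\]
\[
\text{so that}\qquad \mathrm{MSE} \;=\; \mathrm{Bias}^{2} + \mathrm{Noise}^{2}.
\]
```

The second identity is why, as an earlier highlight puts it, bias and noise play the same role in the calculation of overall error: a unit of noise increases MSE exactly as much as a unit of bias.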
The mean contains more information; it is affected by the size of the numbers, while the median is affected only by their order.
intuition about the mean being the best estimate is correct,
arithmetic mean as the value for which error is minimized.
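A one-line check of that claim (standard calculus, not from the book): the sum of squared deviations from a candidate value c is smallest when c is the arithmetic mean.

```latex
\[
f(c)=\sum_{i=1}^{n}(x_i-c)^{2},
\qquad
f'(c)=-2\sum_{i=1}^{n}(x_i-c)=0
\;\Longrightarrow\;
c=\frac{1}{n}\sum_{i=1}^{n}x_i ,
\]
```

and since f is convex, this stationary point is the minimum. The median, by contrast, minimizes the sum of absolute deviations, which is why it responds only to the order of the numbers.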
MSE:
squaring gives large errors a far greater weight than it gives small ones.
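A quick numerical illustration of that weighting (arbitrary numbers): under squared error, a single error of 10 costs as much as a hundred errors of 1.

```python
# One large error versus many small ones, compared by total squared-error cost.
one_large = [10]
many_small = [1] * 100

print(sum(e ** 2 for e in one_large))    # 100
print(sum(e ** 2 for e in many_small))   # 100
```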