Kindle Notes & Highlights
The informality allowed you to work quickly. It also produced variability:
ex ante
If an event that was assigned a probability of 90% fails to happen, the judgment of probability was not necessarily a bad one. After all, outcomes that are just 10% likely to happen end up happening 10% of the time.
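Note: a tiny simulation (invented numbers, not from the book) makes this concrete. A perfectly calibrated forecaster who assigns 90% probability to many independent events will still see roughly one in ten of them fail to happen.

```python
# Illustrative sketch (invented, not from the book): a perfectly calibrated
# forecaster assigns 90% to many events; about 10% of them still fail to occur.
import random

random.seed(0)
trials = 10_000
failures = sum(1 for _ in range(trials) if random.random() > 0.90)
print(f"share of 90%-probability events that failed: {failures / trials:.3f}")
# Expected to be close to 0.10 -- a single miss is not, by itself, evidence of bad judgment.
```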
nonverifiable predictive...
Many professional judgments are nonverifiable.
Verifiability does not change the experience of judgment.
To some degree, you might perhaps think harder about a problem whose answer will be revealed soon, because the fear of being exposed concentrates the mind. Conversely, you might refuse to give much thought to a problem so hypothetical as to be absurd
internal signal of judgment completion, unrelated to any outside information.
the internal signal is just as available for nonverifiable judgments as it is for real, verifiable ones. This explains why making a judgment about a fictitious character like Gambardi feels very much the same as does making a judgment about the real world.
there is a second way to evaluate judgments. This approach applies both to verifiable and nonverifiable ones. It consists in evaluating the process of judgment.
When we speak of good or bad judgments, we may be speaking either about the output (e.g., the number you produced in the Gambardi case) or about the process—what you did to arrive at that number.
We have contrasted two ways of evaluating a judgment: by comparing it to an outcome and by assessing the quality of the process that led to it. Note that when the judgment is verifiable, the two ways of evaluating it may reach different conclusions in a single case.
Scholars of decision-making offer clear advice to resolve this tension: focus on the process, not on the outcome of a single case. We recognize, however, that this is not standard practice in real life.
Sentencing a felon is not a prediction. It is an evaluative judgment that seeks to match the sentence to the severity of the crime.
final decisions entail trade-offs between the pros and cons of various options, and these trade-offs are resolved by evaluative judgments.
predictive judgments,
decision makers who choose from several strategic options expect colleagues and observers who have the same information and share the same goals to agree with them, or at least not to disagree too much.
Evaluative judgments partly depend on the values and preferences of those making them, but they are not mere matters of taste or opinion.
The observation of noise in predictive judgments always indicates that something is wrong.
Noise in evaluative judgments is problematic for a different reason. In any system in which judges are assumed to be interchangeable and assigned quasi-randomly, large disagreements about the same case violate expectations of fairness and consistency.
“arbitrary cruelties”
System noise is inconsistency, and inconsistency damages the credibility of the system.
noise is undesirable and often measurable.
“A decision requires both predictive and evaluative judgments.”
An important question, therefore, is how, and how much, bias and noise contribute to error.
the standard deviation represents a typical distance from the mean.
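Note: a minimal sketch with invented numbers, showing the standard deviation as a typical (root-mean-square) distance of a set of judgments from their mean.

```python
# Hypothetical judgments of the same quantity (numbers invented for illustration).
judgments = [62, 70, 71, 74, 83]

mean = sum(judgments) / len(judgments)
variance = sum((x - mean) ** 2 for x in judgments) / len(judgments)
std_dev = variance ** 0.5

print(f"mean = {mean:.1f}")                    # 72.0
print(f"standard deviation = {std_dev:.1f}")   # about 6.8
# The standard deviation is the root-mean-square distance of the judgments
# from their mean -- a typical spread, expressed in the same units.
```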
would it have been a good idea a year ago—and would it be a good idea now—to reduce noise, too? How would the value of such an improvement compare with the value of reducing bias?
the method of least squares, invented in 1795 by Carl Friedrich Gauss,
Gauss proposed a rule for scoring the contribution of individual errors to overall error. His measure of overall error—called mean squared error (MSE)—is the average of the squares of the individual errors of measurement.
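Note: a short sketch, again with invented numbers, of MSE as described here: the average of the squared individual errors of measurement.

```python
# Hypothetical measurements of a known true value (invented for illustration).
true_value = 100
measurements = [97, 99, 102, 104, 108]

errors = [m - true_value for m in measurements]
mse = sum(e ** 2 for e in errors) / len(errors)

print(f"errors = {errors}")   # [-3, -1, 2, 4, 8]
print(f"MSE = {mse}")         # (9 + 1 + 4 + 16 + 64) / 5 = 18.8
```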
the best estimate is one that minimizes the overall error of the available measurements.
the formula you use to measure overall error should be one that yields the arithmetic mean as the value for which error is minimized.
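Note: a quick numerical check of this claim, using the same hypothetical data. Sweeping over candidate estimates, the one that minimizes MSE is the arithmetic mean of the measurements.

```python
measurements = [97, 99, 102, 104, 108]  # hypothetical data, as above

def mse(candidate, data):
    return sum((x - candidate) ** 2 for x in data) / len(data)

# Sweep candidate estimates in steps of 0.01 and keep the one with the lowest MSE.
candidates = [c / 100 for c in range(9000, 11001)]  # 90.00 ... 110.00
best = min(candidates, key=lambda c: mse(c, measurements))

print(f"MSE-minimizing estimate = {best:.2f}")                                   # 102.00
print(f"arithmetic mean         = {sum(measurements) / len(measurements):.2f}")  # 102.00
```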
a key feature of MSE: squaring gives large errors a far greater weight than it gives small ones.
The role of bias and noise in error is easily summarized in two expressions that we will call the error equations.
two components with which you are now familiar: bias—the average error—and a residual “noisy error.” The noisy error is positive when the error is larger than the bias, negative when it is smaller. The average of noisy errors is zero. Nothing new in the first error equation.
noise is the standard deviation of measurements, which is identical to the standard deviation of noisy errors.
Overall Error (MSE) = Bias² + Noise²
the Pythagorean theorem.
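Note: a small numerical check of the second error equation, MSE = Bias² + Noise², on invented judgments of a known true value. Bias is the average error and noise is the standard deviation of the errors, as defined above.

```python
# Hypothetical judgments of a known quantity (numbers invented for illustration).
true_value = 100
judgments = [93, 97, 100, 103, 112]

errors = [j - true_value for j in judgments]
n = len(errors)

bias = sum(errors) / n                                      # average error = 1.0
noise = (sum((e - bias) ** 2 for e in errors) / n) ** 0.5   # SD of errors, about 6.42
mse = sum(e ** 2 for e in errors) / n                       # 42.2

print(f"bias = {bias:.1f}, noise = {noise:.2f}, MSE = {mse:.1f}")
print(f"bias² + noise² = {bias ** 2 + noise ** 2:.1f}")     # 42.2 -- equal to MSE
```

Because the deviations of the errors around the bias average out to zero, the cross term vanishes and the two components add like the sides of a right triangle, which is the sense of the Pythagorean analogy.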
bias and noise play identical roles in the error equation. They are independent of each other and equally weighted in the determination of overall error.
bias and noise are interchangeable in the error equation, and the decrease in overall error will be the same, regardless of which of the two is reduced.
In terms of overall error, noise and bias are independent: the benefit of reducing noise is the same, regardless of the amount of bias. This notion is highly counterintuitive but crucial.
To help you appreciate what has been accomplished in both panels, the original distribution of errors (from figure 4) is represented by a broken line.
The relevant measure of bias is not the imbalance of positive and negative errors. It is average error, which is the distance between the peak of the bell curve and the true value.
MSE conflicts with common intuitions about the scoring of predictive judgments. To minimize MSE, you must concentrate on avoiding large errors.
the effect of reducing an error from 11cm to 10cm is 21 times as large as the effect of going from an error of 1cm to a perfect hit. Unfortunately, people’s intuitions in this regard are almost the mirror image of what they should be: people are very keen to get perfect hits and highly sensitive to small errors, but they hardly care at all about the difference between two large errors. Even if you sincerely believe that your goal is to make accurate judgments, your emotional reaction to results may be incompatible with the achievement of accuracy as science defines it.
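Note: the arithmetic behind the "21 times" figure, spelled out.

```python
# The squared-error weighting described above, made explicit.
large_error_gain = 11 ** 2 - 10 ** 2   # error from 11 cm to 10 cm: 121 - 100 = 21
small_error_gain = 1 ** 2 - 0 ** 2     # error from 1 cm to 0 cm:   1 - 0     = 1

print(large_error_gain, small_error_gain)   # 21 1 -> the first step cuts MSE 21 times as much
```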
if GoodSell decides to reduce noise, the fact that noise reduction makes bias more visible—indeed, impossible to miss—may turn out to be a blessing. Achieving noise reduction will ensure that bias reduction is next on the company’s agenda. Admittedly, reducing noise would be less of a priority if bias were much larger than noise.
predictive judgments,
maximum accuracy (the least bias)
precision (the least noise)
The error equation does not apply to evaluative judgments.
For a company that makes elevators, for example, the consequences of errors in estimating the maximum load of an elevator are obviously asymmetrical: underestimation is costly, but overestimation could be catastrophic.