Noise: A Flaw in Human Judgment
Kindle Notes & Highlights
Read between April 6 - July 24, 2024
1%
Bias and noise—systematic deviation and random scatter—are different components of error. The targets illustrate the difference.
1%
Some judgments are biased; they are systematically off target. Other judgments are noisy, as people who are expected to agree end up at very different points around the target. Many organizations, unfortunately, are afflicted by both bias and noise.
2%
A general property of noise is that you can recognize and measure it while knowing nothing about the target or bias.
2%
Wherever you look at human judgments, you are likely to find noise. To improve the quality of our judgments, we need to overcome noise as well as bias.
3%
wherever there is judgment, there is noise—and more of it than you think.
6%
A defining feature of system noise is that it is unwanted, and we should stress right here that variability in judgments is not always unwanted. Consider matters of preference or taste. If ten film critics watch the same movie, if ten wine tasters rate the same wine, or if ten people read the same novel, we do not expect them to have the same opinion. Diversity of tastes is welcome and entirely expected.
6%
Variability in judgments is also expected and welcome in a competitive situation in which the best judgments will be rewarded. When several companies (or several teams in the same organization) compete to generate innovative solutions to the same customer problem, we don’t want them to focus on the same approach. The same is true when multiple teams of researchers attack a scientific problem, such as the development of a vaccine: we very much want them to look at it from different angles.
7%
Most of us, most of the time, live with the unquestioned belief that the world looks as it does because that’s the way it is. There is one small step from this belief to another: “Other people view the world much the way I do.” These beliefs, which have been called naive realism, are essential to the sense of a reality we share with other people. We rarely question these beliefs. We hold a single interpretation of the world around us at any one time, and we normally invest little effort in generating plausible alternatives to it. One interpretation is enough, and we experience it as true. We …
8%
Speaking of Singular Decisions

“The way you approach this unusual opportunity exposes you to noise.”

“Remember: a singular decision is a recurrent decision that is made only once.”

“The personal experiences that made you who you are are not truly relevant to this decision.”
9%
Judgment can therefore be described as measurement in which the instrument is a human mind. Implicit in the notion of measurement is the goal of accuracy—to approach truth and minimize error. The goal of judgment is not to impress, not to take a stand, not to persuade.
9%
The variability you could not control is an instance of noise.
9%
like a measuring instrument, the human mind is imperfect—it is both biased and noisy.
10%
Selective attention and selective recall are a source of variability across people.
11%
Focusing on the process of judgment, rather than its outcome, makes it possible to evaluate the quality of judgments that are not verifiable, such as judgments about fictitious problems or long-term forecasts. We may not be able to compare them to a known outcome, but we can still tell whether they have been made incorrectly. And when we turn to the question of improving judgments rather than just evaluating them, we will focus on process, too.
13%
FIGURE 7: Two decompositions of MSE

As the mathematical expression and its visual representation both suggest, bias and noise play identical roles in the error equation. They are independent of each other and equally weighted in the determination of overall error.
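The decomposition the highlight refers to (mean squared error = bias² + noise²) can be checked numerically. A minimal sketch; the target of 100, bias of 5, and noise SD of 3 are illustrative assumptions, not figures from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

truth = 100.0
# Simulated judgments: systematically off target (bias) plus random scatter (noise).
judgments = truth + 5.0 + rng.normal(0.0, 3.0, size=100_000)

errors = judgments - truth
mse = np.mean(errors ** 2)

bias = np.mean(errors)   # systematic deviation from the target
noise = np.std(errors)   # standard deviation of the scatter

# MSE decomposes exactly into bias squared plus noise squared.
print(round(mse, 2), round(bias ** 2 + noise ** 2, 2))
```

Because the two terms enter the equation symmetrically, shrinking noise by a given amount reduces overall error just as much as shrinking bias by the same amount.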
18%
the wisdom-of-crowds effect: averaging the independent judgments of different people generally improves accuracy.
18%
Of course, if questions are so difficult that only experts can come close to the answer, crowds will not necessarily be very accurate. But when, for instance, people are asked to guess the number of jelly beans in a transparent jar, to predict the temperature in their city one week out, or to estimate the distance between two cities in a state, the average answer of a large number of people is likely to be close to the truth.
18%
The reason is basic statistics: averaging several independent judgments (or measurements) yields a new judgment, which is less noisy, albeit not less biased, than the individual judgments.
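The "basic statistics" can be made concrete with a small simulation: averaging independent judgments shrinks noise by roughly 1/√n while leaving a shared bias untouched. The bias of 2 units, noise SD of 6, and panel of 10 judges are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

truth = 100.0
n_judges, n_cases = 10, 50_000

# Every judge shares the same bias; each adds independent noise.
judgments = truth + 2.0 + rng.normal(0.0, 6.0, size=(n_cases, n_judges))

single = judgments[:, 0]            # one judge per case
averaged = judgments.mean(axis=1)   # panel average per case

print(single.std(), averaged.std())                    # noise: ~6.0 vs ~1.9
print(single.mean() - truth, averaged.mean() - truth)  # bias: ~2.0 in both
```

The averaged judgments scatter far less than any single judge's, but they remain just as far off target on average, which is why aggregation fixes noise and not bias.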
18%
can you get closer to the truth by combining two guesses from the same person, just as you do when you combine the guesses of different people? As they discovered, the answer is yes. Vul and Pashler gave this finding an evocative name: the crowd within.
18%
“You can gain about 1/10th as much from asking yourself the same question twice as you can from getting a second opinion from someone else.” This is not a large improvement. But you can make the effect much larger by waiting to make a second guess.
18%
this result certainly provides a rationale for the age-old advice to decision makers: “Sleep on it, and think again in the morning.”
18%
if you can get independent opinions from others, do it—this real wisdom of crowds is highly likely to improve your judgment. If you cannot, make the same judgment yourself a second time to create an “inner crowd.” You can do this either after some time has passed—giving yourself distance from your first opinion—or by actively trying to argue against yourself to find another perspective on the problem. Finally, regardless of the type of crowd, unless you have very strong reasons to put more weight on one of the estimates, your best bet is to average them.
18%
mood has a measurable influence on what you think: what you notice in your environment, what you retrieve from your memory, how you make sense of these signals. But mood has another, more surprising effect: it also changes how you think.
18%
The costs and benefits of different moods are situation-specific.
19%
People who are in a good mood are more likely to let their biases affect their thinking.
19%
The propensity to find meaning in such statements is a trait known as bullshit receptivity. (Bullshit has become something of a technical term since Harry Frankfurt, a philosopher at Princeton University, published an insightful book, On Bullshit, in which he distinguished bullshit from other types of misrepresentation.)
19%
Inducing good moods makes people more receptive to bullshit and more gullible in general; they are less apt to detect deception or identify misleading information. Conversely, eyewitnesses who are exposed to misleading information are better able to disregard it—and to avoid false testimony—when they are in a bad mood.
19%
truth: you are not the same person at all times. As your mood varies (something you are, of course, aware of), some features of your cognitive machinery vary with it (something you are not fully aware of).
19%
In short, you are noisy.
19%
gambler’s fallacy: we tend to underestimate the likelihood that streaks will occur by chance.
19%
Or to put it differently, you are not always the same person, and you are less consistent over time than you think. But somewhat reassuringly, you are more similar to yourself yesterday than you are to another person today.
20%
These findings suggest that memory performance is driven in large part by, in Kahana and coauthors’ words, “the efficiency of endogenous neural processes that govern memory function.” In other words, the moment-to-moment variability in the efficacy of the brain is not just driven by external influences, like the weather or a distracting intervention. It is a characteristic of the way our brain itself functions.
20%
It is very likely that intrinsic variability in the functioning of the brain also affects the quality of our judgments in ways that we cannot possibly hope to control. This variability in brain function should give pause to anyone who thinks occasion noise can be eliminated.
20%
“Although you may not be the same person you were last week, you are less different from the ‘you’ of last week than you are from someone else today. Occasion noise is not the largest source of system noise.”
21%
crowds were indeed wise as long as they registered their views independently. But if they learned the estimates of other people—for example, the average estimate of a group of twelve—the crowd did worse.
21%
As the authors put it, social influences are a problem because they reduce “group diversity without diminishing the collective error.” The irony is that while multiple independent opinions, properly aggregated, can be strikingly accurate, even a little social influence can produce a kind of herding that undermines the wisdom of crowds.
23%
“Everything seems to depend on early popularity. We’d better work hard to make sure that our new release has a terrific first week.”

“As I always suspected, ideas about politics and economics are a lot like movie stars. If people think that other people like them, such ideas can go far.”

“I’ve always been worried that when my team gets together, we end up confident and unified—and firmly committed to the course of action that we choose. I guess there’s something in our internal processes that isn’t going all that well!”
25%
The findings support a blunt conclusion: simple models beat humans.
25%
people are inferior to statistical models in many ways. One of their critical weaknesses is that they are noisy.
25%
in short, when we make judgments that are not reducible to a plain operation of weighted averaging. The model-of-the-judge studies reinforce Meehl’s conclusion that the subtlety is largely wasted. Complexity and richness do not generally lead to more accurate predictions.
26%
The effect of removing noise from your judgments will always be an improvement of your predictive accuracy.
26%
The robust finding that the model of the judge is more valid than the judge conveys an important message: the gains from subtle rules in human judgment—when they exist—are generally not sufficient to compensate for the detrimental effects of noise. You may believe that you are subtler, more insightful, and more nuanced than the linear caricature of your thinking. But in fact, you are mostly noisier.
26%
“People believe they capture complexity and add subtlety when they make judgments. But the complexity and the subtlety are mostly wasted—usually they do not add to the accuracy of simple models.”
26%
“There is so much noise in judgment that a noise-free model of a judge achieves more accurate predictions than the actual judge does.”
29%
More often, people are willing to give an algorithm a chance but stop trusting it as soon as they see that it makes mistakes. On one level, this reaction seems sensible: why bother with an algorithm you can’t trust? As humans, we are keenly aware that we make mistakes, but that is a privilege we are not prepared to share. We expect machines to be perfect. If this expectation is violated, we discard them.
29%
Because of this intuitive expectation, however, people are likely to distrust algorithms and keep using their judgment, even when this choice produces demonstrably inferior results. This attitude is deeply rooted and unlikely to change until near-perfect predictive accuracy can be achieved.
29%
“When there is a lot of data, machine-learning algorithms will do better than humans and better than simple models. But even the simplest rules and algorithms have big advantages over human judges: they are free of noise, and they do not attempt to apply complex, usually invalid insights about the predictors.”

“Since we lack data about the outcome we must predict, why don’t we use an equal-weight model? It will do almost as well as a proper model, and will surely do better than case-by-case human judgment.”

“You disagree with the model’s forecast. I get it. But is there a broken leg here, or …
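The equal-weight idea mentioned in these quotes can be illustrated with a small simulation: standardize each predictor, weight them all equally, and compare against a regression fitted on the outcome. The three cues and their true weights below are hypothetical, not from the book:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: three predictive cues for 200 cases.
n = 200
cues = rng.normal(size=(n, 3))
outcome = cues @ np.array([0.5, 0.3, 0.2]) + rng.normal(0.0, 1.0, n)

# Equal-weight model: standardize each cue, then give every cue weight 1.
# Note: no outcome data is needed to build this model.
z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
equal_weight_score = z.sum(axis=1)
r_equal = np.corrcoef(equal_weight_score, outcome)[0, 1]

# A "proper" model fitted on the outcome, for comparison.
w, *_ = np.linalg.lstsq(z, outcome, rcond=None)
r_fitted = np.corrcoef(z @ w, outcome)[0, 1]

print(round(r_equal, 2), round(r_fitted, 2))  # equal weights come close
```

The fitted model wins in-sample by construction, but the gap is typically small, which is the point of the quote: when outcome data is unavailable, equal weights are a serviceable substitute.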
30%
Both intractable uncertainty (what cannot possibly be known) and imperfect information (what could be known but isn’t) make perfect prediction impossible. These unknowns are not problems of bias or noise in your judgment; they are objective characteristics of the task. This objective ignorance of important unknowns severely limits achievable accuracy.
30%
We take a terminological liberty here, replacing the commonly used uncertainty with ignorance. This term helps limit the risk of confusion between uncertainty, which is about the world and the future, and noise, which is variability in judgments that should be identical.
30%
Overconfidence is one of the best-documented cognitive biases.