Kindle Notes & Highlights
Read between November 16 - November 17, 2022
Two American college students in the United States might look at each other and see a total stranger; the same two college students on their junior year abroad in Togo might find that they are surprisingly similar: They’re both Americans!
By changing the context in which two things are compared, you submerge certain features and force others to the surface.
“It is generally assumed that classifications are determined by similarities among the objects,” wrote Amos, before offering up an opposing view: that “the similarity of objects is modified by the manner in which they are classified. Thus, similarity has two faces: causal and derivative. It serves as a basis for the classification of objects, but is also influenced by the adopted classification.” A ban...
Things are grouped together for a reason, but, once they are grouped, their grouping causes them to seem more like each other than they otherwise would. That is, the mere act of classification reinforces stereotypes. If you w...
“Danny said to me, ‘It’s okay, just learn the books.’ And I said, ‘What do you mean, just learn the books?’ And he said, ‘Take the books with you and memorize them.’” And so that’s what Avi had done. He returned to Danny’s classroom just in time for the final exam. He’d memorized the books.
“He even asked me, ‘I’m still the same man, right?’” It was obvious to Avi, and to everyone else but Danny, that the student was a fool. “Danny was the best teacher at Hebrew University,” said Avi, “but it was very hard to convince him that the review didn’t matter—that he was excellent.”
Danny’s volatility was a weakness and, less obviously, also a strength. It led him, almost inadvertently, to broaden himself. It turned out that Danny never really had to decide what kind of psychologist he would be. He could be, and would be, many different kinds of psychologists. At the same time that he was losing
(A wave of anxiety had swept the United States in the late 1950s, thanks to a book by Vance Packard, called The Hidden Persuaders, about the power of advertising to warp people’s decisions by influencing them subconsciously.
He later confessed he’d made it all up.)
Cherry, a cognitive scientist, had identified what became known as the “cocktail party effect.” The cocktail party effect was the ability of people to filter a lot of noise for the sounds they wished to hear—as they did when they listened to someone at a cocktail party.
The most effective way to teach people longer strings of information was to feed the information into their minds in smaller chunks. To this, Shapira recalled, Danny added his own twist. “He says you only tell them a few things—and get them to sing it.” Danny loved the idea of the “action song.” In his statistics classes he had actually asked his students to sing the formulas. “He forced you to engage with problems,”
“Someone once said that education was knowing what to do when you don’t know,” said one of his students. “Danny took that idea and ran with it.”
Danny’s students left every class with a sense that there was really no end to the problems in this world. Danny found problems where none seemed to exist; it was as if he structured the world around him so that it might be understood chiefly as a problem. To each new class the students arrived wondering what problem he might bring for them to solve. Then one day he brought them Amos Tversky.
A lot of psychologists at the time, including Danny, were using sample sizes of 40 subjects, which gave them only a 50 percent chance of accurately reflecting the population. To have a 90 percent chance of capturing the traits of the larger population, the sample size needed to be at least 130. To gather a larger sample of course required a lot more work, and thus slowed a research career.
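A rough way to see the power argument in this highlight, sketched in Python. This is my own illustration, not the calculation behind the book's 40-and-130 figures: it assumes a two-group comparison, a "medium" true effect (Cohen's d = 0.5), and a two-sided test at alpha = 0.05, so the exact percentages depend entirely on those assumptions.

```python
# Sketch of the statistical-power point, under assumed parameters (d = 0.5,
# alpha = 0.05, two-sided, equal group sizes). Uses a standard normal
# approximation for the power of a two-sample comparison of means.
from scipy.stats import norm

def approx_power(n_per_group, d=0.5, alpha=0.05):
    """Approximate power: probability of detecting a true effect of size d."""
    z_crit = norm.ppf(1 - alpha / 2)               # two-sided critical value
    noncentrality = d * (n_per_group / 2) ** 0.5   # signal-to-noise at this sample size
    return norm.cdf(noncentrality - z_crit)

for n in (20, 40, 65, 130):
    print(f"n per group = {n:3d}  ->  power ~ {approx_power(n):.2f}")
```

The shape of the output is the point: a small sample leaves a large chance of missing a real effect, and that chance shrinks quickly as the sample grows.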
And so he quit and bought a building in a leafy Eugene neighborhood that had most recently housed a Unitarian church, and renamed it the Oregon Research Institute. A private institution devoted exclusively to the study of human behavior, there was nothing in the world like it, and it soon attracted both curious assignments and unusual people. “Here brainy people, working in the proper atmosphere, go quietly about their task of finding out what makes us tick,” a local Eugene paper reported.
in 1968, in an academic journal called American Psychologist. He began by pointing out the small mountain of research that suggested that expert judgment was less reliable than algorithms. “I can summarize this ever-growing body of literature,” wrote Goldberg, “by pointing out that over a rather large array of clinical judgment tasks (including by now some which were specifically selected to show the clinician at his best and the actuary at his worst), rather simple actuarial formulae typically can be constructed to perform at a level of validity no lower than that of the clinical expert.”
But then UCLA sent back the analyzed data, and the story became unsettling. (Goldberg described the results as “generally terrifying.”) In the first place, the simple model that the researchers had created as their starting point for understanding how doctors rendered their diagnoses proved to be extremely good at predicting the doctors’ diagnoses.
That did not mean that their thinking was necessarily simple, only that it could be captured by a simple model.
“Accuracy on this task was not associated with the amount of professional experience of the judge.”
Still, Goldberg was slow to blame the doctors. Toward the end of his paper, he suggested that the problem might be that doctors and psychiatrists seldom had a fair chance to judge the accuracy of their thinking and, if necessary, change it. What was lacking was “immediate feedback.”
encouraging. “It now appears that our initial formulation of the problem of learning clinical inference was far too simple—that a good deal more than outcome feedback is necessary for judges to learn a task as difficult as this one,” wrote Goldberg.
The simple algorithm had outperformed not merely the group of doctors; it had outperformed even the single best doctor. You could beat the doctor by replacing him with an equation created by people who knew nothing about medicine and had simply asked a few questions of doctors.
“While he possesses his full share of human learning and hypothesis-generating skills, he lacks the machine’s reliability. He ‘has his days’: Boredom, fatigue, illness, situational and interpersonal distractions all plague him, with the result that his repeated judgments of the exact same stimulus configuration are not identical. . . . If we could remove some of this human unreliability by eliminating this random error in his judgments, we should thereby increase the validity of the resulting predictions . . .”
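Goldberg's data aren't reproduced here, but the logic of the passage above can be sketched on made-up numbers: fit a simple model to the expert's own judgments, then let the model judge instead. Everything in this sketch (the five cues, the weights, the noise levels) is invented for illustration; the only claim is structural: strip out the expert's random inconsistency, and the model of the expert tracks the true outcome better than the expert does.

```python
# Toy "model of the judge" on synthetic data (not Goldberg's study).
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 500, 5

cues = rng.normal(size=(n_cases, n_cues))            # e.g., test results per patient
true_weights = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
outcome = cues @ true_weights + rng.normal(scale=1.0, size=n_cases)

# The "doctor": roughly the right weights, plus day-to-day inconsistency.
doctor = cues @ true_weights + rng.normal(scale=1.5, size=n_cases)

# The model of the doctor: ordinary least squares fit to his own judgments.
coef, *_ = np.linalg.lstsq(cues, doctor, rcond=None)
model_of_doctor = cues @ coef

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("doctor vs. outcome:          ", round(corr(doctor, outcome), 2))
print("model of doctor vs. outcome: ", round(corr(model_of_doctor, outcome), 2))
```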
People predict by making up stories
People predict very little and explain everything
People live under uncertainty whether they like it or not
People believe they can tell the future if they work hard enough
People accept any explanation as long as it fits the facts
The handwriting was on the wall, it was just the ink that was invisible
People often work hard to obtain information they already have
And avoid new knowledge
Man is a deterministic device thrown into a probabilistic Universe
In this match, surprises are expected
Everything that has already happened must have been inevitable
Don Redelmeier
“Eighty percent of doctors don’t think probabilities apply to their patients,” he said. “Just like 95 percent of married couples don’t believe the 50 percent divorce rate applies to them, and 95 percent of drunk drivers don’t think the statistics that show that you are more likely to be killed if you are driving drunk than if you are driving sober applies to them.”
“It was such a loss of so many life years,” he said. “It was such a preventable case. And the guy hadn’t been wearing a helmet.” Redelmeier was newly struck by the inability of human beings to judge risks, even when their misjudgment might kill them.
What is it with you freedom-loving Americans? he asked. Live free or die. I don’t get it. I say, “Regulate me gently. I’d rather live.” His fellow student replied, Not only do a lot of Americans not share your view; other physicians don’t share your view.
“Military psychology is alive and well in Israel,” concluded the United States Navy’s reporter on the ground. “It is an interesting question whether or not the psychology of the Israelis is becoming a military one.”
“Cognitive Limitations and Public Decision Making.” It was troubling to consider, he began, “an organism equipped with an affective and hormonal system not much different from that of the jungle rat being given the ability to destroy every living thing by pushing a few buttons.”
The distinction between judgment and decision making appeared as fuzzy as the distinction between judgment and prediction.
But to Amos, as to other mathematical psychologists, they were distinct fields of inquiry. A person making a judgment was assigning odds.
Then he directed Danny to a very long chapter called “Individual Decision Making.”
Daniel Bernoulli.
“expected utility theory,”
He of course knew that people made decisions that the theory would not have predicted. Amos himself had explored how people could be—as the theory assumed they were not—reliably “intransitive.”
If people mostly chose option 1, it was because they sensed the special pain they would experience if they chose option 2 and won nothing. Avoiding that pain became a line item on the inner calculation of their expected utility. Regret was the ham in the back of the deli that caused people to switch from turkey to roast beef.
When they made decisions, people did not seek to maximize utility. They sought to minimize regret.
When choosing between sure things and gambles, people’s desire to avoid loss exceeded their desire to secure gain.
“For most people, the happiness involved in receiving a desirable object is smaller than the unhappiness involved in losing the same object.”
“A very different view of man as a decision maker might well have emerged if the outcomes of decisions in the private-personal, political or strategic domains had been as easily measurable as monetary gains and losses,” they wrote.
A loss, according to the theory, was when a person wound up worse off than his “reference point.” But what was this reference point? The easy answer was: wherever you started from. Your status quo. A loss was just when you ended up worse than your status quo. But how did you determine any person’s status quo? “In the experiments it’s pretty clear what a loss is,” Arrow said later. “In the real world it’s not so clear.”
The reference point was a state of mind. Even in straight gambles you could shift a person’s reference point and make a loss seem like a gain, and vice versa. In so doing, you could manipulate the choices people made, simply by the way they were described.
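These last few highlights, on losses looming larger than gains and on the reference point being a movable state of mind, are the seed of what later became prospect theory's value function. The sketch below uses the conventional textbook form and coefficients (an exponent of 0.88 and a loss-aversion multiplier of 2.25); those numbers come from the later literature, not from this passage.

```python
# Textbook-style value function: outcomes are valued relative to a reference
# point, and losses are weighted more heavily than equal-sized gains.
# Parameter values are the conventional ones, assumed for illustration.
def value(outcome, reference=0.0, alpha=0.88, loss_aversion=2.25):
    x = outcome - reference                  # gains and losses are relative
    if x >= 0:
        return x ** alpha
    return -loss_aversion * (-x) ** alpha

# Losing $100 hurts more than winning $100 pleases.
print(value(+100), value(-100))              # ~57.5 vs. ~-129.5

# Shifting the reference point reframes the same $900 outcome:
print(value(900, reference=0))               # read as a gain
print(value(900, reference=1000))            # read as a $100 loss
```

The second pair of prints is the framing point from the highlight above: the same $900 is valued as a gain against a $0 reference point and as a loss against a $1,000 one.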
People faced with a risky choice failed to put it in context. They evaluated it in isolation.
Richard Thaler.
as Thaler’s other pronounced trait was a sense of ineptitude. When he was ten or eleven years old, and a B student, his father, a detail-oriented insurance executive, had grown so frustrated with his sloppy schoolwork that he handed his son The Adventures of Tom Sawyer and told him to copy a few pages exactly as Mark Twain had written them. Thaler tried, seriously. “I did it over and over, kicking and screaming.” Each time, his father found errors—missing words, missing commas. The quotation marks in an exchange between Tom and Aunt Polly confounded him. Looking back on it, he could see that
Possibly one of the most relatable childhood descriptions I have found. Except I still lack the accolades.
A researcher named J. Allan Hobson. It shouldn’t have been that hard. In a series of famous papers, Hobson had landed body blows on the Freudian idea that dreams arose from unconscious desires, by showing that they actually came from a part of the brain that had nothing to do with desire. He’d proven that the timing and the length of dreams were regular and predictable, which suggested that dreams had less to say about a person’s psychological state than about his nervous system.