Different practitioners focused on different cues. Some watched for breathing problems while others monitored how soon the baby cried. Without a standardized procedure, danger signs were often missed, and many newborn infants died.
One day over breakfast, a medical resident asked how Dr. Apgar would make a systematic assessment of a newborn. “That’s easy,” she replied. “You would do it like this.” Apgar jotted down five variables (heart rate, respiration, reflex, muscle tone, and color) and three scores (0, 1, or 2, depending on the robustness of each sign). Realizing that she might have made a breakthrough that any delivery room could implement, Apgar began rating infants by this rule one minute after they were born.
A baby with a total score of 8 or above was likely to be pink, squirming, crying, grimacing, with a pulse...
A baby with a score of 4 or below was probably bluish, flaccid, passive, with a slow or weak pulse—in n...
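The Apgar rule described above is simple enough to sketch in a few lines. The five signs, the 0–2 rating per sign, and the 8-and-above / 4-and-below bands come from the passage; the variable names, the band labels, and the example ratings are my own illustrative assumptions, not the clinical definition.

```python
# Sketch of the Apgar scoring rule: five signs, each rated 0, 1, or 2,
# summed one minute after birth. Names and labels are illustrative.

APGAR_SIGNS = ("heart_rate", "respiration", "reflex", "muscle_tone", "color")

def apgar_score(ratings):
    """Sum the five sign ratings; each must be 0, 1, or 2."""
    for sign in APGAR_SIGNS:
        if ratings[sign] not in (0, 1, 2):
            raise ValueError(f"{sign} must be rated 0, 1, or 2")
    return sum(ratings[sign] for sign in APGAR_SIGNS)

def interpret(score):
    """Map a total score to the two bands mentioned in the text."""
    if score >= 8:
        return "likely in good shape"
    if score <= 4:
        return "in danger, needs immediate intervention"
    return "intermediate"

# Invented example: a robust newborn with slightly reduced tone and color.
healthy = {"heart_rate": 2, "respiration": 2, "reflex": 2,
           "muscle_tone": 1, "color": 1}
print(apgar_score(healthy), interpret(apgar_score(healthy)))  # → 8 likely in good shape
```

The point of the rule is exactly this mechanical quality: any delivery room can apply it, with no judgment call beyond rating each sign.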
Atul Gawande’s recent A Checklist Manifesto provides many other examples of the virtues of checklists and simple rules.
The tasks at which they fail typically require long-term predictions about the patient’s future.
The statistical method, Meehl wrote, was criticized by experienced clinicians as “mechanical, atomistic, additive, cut and dried, artificial, unreal, arbitrary, incomplete, dead, pedantic, fractionated, trivial, forced, static, superficial, rigid, sterile, academic, pseudoscientific and blind.”
The clinical method, on the other hand, was lauded by its proponents as “dynamic, global, meaningful, holistic, subtle, sympathetic, configural, patterned, organized, rich, deep, genuine, sensitive, sophisticated, real, l...
Their rational argument is compelling, but it runs against a stubborn psychological reality: for most people, the cause of a mistake matters.
The story of a child dying because an algorithm made a mistake is more poignant than the story of the same tragedy occurring as a result of human error, and the difference in emotional inte...
Looking for books or music we might enjoy, we appreciate recommendations generated by software. We take it for granted that decisions about credit limits are made without the direct intervention of any human judgment. We are increasingly exposed to guidelines that have the form of simple algorithms, such as the ratio of good and bad cholesterol levels we should strive to attain.
The expanding list of tasks that are assigned to algorithms should eventually reduce the discomfort that most people feel when they first encounter the pattern of results that Meehl described in his disturbing little book.
The goal was to assign the recruit a score of general fitness for combat and to find the best match of his personality among various branches: infantry, artillery, armor, and so on.
I was instructed to design an interview that would be more useful but would not take more time. I was also told to try out the new interview and to evaluate its accuracy.
I concluded that the then-current interview had failed at least in part because it allowed the interviewers to do what they found most interesting, which was to learn about the dynamics of the interviewee’s mental life.
Meehl’s book suggested that such evaluations should not be trusted and that statistical summaries of separately evaluated attributes would achieve higher validity.
I made up a list of six characteristics that appeared relevant to performance in a combat unit, including “responsibility,” “sociability,” and “masculine pride.” I then composed, for each trait, a series of factual questions about the individual’s life before his enlistment,
The idea was to evaluate as objectively as possible how well the recruit had done on each dimension.
By focusing on standardized, factual questions, I hoped to combat the halo effect, where favorable first impressions influence later judgments.
As a further precaution against halos, I instructed the interviewers to go through the six traits in a fixed sequence, rating each trait on a five...
Their only task was to elicit relevant facts about his past and to use that information to score each personality dimension.
Several hundred interviews were conducted by this new method, and a few months later we collected evaluations of the soldiers’ performance from the commanding officers of the units to which they had been assigned. The results made us happy.
I learned from this finding a lesson that I have never forgotten: intuition adds value even in the justly derided selection interview, but only after a disciplined collection of objective information and disciplined scoring of separate traits.
First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Don’t overdo it—six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions.
Make a list of those questions for each trait and think about how you will score it, say on a 1–5 scale.
To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one.
Because you are in charge of the final decision, you should not do a “close your eyes.” Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better—try to resist your wish to invent broken legs to change the ranking.
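The hiring procedure above can be sketched mechanically. The six-trait structure, the 1–5 scale, scoring one trait at a time, and hiring the highest total come from the text; the specific trait names beyond those the passage mentions, and all candidate data, are invented for illustration.

```python
# Sketch of the structured-interview rule: rate ~6 traits on a 1-5
# scale, one trait at a time, sum them, and hire the highest total.
# Trait list and candidate ratings are illustrative assumptions.

TRAITS = ["technical proficiency", "engaging personality", "reliability",
          "responsibility", "sociability", "communication"]

def total_score(ratings):
    """Sum the per-trait ratings; each must be on the 1-5 scale."""
    for trait in TRAITS:
        if not 1 <= ratings[trait] <= 5:
            raise ValueError(f"{trait!r} must be rated 1-5")
    return sum(ratings[trait] for trait in TRAITS)

def pick_candidate(candidates):
    """Mechanically select the highest total score -- no overriding
    the ranking with a global impression of whom you like better."""
    return max(candidates, key=lambda name: total_score(candidates[name]))

candidates = {
    "A": dict.fromkeys(TRAITS, 4),                   # solid across the board
    "B": {**dict.fromkeys(TRAITS, 5),
          "reliability": 1, "responsibility": 2},    # charming but flaky
}
print(pick_candidate(candidates))  # → A
```

Candidate B is the one an unstructured interviewer would probably like better; the summed scores pick A, which is exactly the discipline the procedure enforces.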
In search of another way to deal with disagreements, I have engaged in a few “adversarial collaborations,” in which scholars who disagree on the science agree to write a jointly authored paper on their differences, and sometimes conduct research together. In especially tense situations, the research is moderated by an arbiter.
They call themselves students of Naturalistic Decision Making, or NDM, and mostly work in organizations where they often study how experts work.
They criticize this model as overly concerned with failures and driven by artificial experiments rather than by the study of real people doing things that matter. They are deeply skeptical about the value of using rigid algorithms to replace human judgment, and Paul Meehl is not among their heroes. Gary Klein has eloquently articulated this position over many years.
paper he wrote in the 1970s, and was impressed by his book Sources of Power, much of which analyzes how experienced professionals develop intuitive skills.
When can you trust an experienced professional who claims to have an intuition?
Gladwell’s book opens with the memorable story of art experts faced with an object that is described as a magnificent example of a kouros, a sculpture of a striding boy.
They felt in their gut that the statue was a fake but were not able to articulate what it was about it that made them uneasy.
The experts agreed that they knew the sculpture was a fake without knowing how they knew—the very definition of intuition.
In a later chapter he describes a massive failure of intuition: Americans elected President Harding, whose only qualification for the position was that he looked the part perfectly. Square-jawed and tall, he was the perfect image of a strong and decisive leader.
An intuitive prediction of how Harding would perform as president arose from substituting one question for another.
The early experiences that shaped Klein’s views of intuition were starkly different from mine. My thinking was formed by observing the illusion of validity in myself and by reading Paul Meehl’s demonstrations of the inferiority of clinical prediction.
Klein’s views were shaped by his early studies of fireground commanders (the leader...
The initial hypothesis was that commanders would restrict their analysis to only a pair of options, but that hypothesis proved to be incorrect.
In fact, the commanders usually generated only a single option, and that was all they needed.
If the course of action they were considering seemed appropriate, they would implement it.
If it had shortcomings, they would modify it.
If they could not easily modify it, they would turn to the next most plausible option and run through the same procedure until an a...
Klein elaborated this description into a theory of decision making that he called the recognition...
The process involves both System 1 ...

