Kindle Notes & Highlights
Data are necessary to convince people, but stories are what they carry with them. Even in an era of Big Data, stories have a unique potential for conveying understanding.
Instead of trying to show how people do not measure up to ideal strategies for performing tasks, we have been motivated by curiosity about how people do so well under difficult conditions.
This is the core difference between earlier decision-making research and this investigation of naturalistic decision making. Game theory, for example, belongs to the same rational group. It is also the core difference between the sensemaking and decision-making perspectives.
The conventional sources of power include deductive logical thinking, analysis of probabilities, and statistical methods. Yet the sources of power that are needed in natural settings are usually not analytical at all—the power of intuition, mental simulation, metaphor, and storytelling.
By focusing on the nonroutine cases, we were asking them about the most interesting ones—the ones they come back to the station house and tell everybody else about. We were asking for their best stories, and they were happy to oblige.
We treated each critical incident as a story and made the interview flow around the storytelling of the commanders. This method enabled us to get at the context of their decision making. It also ensured their interest and participation, because they enjoyed relating their experiences.
People who are good at what they do relish the chance to explain it to an appreciative audience.
there are times for deliberating about options. Usually these are times when experience is inadequate and logical thinking is a substitute for recognizing a situation as typical.
The recognition-primed decision (RPD) model fuses two processes: the way decision makers size up the situation to recognize which course of action makes sense, and the way they evaluate that course of action by imagining it.
Furthermore, if we cannot trust someone to make a big judgment, such as which option is best, why would we trust all of the little judgments that go into the rational choice strategy?
An argument against using a more comprehensive and rigorous "rational" decision process. I wonder what this insight means for, e.g., promotion and hiring evaluations. Does it mean that novices using rigorous methods should not be trusted any more than they should be trusted with non-rigorous methods? I've seen some support for that in my experience: people don't know how to interpret a large number of small data points, and when a formulaic aggregate is formed from those data points, they are not able to notice when the results don't make sense or when some parts of the result should be discounted or emphasized.
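To make that aggregation failure concrete, here is a minimal sketch (the criteria, weights, and scores are hypothetical, not from any real rubric): a weighted average can rank a candidate highly even when a single low data point should dominate the judgment.

```python
# Hypothetical hiring rubric; criteria, weights, and scores are invented for illustration.
WEIGHTS = {"coding": 0.4, "design": 0.3, "communication": 0.2, "integrity": 0.1}

def aggregate(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores on a 0-5 scale."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

candidate = {"coding": 5, "design": 5, "communication": 4, "integrity": 1}
print(aggregate(candidate))  # 4.4 -- the integrity red flag is averaged away
```

An experienced evaluator would discount the high aggregate here; a novice trusting the formula has no cue that one component should veto the rest.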
If we can present many situations an hour, several hours a day, for days or weeks, we should be able to improve the trainee’s ability to detect familiar patterns. The design of the scenarios is critical, since the goal is to show many common cases to facilitate a recognition of typicality along with different types of rare cases so trainees will be prepared for these as well.
This supports the suggestion from Accelerated Expertise to build time-compressed cases and train by working through them.
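A minimal sketch of how such a scenario schedule might be generated (the case pools and the typical/rare ratio are my assumptions, not from the book): mostly common cases to build a sense of typicality, with rare cases mixed in so trainees are prepared for those as well.

```python
import random

# Hypothetical case pools; a real program would draw on a curated incident library.
TYPICAL_CASES = ["kitchen fire", "car fire", "dumpster fire"]
RARE_CASES = ["basement fire with hidden extension", "collapse-risk structure fire"]

def build_schedule(n_cases: int, rare_fraction: float = 0.2, seed: int = 0) -> list[str]:
    """Sample a training schedule: mostly typical cases, a minority of rare ones."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_cases):
        pool = RARE_CASES if rng.random() < rare_fraction else TYPICAL_CASES
        schedule.append(rng.choice(pool))
    return schedule

print(build_schedule(10))
```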
By imagining the option being carried out, they can spot weaknesses and find ways to avoid these, thereby making the option stronger. Conventional models just select the best, without seeing how it can be improved.
An important part of the simulation is the iterative improvement as you repeatedly play through it.
The emphasis is on being poised to act rather than being paralyzed until all the evaluations have been completed.
Commanders often check to see if they might be wrong rather than showing confirmation bias (seeking only information that supports their beliefs). Laboratory studies often find that naive subjects show confirmation bias, but Shanteau (1992) has found that experienced decision makers do not fall prey to confirmation bias. Rather, they search for evidence that would be incompatible with their interpretations.
Mental simulation about the past can be used to explain the present. It can also be used for predicting the future from the present.
Of course, there are ways of avoiding the constraints. If we have a lot of familiarity in the area, we can chunk several transitions into one unit. In addition, we can save memory space by treating a sequence of steps as one unit rather than representing all the steps. We can use our expertise to find the right level of abstraction.
This grouping is a common way in which expertise shows. Here it explains why experts are able to evaluate options more quickly: for a novice chess player, a simulation proceeds in individual moves; for an expert, it is more about the shape of the board and abstract concepts such as strengths and weaknesses.
This is important for understanding that experience doesn't only allow the expert to think of a good solution. Experience is also crucial for evaluating that solution by simulating how it would play out.
without a sufficient amount of expertise and background knowledge, it may be difficult or impossible to build a mental simulation.
Mental simulation takes effort. Using it is different from looking at a situation and knowing what is happening. Mental simulation is needed when you are not sure what is happening so you have to puzzle it out.
If we are trying to repair a piece of equipment and keep testing it to find out what is wrong, we have a lot of trouble if more than one thing is broken. Once we find one fault, we may be tempted to attribute all the symptoms to it and miss the other fault, so we fix just the problem we know about, and the result is that the equipment still does not work.
I've experienced this many times when digging through logs and finding something that is off. It's easy to attribute a bigger impact to the finding than is really warranted.
We will not be motivated to assemble an alternate simulation until there is too much to be explained away. The strategy makes sense. The problem is that we lose track of how much contrary evidence we have explained away so the usual alarms do not go off. This has also been called the garden path fallacy: taking one step that seems very straightforward, and then another, and each step makes so much sense that you do not notice how far you are getting from the main road.
I think this is what the knowledge shields mentioned in Accelerated Expertise refer to: the tendency to stick to a theory and explain away conflicting evidence. One or two coincidences may be feasible to accommodate, but it's easy to pile on more until the entire explanation becomes very unlikely.
You can ask them to review the plan for flaws, but such an inspection may be halfhearted since the planners really want to believe that the plan lacks flaws. We devised an exercise to take them out of the perspective of defending their plan and shielding themselves from flaws. We tried to give them a perspective where they would be actively searching for flaws in their own plan. This is the premortem exercise: the use of mental simulation to find the flaws in a plan.
Interesting. I've used the technique. Did it really originate from this work? It's a clever way to drop the knowledge shields that would normally explain away conflicting data. It can be applied to simulations of the future, as the name premortem implies, but could also be used for simulations that try to explain the past.
If we continue with our scenario, that the Vincennes had not fired and had been attacked by an F-14, the decision researchers would have still claimed that it was a clear case of bias, except this time the bias would have been to ignore the base rates, to ignore the expectancies. No one can win. If you act on expectancies and you are wrong, you are guilty of expectancy bias. If you ignore expectancies and are wrong, you are guilty of ignoring base rates and expectancies. This means that the decision bias approach explains too much (Klein, 1989). If an appeal to decision bias can explain …
Mental simulation shows up in at least three places in the RPD model: diagnosing to form situation awareness, generating expectancies to help verify situation awareness, and evaluating a course of action.
The middle one is the least intuitive: "If my understanding is correct, then we should see X next." This helps validate understanding and is common in time-critical situations. It may not be communicated often in those situations, though, which is why the first C in the STICC model is a great reminder to communicate this information.
Pennington and Hastie call this a story model, because the reasoning strategy is to build and evaluate different stories about why people acted in the ways they did.
Hypothetical stories are a form of simulation. This relates to the benefit of storytelling in incident reviews and to our tendency to create narratives for our own actions so that they make sense, i.e. to rationalize. The latter relates more closely.
The RPD model describes how people can make decisions without comparing options, but the RPD model does not describe the only strategy people use in naturalistic settings. Even with time pressure, there will be times when you may need to compare different options.
There are times to use comparative strategies and times to use singular evaluation strategies such as the one described by the RPD model.
The characteristics listed above suggest that comparative evaluation may be reasonable for hiring, as suggested in What Works. There's a need for justification, and there is time for more computational complexity than in the split-second decisions required in incident response, for example.
Could potentially be considered support for the article on singular evaluation -> comparative evaluation -> continuous comparative evaluation against prior data.
But even when decision makers are comparing options and trying to find the best one, they may not be using rational choice strategies such as assessing each option on a common set of criteria. The process may be more like running a mental simulation of each course of action and comparing the emotional reactions—the discomfort or worry or enthusiasm—that each option produces when it is imagined.
we will be more likely to compare options when faced with unfamiliar situations.
For a detailed discussion of the applications of the NDM framework, Flin (1996) provides an excellent survey of the implications of NDM research for decision support system design, training, and personnel selection for critical incident managers such as police officers, firefighters, and offshore oil installation managers.
the recognitional strategies that take advantage of experience are generally successful, not as a substitute for the analytical methods, but as an improvement on them. The analytical methods are not the ideal; they are the fallback for those without enough experience to know what to do.
Means, Salas, Crandall, and Jacobs (1993) reviewed the literature on the effectiveness of such analytical decision training and found that the results have been disappointing. Johnson, Driskell, and Salas (1997) have collected data showing that subjects did better using unsystematic strategies than analytical strategies for identifying and comparing options.
The standard advice of listing pros and cons and doing a thorough evaluation might be good as a reminder to collect data, but not for making the actual decision.
When options are very close together in value, we can call this a zone of indifference: the closer together the advantages and disadvantages of competing options, the harder it will be to make a decision but the less it will matter. For these situations, it is probably a waste of time to try to make the best decision. If we can sense that we are within this zone of indifference, we should make the choice any way we can and move on to other matters.
Important to keep in mind when working with limited data; it may not be worth the extra effort of further evaluation. This is a reason why the "time for thinking and time for doing" model from Leadership Is Language is both effective and important to keep in mind. Also relates to disagree and commit.
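A minimal sketch of the indifference rule as a decision procedure (the scores and the epsilon threshold are hypothetical, purely illustrative): if the runners-up score within epsilon of the leader, stop deliberating, pick one, and move on.

```python
# Hypothetical option scores on an arbitrary 0-1 scale; epsilon is a judgment call.
def choose(options: dict[str, float], epsilon: float = 0.05) -> str:
    """Pick the top option; if others are within epsilon, don't deliberate further."""
    ranked = sorted(options.items(), key=lambda kv: kv[1], reverse=True)
    best_name, best_score = ranked[0]
    contenders = [name for name, score in ranked if best_score - score <= epsilon]
    if len(contenders) > 1:
        # Zone of indifference: the comparison is hard, but the outcome barely matters.
        print(f"indifference zone {contenders}; choosing one and moving on")
    return best_name

print(choose({"vendor A": 0.81, "vendor B": 0.79, "vendor C": 0.55}))
```

The design point is the early exit: once you detect you are inside the zone, further analysis is wasted effort.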