Kindle Notes & Highlights
The analogy also reflects the interactions between causal factors, interactions we cannot specify, so it lets us make a prediction that reflects factors whose existence we do not know and whose properties we do not know. That is the power of analogical reasoning.
By using analogues, we are tapping into the same source of power for stories. We are applying an informal experiment, using a prior case with a known outcome and a semi-known set of causes to make predictions about a new case.
the logic of reasoning by analogy is similar to the logic of an experiment: to draw a conclusion without having to know all of the important factors operating.
Analogical predictions are most helpful when there is a good database but not enough information to apply more rigorous analyses.
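The idea of predicting a new case from a prior case with a known outcome can be sketched as a nearest-analogue lookup. This is only an illustration of the logic, not anything from the book; the case database, feature names, and similarity measure are all invented:

```python
# Analogical prediction as nearest-analogue lookup over prior cases.
# Cases, features, and outcomes below are hypothetical examples.

def similarity(case_a, case_b):
    """Count how many feature values two cases share."""
    return sum(1 for k in case_a if k in case_b and case_a[k] == case_b[k])

def predict_by_analogy(new_case, prior_cases):
    """Return the outcome of the most similar prior case."""
    best = max(prior_cases, key=lambda c: similarity(new_case, c["features"]))
    return best["outcome"], best

prior_cases = [
    {"features": {"terrain": "urban", "season": "summer"}, "outcome": "slow progress"},
    {"features": {"terrain": "open", "season": "winter"}, "outcome": "rapid advance"},
]

outcome, analogue = predict_by_analogy(
    {"terrain": "urban", "season": "winter"}, prior_cases
)
print(outcome)  # the analogue's known outcome stands in for unknown causal factors
```

The point of the sketch is that the analogue's outcome carries the influence of causal factors we never enumerated, which is exactly what the highlight describes.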
We assume that living in a shared culture will provide us a basis of common referents. If we have to labor at breaking out all of the assumptions behind requests, teamwork and cooperation would become almost impossible.
If we can work with people who understand the culture, the task, and what we are trying to accomplish, then we can trust them to read our minds and fill in unspecified details. A team that has much experience working together can outperform a newly assembled team.
When you communicate intent, you are letting the other team members operate more independently and improvise as necessary. You are giving them a basis for reading your mind more accurately.
The seven types of information in a statement of intent:
1. The purpose of the task (the higher-level goals).
2. The objective of the task (an image of the desired outcome).
3. The sequence of steps in the plan.
4. The rationale for the plan.
5. The key decisions that may have to be made.
6. Antigoals (unwanted outcomes).
7. Constraints and other considerations.
Not all seven types of information are always necessary. Instead, the list can be used as a checklist, to determine whether there are any more details to add.
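Treated as a checklist, the seven intent elements lend themselves to a trivial completeness check. A minimal sketch, where the field names and the example briefing are my own illustration, not from the book:

```python
# The seven intent elements as a simple completeness checklist.
# Field names and the example briefing are illustrative.

INTENT_ELEMENTS = [
    "purpose",        # higher-level goals
    "objective",      # image of the desired outcome
    "sequence",       # steps in the plan
    "rationale",      # why this plan
    "key_decisions",  # decisions that may have to be made
    "antigoals",      # unwanted outcomes
    "constraints",    # constraints and other considerations
]

def missing_elements(briefing):
    """Return which of the seven elements a briefing leaves unspecified."""
    return [e for e in INTENT_ELEMENTS if not briefing.get(e)]

briefing = {
    "purpose": "restore service for affected customers",
    "objective": "error rate back under 0.1% by 18:00",
    "antigoals": "do not take the payments system offline",
}
print(missing_elements(briefing))
# -> ['sequence', 'rationale', 'key_decisions', 'constraints']
```

As the highlight says, a gap in the checklist is a prompt, not a defect: some elements may be deliberately omitted.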
When your goals are ill defined (you are not clear what outcome you seek), you might give more attention to the key decision points because your image of the outcome might change depending on how the plan goes.
In teams that work together well, the people carrying out the assignments seem to be able to imagine what the leaders and planners were thinking. The people in the field can carry out the plan without needing continual direction from the higher authorities. The better they can do this, the more decisively they can act. It also helps to have common experience, to have worked with the leader enough to anticipate his or her reactions.
To improve the ability to communicate intent, we cannot try to teach a checklist or set of procedures. A more valuable approach is to set up exercises to provide feedback to leaders about how well their intent is understood.
Something like the STICC model alone is not sufficient. Practice, experience, and feedback are required.
We arrange for team leaders to describe their intent. Then the person running the exercise identifies an unexpected event that might occur. The leader writes down how he or she expects the subordinates to react, and at the same time the subordinates write down how they think they are supposed to react. Then we compare notes.
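The note-comparing step of this exercise can be sketched as a per-event comparison of the leader's expected reactions against each subordinate's intended reaction. The event names and responses are made up for illustration:

```python
# Scoring the intent exercise: compare the leader's expectation for each
# unexpected event against what the subordinates wrote down.
# Names, events, and responses are invented examples.

def compare_notes(leader_expectations, subordinate_responses):
    """Return, per event, which subordinates diverged from the leader's expectation."""
    divergences = {}
    for event, expected in leader_expectations.items():
        responses = subordinate_responses.get(event, {})
        diverging = [who for who, r in responses.items() if r != expected]
        if diverging:
            divergences[event] = diverging
    return divergences

leader = {"bridge is out": "reroute via the northern road"}
team = {"bridge is out": {"alice": "reroute via the northern road",
                          "bob": "wait for new orders"}}
print(compare_notes(leader, team))  # -> {'bridge is out': ['bob']}
```

The divergences are the feedback the exercise is designed to produce: places where the leader's intent was not understood as intended.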
there is no difficulty in studying the consciousness of a team; you just watch and listen. The matters the team talks about, even the gestures that team members make to each other, are the contents of the team’s consciousness. We can call this the collective consciousness of the team.
What about shared mental models? I'd also call those part of the shared consciousness, but they are not observable.
Experienced teams have integrated identities; the members identify themselves in relationship to the whole team. Inexperienced teams have fragmentary identities and focus on individual assignments more than team requirements.6
The ability to manage the flow of ideas is one of the central skills that distinguishes immature from experienced teams. Members of an immature team may struggle to come up with any ideas, and often the ideas take the team in all sorts of directions—some useful, some irrelevant. Time winds down, and too quickly the team has to make sense of everything that was said. The team members are so excited to tell everything they know that they do not pay attention to whether their comments fit into their task. Experienced teams are more careful; they try to build on the comments of others and create
...more
they realize their problem is that they are fighting fires, yet their job is to put fires out. And they simply aren’t putting any fires out. They decide to stop fighting every fire in the state. They list all the fires, and select the one that will be easiest to put out with available resources. Then they move to the next easiest fire, and so on.
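The triage described here is a greedy strategy: sort the fires by the effort needed to put each one out, then extinguish the easiest first within the available resources. A minimal sketch with invented numbers:

```python
# Greedy fire triage: extinguish cheapest-first within the resource budget.
# Fire names and costs are invented for illustration.

def triage(fires, resources):
    """Return the fires actually put out, cheapest-first, within the budget."""
    extinguished = []
    for name, cost in sorted(fires.items(), key=lambda item: item[1]):
        if cost <= resources:
            resources -= cost
            extinguished.append(name)
    return extinguished

fires = {"ridge": 40, "valley": 10, "canyon": 25, "summit": 60}
print(triage(fires, resources=50))  # -> ['valley', 'canyon'], 35 units spent
```

The shift in the story is exactly this change of objective: from "fight every fire" to "maximize the number of fires actually put out".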
Research in neurophysiology has shown that individuals can have the delusion that they are controlling their own thinking when this is not the case.
If we try to predefine the basic elements, we must either work with an artificial or narrow task, or run the risk of distorting the situation to make it fit into the so-called basic elements.
Relevant to, e.g., the Career Framework: what I've tried to capture as the descriptive, not prescriptive, bit. Or that models used for control limit the complexity of the real thing.
Poor outcomes are different from poor decisions. The best decision possible given the knowledge available can still turn out unhappily.
improved data collection will likely transform into faster decision cycles. By way of analogy, when radar was introduced into commercial shipping, it was for the intent of improving safety so that ships could avoid collisions when visibility was poor. The actual impact was that ships increased their speed, and accident rates stayed constant. On the decision front, we expect to find the same thing. Planning cycles will be expedited, and the plans will be made with the same level of uncertainty as there was before.
This sounds similar to, or related to, the economic effect of efficiencies increasing demand. There was some example with steel production, I think, where a new efficiency was predicted to lower the price, but it increased demand, which in turn increased the price, or something like that.
Highly successful commanders seem to appreciate the vagaries of chance and do not waste time worrying about details that will not matter. The inference we draw is that although uncertainty is and will be inevitable, it is possible to maintain effective decision making in the face of it.
Often we believe that we can improve the decision by collecting more information, but in the process we lose opportunities. Skilled decision makers appear to know when to wait and when to act. Most important, they accept the need to act despite uncertainty.
If managers find themselves having success—getting projects completed on schedule and under budget—does that success stem from their own skills, the skills of their subordinates, temporary good luck, interventions of higher-level administrators, a blend of these factors, or some other causes altogether? There is no easy way to tell. We can learn the wrong lessons from experience. Each time we compile a story about an experience, we run the risk of getting it wrong and stamping in the wrong strategy.
I think the Getting The Most Out of People book suggests making similar interventions in different situations at different times to gain some degree of confidence in assessing the outcomes. Taking an experimental approach can be helpful in general but can also be prone to confirmation bias.
When politicians ask to be reelected because of their experience, they are referring to the efficiency with which they do their job, not their growing wisdom in judging which laws to propose and support.
The feedback loop for networking and day-to-day politics is quick, so experience leads to expertise. Governance in general has terrible feedback loops, so experience cannot be guaranteed to lead to expertise.
In a domain such as fighting fires, caring for hospitalized infants, or flying an airplane, expertise can be gained. In other domains, such as selecting stocks, making public policy, or raising a child, the time delays are long and the feedback is uncertain. Jim Shanteau (1992) has suggested that we will not build up real expertise when:
- The domain is dynamic.
- We have to predict human behavior.
- We have less chance for feedback.
- The task does not have enough repetition to build a sense of typicality.
- We have fewer trials.
Under these conditions, we should be cautious about assuming that
...more
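Shanteau's conditions read naturally as a warning checklist: the more of them that hold in a domain, the less we should expect experience to produce real expertise. A rough sketch; the scoring and the example domains are my own illustration, not from the book:

```python
# A warning checklist based on Shanteau's conditions: count how many
# expertise blockers apply to a domain. Example domains are illustrative.

EXPERTISE_BLOCKERS = [
    "dynamic domain",
    "must predict human behavior",
    "little chance for feedback",
    "too little repetition for a sense of typicality",
    "few trials",
]

def expertise_warning_count(domain_conditions):
    """Count how many of the blockers apply to a domain."""
    return sum(1 for b in EXPERTISE_BLOCKERS if domain_conditions.get(b, False))

firefighting = {"dynamic domain": True}
stock_picking = {b: True for b in EXPERTISE_BLOCKERS}
print(expertise_warning_count(firefighting), expertise_warning_count(stock_picking))
# -> 1 5
```

A high count does not prove expertise is impossible, only that experience alone is a weak teacher there.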
Lia Di Bello has studied the way people in organizations learn different kinds of complex skills.7 She found that she could distinguish competent workers, who had mastered the routines, from the real experts. If she gave people a task that violated the rules they had been using, the experts would quickly notice the violation and find a way to work around it. They could improvise to achieve the desired goal.
Relates to a process description being just a model, and as such, limited. Processes are generally created by experts but cannot capture the entirety of the expertise.
The de minimus error may arise from using mental simulation to explain away cues that are early warnings of a problem. One exercise to correct this tendency is to use the crystal ball technique discussed in chapter 5. The idea is that you can look at the situation, pretend that a crystal ball has shown that your explanation is wrong, and try to come up with a different explanation. Each time you stretch for a new explanation, you are likely to consider more factors, more nuances. This should reduce fixation on a single explanation. The crystal ball method is not well suited for time-pressured
...more
Since defenses in depth do not seem to work, Rasmussen suggests a different approach: instead of erecting defenses, accept malfunctions and errors, and make their existence more visible. We can try to design better human-system interfaces that let the system operators quickly notice that something is going wrong and form diagnoses and reactions. Instead of trusting the systems (and, by extension, the cleverness of the design engineers) we can trust the competence of the operators and make sure they have the tools to maintain situation awareness throughout the incident.
Experience counts. This sounds so obvious that we should not have to waste time stating it. Yet most studies in decision making use subjects who are inexperienced in the task they are performing, and most advice assumes an inexperienced audience. The different sources of power covered in this book are ways of drawing on experience.
One of my goals for this book was to change the way you see the events around you, even if just for a short time. The book has presented many arguments. But arguments can be refuted. I want to have an impact on your perceptions.
I know I've started explicitly thinking of some decision situations as a matter of simulation plus an assessment of viability instead of a comparison of options. It's definitely had an impact.
Also relates to my observation about the most impactful books I've read: I may not like them initially, but they can still significantly affect the way I see the world.