The Field Guide to Understanding 'Human Error'
Read between August 8 - September 25, 2018
21%
This is the illusion of cause–consequence equivalence: we tend to believe in a fair world, where causes and effects are proportional.
59%
• The more complex a system, the more difficult it becomes to control. More complexity typically leads to a system that is difficult to see through—it is opaque, full of unexpected interactions and interconnections. These can play up in ways that human beings cannot keep up with.
• The more complex a system, the more difficult it becomes for people to even know whether they still have adequate control or not. With lots of parts and interdependencies responsible for ensuring safety, it may not be easy to see how some are not living up to expectations, or how some may have eroded.
80%
Monitoring of safety monitoring (or meta-monitoring). Does your organization or team invest in an awareness of the models of risk it embodies in its safety strategies and risk countermeasures? Is it interested in finding out how it may have been ill-calibrated all along, and does it acknowledge that it needs to monitor how it actually monitors safety? This matters if your organization or team wants to avoid stale coping mechanisms and misplaced confidence in how it regulates or checks safety, and does not want to miss new possible pathways to failure.
• Do not take past success as a guarantee of ...more
84%
If you are a manager or supervisor, you cannot expect your employees to be more committed to safety than you yourself are, or appear to be.
84%
Do not expect that you can hold people accountable for their errors if you did not give them enough authority to live up to the responsibility you expect of them.
90%
Learning from the past is a matter of abstracting away from its contextual mess, ambiguities and indeterminacies, to begin to see conceptual regularities. These we can take into the future. Automation surprises, error-intolerance: such concepts allow us to explain, to predict, to help guide design, and perhaps to prevent.