Kindle Notes & Highlights
Black Box Thinking by Matthew Syed
Read between July 19 and September 2, 2022
a failure to learn from mistakes has been one of the single greatest obstacles to human progress.
It is partly because we are so willing to blame others for their mistakes that we are so keen to conceal our own. We anticipate, with remarkable clarity, how people will react, how they will point the finger, how little time they will take to put themselves in the tough, high-pressure situation in which the error occurred. The net effect is simple: it obliterates openness and spawns cover-ups. It destroys the vital information we need in order to learn.
A closed loop is where failure doesn’t lead to progress because information on errors and weaknesses is misinterpreted or ignored; an open loop does lead to progress because the feedback is rationally acted upon.
‘Learn from the mistakes of others. You can’t live long enough to make them all yourself.’
The problem was not a lack of diligence or motivation, but a system insensitive to the limitations of human psychology.
That is one of the ways that closed loops perpetuate: when people don’t interrogate errors, they sometimes don’t even know they have made one (even if they suspect they may have).
“human errors” often emerge from poorly designed systems.
It is about creating systems and cultures that enable organisations to learn from errors, rather than being threatened by them.
In effect, practice is about harnessing the benefits of learning from failure while reducing its cost. It is better to fail in practice in preparation for the big stage than on the big stage itself. This is true of organisations, too, which conduct pilot schemes (and in the case of aviation and other safety critical industries test ideas in simulators) in order to learn, before rolling out new ideas or procedures. The more we can fail in practice, the more we can learn, enabling us to succeed when it really matters.
The observable bullet holes suggested that the area around the cockpit and tail didn’t need reinforcing because it was never hit. In fact, the planes that were hit in these places were crashing because this is where they were most vulnerable.
This is the paradox of success: it is built upon failure.
Everything we know in aviation, every rule in the rule book, every procedure we have, we know because someone somewhere died . . . We have purchased at great cost, lessons literally bought with blood that we have to preserve as institutional knowledge and pass on to succeeding generations. We cannot have the moral failure of forgetting these lessons and have to relearn them.
With pseudosciences the problem is more structural. They have been designed, wittingly or otherwise, to make failure impossible. That is why, to their adherents, they are so mesmerising. They are compatible with everything that happens. But that also means they cannot learn from anything.
Knowledge does not progress merely by gathering confirmatory data, but by looking for contradictory data.
Feedback, when delayed, is considerably less effective in improving intuitive judgement.
This success is not a one-off or a fluke, it is a method.
learning from mistakes has two components. The first is a system.
Mechanisms designed to learn from mistakes are impotent in many contexts if people won’t admit to them. It was only when the mindset of the organisation changed that the system started to deliver amazing results.
One particular problem in healthcare is not just the capacity to learn from mistakes, but also that even when mistakes are detected, the learning opportunities do not flow throughout the system. This is sometimes called the ‘adoption rate’.
When the probability of error is high, learning from mistakes is more essential, not less.
Systems that do not engage with failure struggle to learn.
When Prophecy Fails, they simply redefined the failure.
When we are confronted with evidence that challenges our deeply held beliefs we are more likely to reframe the evidence than we are to alter our beliefs. We simply invent new reasons, new justifications, new explanations. Sometimes we ignore the evidence altogether.
‘Cognitive dissonance’ is the term Festinger coined to describe the inner tension we feel when, among other things, our beliefs are challenged by evidence.
Lying to oneself destroys the very possibility of learning.
It hints at the suspicion that the intellectual energy of some of the world’s most formidable thinkers is directed, not at creating new, richer, more explanatory theories, but at coming up with ever-more tortuous rationalisations as to why they were right all along.
The pattern is rarely uncovered unless subjects are willing to make mistakes – that is, to test numbers that violate their belief. Instead most people get stuck in a narrow and wrong hypothesis, as often happens in real life, such that their only way out is to make a mistake that turns out not to be a mistake after all. Sometimes, committing errors is not just the fastest way to the correct answer; it’s the only way.
So far in the book, we have seen that learning from mistakes relies on two components: first, you need to have the right kind of system – one that harnesses errors as a means of driving progress; and second, you need a mindset that enables such a system to flourish.
Cognitive dissonance occurs when mistakes are too threatening to admit to, so they are reframed or ignored. This can be thought of as the internal fear of failure: how we struggle to admit mistakes to ourselves.
an adaptive process driven by the detection and response to failure.
the dangers of ‘perfectionism’: of trying to get things right first time.
The desire for perfection rests upon two fallacies. The first resides in the miscalculation that you can create the optimal solution sitting in a bedroom or ivory tower and thinking things through rather than getting out into the real world and testing assumptions, thus finding their flaws. It is the problem of valuing top-down over bottom-up. The second fallacy is the fear of failure.
Take a policy as simple as reducing the dangers of smoking by cutting tar and nicotine in cigarettes. It sounds great in theory, particularly when used in conjunction with a clever marketing campaign. It looks like a ballistic strategy perfectly designed to hit an important public health target. But when this idea was implemented in practice, it failed. Smokers compensated for the lack of nicotine by smoking more cigarettes and taking longer and deeper drags. The net result was an increase in carcinogens and carbon monoxide.
Often, failure is clouded in ambiguity. What looks like success may really be failure and vice versa.
‘It is about marginal gains,’ he said. ‘The approach comes from the idea that if you break down a big goal into small parts, and then improve on each of them, you will deliver a huge increase when you put them all together.’
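(A minimal sketch of the compounding arithmetic behind marginal gains, assuming twenty hypothetical components each improved by 1%; the baseline lap time and component count are invented for illustration, not figures from the book.)

    # Hypothetical figures only: a 90-second baseline and twenty components,
    # each improved by 1%; the small gains compound multiplicatively.
    baseline = 90.0                      # hypothetical lap time in seconds
    gains = [0.01] * 20                  # twenty small 1% improvements
    factor = 1.0
    for g in gains:
        factor *= (1.0 - g)              # each improvement shaves off its fraction
    print(f"combined improvement: {1 - factor:.1%}")                  # ~18.2%
    print(f"lap time: {baseline:.1f}s -> {baseline * factor:.1f}s")   # ~73.6s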
‘When I first started in F1, we recorded eight channels of data. Now we have 16,000 from every single parameter on the car. And we derive another 50,000 channels from that data,’ said Paddy Lowe, a Cambridge-educated engineer, who is currently the technical leader of Mercedes F1. ‘Each channel provides information on a small aspect of performance. It takes us into the detail, but it also enables us to isolate key metrics that help us to improve.’
‘You improve your data set before you begin to improve your final function; what you are doing is ensuring that you have understood what you didn’t initially understand,’ Vowles says. ‘This is important because you must have the right information at the right time in order to deliver the right optimisation, which can further improve and guide the cycle.’
Success is about creating the most effective optimisation loop.
Creativity not guided by a feedback mechanism is little more than white noise. Success is a complex interplay between creativity and measurement, the two operating together, the two sides of the optimisation loop.
We need to run lots of trials, lots of replications, to tease out how far conclusions can be extended from one trial to other contexts.
epiphanies often happen when we are in one of two types of environment.
The first is when we are switching off: having a shower, going for a walk, sipping a cold beer, daydreaming.
We have to take a step back for the ‘associative state’ to emerge.
when we are being sparked by the dissent of others.
the breakthroughs happened at lab meetings, where groups of researchers would gather around a desk to talk through their work. Why here? Because they were forced to respond to challenges and critiques from their fellow researchers.
Creativity, then, has a dual aspect. Insight often requires taking a step back and seeing the big picture. It is about drawing together disparate ideas. It is the art of connection. But to make a creative insight work requires disciplined focus.
We learn not just by being correct, but also by being wrong. It is when we fail that we learn new things, push the boundaries, and become more creative. Nobody had a new insight by regurgitating information, however sophisticated.
Blame too much and people will clam up. Blame too little and they will become sloppy.
The Ford Motor Company, his third venture, changed the world. ‘Failure is simply the opportunity to begin again, this time more intelligently.’
The problem is when setbacks lead not to learning, but to recrimination and defeatism.