Kindle Notes & Highlights
by Matthew Syed
Read between January 8 and January 19, 2021
Memory, it turns out, is not as reliable as we think. We do not encode high-definition movies of our experiences and then access them at will. Rather, memory is a system dispersed throughout the brain and is subject to all sorts of biases.
We try to make the memory fit with what we now know rather than what we once saw.
One reform that could help to eliminate false confessions would be to make the videotaping of interrogations compulsory. This would undermine any incentive to bully or mislead suspects into confessions.
There is also the problem of the large number of trials where innocent defendants are put in the dock. The data suggest that the acquittal rate is high. That is often hailed as evidence that the justice system is rigorously acquitting the innocent, but it could also mean that millions of pounds are being wasted on unnecessary trials, with the real culprit still at large.
The key issue in all of this, however, is not to allow the perceived trade-offs between these objectives to obscure the deeper fact that progress can be made on each of them at the same time.
if the case was assessed by a judge just after he had eaten breakfast, the prisoner had a 65 percent chance of getting parole. But as time passed through the morning, and the judges got hungry, the chances of parole gradually diminished to zero. Only after the judges had taken a break to eat did the odds shoot back up to 65 percent, only to decrease back to zero over the course of the afternoon.
“There are no checks about the judges’ decisions because no one has ever documented this tendency before.”
It is illegal in the UK to even conduct a study on how juries go about their deliberations. The unstated rationale for this prohibition is that if the public find out how juries operate, they might lose confidence in the system. It is an “ignorance is bliss” approach. But this is as intellectually fraudulent as removing the black box from an airplane to ensure that people won’t ever find out about pilot error. The result is inevitable: the same mistakes will be made, over and over.
“When we think about miscarriages of justice, we often focus on the person who has been jailed for a crime he didn’t commit,” Steve Art, a New York lawyer, said.21 “But there are other consequences, too. When you convict the wrong person, the real criminal is left to roam the streets, committing crimes with sometimes devastating effects. It is yet another reason why we need to learn the lessons.”
Progress had been delivered not through a beautifully constructed master plan (there was no plan), but by rapid interaction with the world. A single, outstanding nozzle was discovered as a consequence of testing, and discarding, 449 failures.
learning from mistakes relies on two components: first, you need to have the right kind of system—one that harnesses errors as a means of driving progress; and second, you need a mindset that enables such a system to flourish.
What the development of the nozzle reveals, above all, is the power of testing. Even though the biologists knew nothing about the physics of phase transition, they were able to develop an efficient nozzle by trialing lots of different ones, rejecting those that didn’t work and then varying the best nozzle in each generation.
the rejected nozzles helped to drive the progression of the design. They all share an essential pattern: an adaptive process driven by the detection and response to failure.
Evolution as a process is powerful because of its cumulative nature.
Cumulative selection works, then, if there is some form of “memory”: i.e., if the results of one selection test are fed into the next, and into the next, and so on. This process is so powerful that, in the natural world, it confers what has been called “the illusion of design”: animals that look as if they were designed by a vast intelligence when they were, in fact, created by a blind process. An echo of this illusion can be seen in the nozzle example. The final shape is so uniquely suited to creating fine-grained detergent that it invites the thought that a master designer must have been at work.
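A minimal sketch of the cumulative-selection idea in code (the scoring function, parameters, and numbers are invented for illustration, not taken from the Unilever story): each generation varies the current best design, tests the variants, and carries the winner forward, which is the “memory” that makes selection cumulative.

```python
import random

def nozzle_performance(design):
    # Stand-in scoring function: in the real story this was a physical
    # test of how well each nozzle atomized detergent. Here we simply
    # score how close a list of parameters is to an arbitrary "ideal"
    # shape, purely for illustration.
    ideal = [0.2, 0.8, 0.5, 0.9, 0.1]
    return -sum((d - i) ** 2 for d, i in zip(design, ideal))

def mutate(design, step=0.05):
    # Produce a slightly varied copy of the current best design.
    return [d + random.uniform(-step, step) for d in design]

def cumulative_selection(generations=450, offspring=10):
    best = [random.random() for _ in range(5)]  # arbitrary starting nozzle
    for _ in range(generations):
        candidates = [mutate(best) for _ in range(offspring)]
        # Keep whichever variant (or the incumbent) performs best: the
        # result of one selection test is fed into the next.
        best = max(candidates + [best], key=nozzle_performance)
    return best

if __name__ == "__main__":
    print(cumulative_selection())
```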
markets work not in spite of the many business failures that occur, but because of them.
Take the first steam engine for pumping water. This was built by Thomas Newcomen, a barely literate, provincial ironmonger and Baptist lay preacher, and developed further by James Watt. The understanding of both men was intuitive and practical. But the success of the engine raised a deep question: why does this incredible device actually work (it seemed to break the laws of physics as they were then understood)? This question inspired Nicolas Léonard Sadi Carnot, a French physicist, to develop the laws of thermodynamics. Trial and error inspired the technology, which in turn inspired the theory. This is the linear model in reverse.
Theoretical change is itself driven by a feedback mechanism, as we noted in chapter 3: science learns from failure. But when a theory fails, as when the Unilever mathematicians failed in their attempt to create an efficient nozzle design, it takes time to come up with a new, all-encompassing theory. To gain practical knowledge, however, you just need to try a different-sized aperture. Tinkering, tweaking, learning from practical mistakes: all have speed on their side. Theoretical leaps, while prodigious, are far less frequent.
technological progress is a complex interplay between theoretical and practical knowledge, each informing the other in an upward spiral*. But we often neglect the messy, iterative, bottom-up aspect of this change because it is easy to regard the world, so to speak, in a top-down way. We try to comprehend it from above rather than discovering it from below.
there is a profound obstacle to testing, a barrier that prevents many of us from harnessing the upsides of the evolutionary process.
we are hardwired to think that the world is simpler than it really is. And if the world is simple, why bother to conduct tests? If we already have the answers, why would we feel inclined to challenge them?
narrative fallacy.
our propensity to create stories about what we see after the event.
That is the power of the narrative fallacy. We are so eager to impose patterns upon what we see, so hardwired to provide explanations that we are capable of “explaining” opposite outcomes with the same cause without noticing the inconsistency.
In aviation there is a profound respect for complexity. Pilots and system experts are deeply aware that they are dealing with a world they do not fully understand, and never will. They regard failures as an inevitable consequence of the mismatch between the complexity of the system and their capacity to understand it. This reduces the dissonance of mistakes, increases the motivation to test assumptions in simulators and elsewhere, and makes it “safe” for people to speak up when they spot issues of concern. The entire system is about preventing failure, about doing everything possible to stop ...
the dangers of “perfectionism”: of trying to get things right the first time.
The desire for perfection rests upon two fallacies. The first resides in the miscalculation that you can create the optimal solution sitting in a bedroom or ivory tower and thinking things through rather than getting out into the real world and testing assumptions, thus finding their flaws. It is the problem of valuing top-down over bottom-up. The second fallacy is the fear of failure. Earlier on we looked at situations where people fail and then proceed to either ignore or conceal those failures. Perfectionism is, in many ways, more extreme. You spend so much time designing and strategizing ...
Instead of designing a product from scratch, techies attempt to create a “minimum viable product” or MVP. This is a prototype with sufficient features in common with the proposed final product that it can be tested on early adopters (the kind of consumers who buy products early in the life cycle and who influence other people in the market).
The problem today, he says, is that we operate with a ballistic model of success. The idea is that once you’ve identified a target (creating a new website, designing a new product, improving a political outcome) you come up with a really clever strategy designed to hit the bull’s-eye. You construct the perfect rifle. You create a model of how the bullet will be affected by wind and gravity. You do your math to get the strategy just right. Then you calibrate the elevation of the rifle, pull the trigger, and watch as the bullet sails toward the target. This approach is flawed for two reasons.
the guided-missile approach. Sure, you want to design a great rifle, you want to point it at the target, and you want to come up with a decent model of how it will be affected by the known variables, such as the wind and gravity. But it is also vital to react to what happens after you pull the trigger.
Success is not just dependent on before-the-event reasoning, it is also about after-the-trigger adaptation. The more you can detect failure (i.e., deviation from the target), the more you can finesse the path of the bullet onto the right track.
It is by getting the balance right between top-down strategy and a rigorous adaptation process that you hit the target. It is about fusing what we already know with what we can still learn.
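Read as a process, the guided-missile approach is a closed feedback loop: act on your best prior model, observe the deviation from the target, and fold the error back into the next attempt. A toy sketch, with made-up numbers and a hypothetical observe function standing in for real-world measurement:

```python
def guided_adjustment(target, initial_aim, observe, gain=0.5, steps=20):
    """Closed-loop correction: adjust the aim by a fraction of the
    observed error after each shot, rather than trusting the initial
    model alone (the 'ballistic' approach)."""
    aim = initial_aim
    for _ in range(steps):
        impact = observe(aim)          # where the shot actually lands
        error = target - impact        # deviation = detected "failure"
        aim += gain * error            # after-the-trigger adaptation
    return aim

# Example: an unmodelled bias (say, wind) pushes every shot 3 units off
# target. The feedback loop discovers and compensates for it without
# ever modelling the wind explicitly.
final_aim = guided_adjustment(target=100.0,
                              initial_aim=100.0,
                              observe=lambda aim: aim - 3.0)
print(final_aim)  # converges toward 103.0
```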
success will not just be about intelligence and talent. These things are important; but they should never overshadow the significance of identifying where one’s strategy is going wrong, and evolving.
In the absence of data, narrative is the best we have.
the most important issue when it comes to charitable giving is not just raising more money, but conducting tests, understanding what is working and what isn’t, and learning. Instead of trusting in narrative, we should be wielding the power of the evolutionary mechanism.
One of the ironies of charitable spending is that the one statistic many donors do tend to look at can actually undermine the pursuit of evidence. The so-called overhead ratio measures the amount of money spent on administration compared with the front line. Most donors are keen for charities to keep this ratio low: they want money to go to those who really need it rather than office staff. But given that evidence-gathering counts as an administrative cost rather than treatment, this makes it even more difficult for charities to conduct tests.
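To make the arithmetic concrete (figures invented for illustration, taking the overhead ratio as administration's share of total spending): a charity spending £900,000 on frontline programs and £100,000 on administration reports a ratio of 10 percent; if it then spends a further £50,000 on a rigorous trial of its program, that counts as administration and the ratio rises to roughly 14 percent, even though the evaluation is what tells the charity whether its spending works.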
deterrence.
Its effectiveness was attested to by judges, correction officers, and other experts.
But there turned out to be one rather large problem with Scared Straight. It didn’t work. Rigorous testing would later prove that the kids who were taken on prison visits were more likely to commit offenses in the future, not less—as we shall see. A more appropriate name for Scared Straight might have been Scared Crooked.
How can something be a failure when the statistics seem to show that it is a success? How can it be failing when virtually every expert is lining up to endorse it?
The randomized control trial.
Closed loops are often perpetuated by people covering up mistakes. They are also kept in place when people spin their mistakes, rather than confronting them head on. But there is a third way that closed loops are sustained over time: through skewed interpretation.
the “counterfactual.” It is all the things that could have happened but which in everyday experience we never observe because we did something else. We don’t observe what would have happened if we had not gotten married. Or see what would have happened if we had taken a different job. We can speculate on what would have happened, and we can make decent guesses. But we don’t really know.
randomized control trial (RCT); in medicine it is called a clinical trial.
Much real-world failure is not like this. Often, failure is clouded in ambiguity. What looks like success may really be failure and vice versa. And this, in turn, represents a serious obstacle to progress. After all, how can you learn from failure if you are not sure you have actually failed?
a concrete example, suppose you redesign your company website and that sales subsequently increase. That might lead you to believe that the redesign of the website caused the boost in sales. After all, one preceded the other. But how can you be sure? Perhaps sales went up not because of the new website, but because a rival went bust, or interest rates went down, or because it was a rainy month and more people shopped online. Indeed, it is entirely possible that sales would have gone up even more if you had not changed the website. Looking at the sales statistics is not going to help you find the answer.
“The Randomised Control Trial is one of the greatest inventions of modern science.”8 It is probably worth emphasizing that RCTs are not a panacea. There are situations where they are difficult to use and where they might be considered unethical. And trials have often been rigged in subtle ways by pharmaceutical companies eager to come up with an answer that they have already prejudged.9 But these are not arguments against randomized trials, merely against how they have been corrupted by people with dubious motives. Another objection is that randomized trials neglect the holistic nature of a ...
Handled with care, they cut through the ambiguity that can play havoc with our interpretation of feedback.
the example of the redesigned website mentioned earlier. The problem was in establishing whether the change in the design had increased sales, or was caused by something else. But suppose you randomly direct users to either the new or the old design. You could then measure whether they buy more goods from the former or the latter.
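In software terms this is an A/B test, a randomized control trial in miniature. A hedged sketch (the conversion rates and the simulated purchase function are invented so the example runs end to end): assign each visitor at random to one design, then compare purchase rates across the two groups; the random split is what makes the old design a genuine counterfactual for the new one.

```python
import random

def assign_variant(user_id):
    # Random assignment is the heart of the RCT: each visitor has an
    # equal chance of seeing the old or the new design. Seeding on the
    # user id keeps each visitor's assignment stable across visits.
    return "new" if random.Random(user_id).random() < 0.5 else "old"

def simulate_visit(variant):
    # Stand-in for real purchase tracking, with invented conversion
    # rates (old: 5%, new: 6%) purely so the example is runnable.
    rate = 0.06 if variant == "new" else 0.05
    return 1 if random.random() < rate else 0

def conversion_rate(outcomes):
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

results = {"old": [], "new": []}
for user_id in range(10_000):
    variant = assign_variant(user_id)
    results[variant].append(simulate_visit(variant))

print("old design:", conversion_rate(results["old"]))
print("new design:", conversion_rate(results["new"]))
```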
the remarkable thing is that in many areas of human life RCTs have hardly been used at all.