Kindle Notes & Highlights
by Ozan Varol
Read between October 24 - November 18, 2020
Life is taxing enough without uncertainty, so we eliminate the uncertainty by ignoring the anomaly. We convince ourselves the anomaly must be an extreme outlier or a measurement error, so we pretend it doesn’t exist.
In addition to including redundancies, rocket scientists address uncertainty by building in margins of safety. For example, they build spacecraft stronger than what appears necessary or make thermal insulation thicker than required.
The status quo is a super magnet. People are biased against the way things could be and find comfort in the way things are.
Roughly 70 percent of the time, the participants changed their correct answers and went along with the wrong answers given by the rest of the group.
Every time you introduce complexity to a system, you’re giving it one more aspect that can fail.
At best, we pay lip service to curiosity but end up discouraging it in practice.
We have to generate ideas first before we can begin evaluating and eliminating them. If we cut the accumulation process short—if we immediately start thinking about consequences—we run the risk of hampering originality.
Backcasting reorients you toward the path. If you want to climb a mountain, you’ll imagine training with your backpack on, hiking at high altitudes to get used to the low-oxygen environment, climbing stairs to build up muscles, and running to improve endurance.
With this reorientation, you also condition yourself to derive intrinsic value from the process rather than chasing elusive outcomes.
If we restrict ourselves to what’s possible given what we have, we’ll never reach escape velocity and create a future worth getting excited about. In the end, all moonshots are impossible. Until you decide to go.
We quickly jump from “This sounds right to me” to “This is true.” Confirming our theories feels good. We get a hit of dopamine every time we’re proven right.
From a scientific perspective, opinions present several problems. Opinions are sticky. Once we form an opinion—our own very clever idea—we tend to fall in love with it, particularly when we declare it in public through an actual or a virtual megaphone.
To make sure you don’t fall in love with a single hypothesis, generate several.
Ideally, the hypotheses you spin should conflict with each other.
When our focus shifts from proving ourselves right to proving ourselves wrong, we seek different inputs, we combat deeply entrenched biases, and we open ourselves up to competing facts and arguments.
Once you’ve stress-tested your ideas by trying to prove yourself wrong, it’s now time to collide those ideas with reality in tests and experiments.
We don’t rise to the level of our expectations. We fall to the level of our training.
We conduct tests—not to prove ourselves wrong, but to confirm what we believe is true. We tweak the testing conditions or interpret ambiguous outcomes to confirm our preconceptions.
In a proper test, the goal isn’t to discover everything that can go right. Rather, the goal is to discover everything that can go wrong and to find the breaking point.
Great comedians also think like rocket scientists and test their material before an actual audience to observe its reaction.
The goal isn’t to fail fast. It’s to learn fast. We should be celebrating the lessons from failure—not failure itself.
The goal, then, is to focus on the variables you can control—the inputs—instead of the outputs.
In one study, cardiac surgeons who observed their colleagues’ blunders got significantly better at performing the procedure.
The worst-performing teams—those that were in most need of improvement—were also the least likely to report errors. And if errors aren’t reported, the team can’t improve.
When we succeed, we stop pushing boundaries. Our comfort sets a ceiling, with our frontiers shrinking rather than extending. Corporate executives are rarely punished for deviating from a historically successful strategy.
As a result, instead of risking something new, we maintain the same “proven” formula that led to our success. This tactic works well—until it doesn’t.
Most corporations perish because they ignore the baby steps, the weak signals, the near misses that don’t immediately affect outcomes.
What went wrong with this success? What role did luck, opportunity, and privilege play? What can I learn from it? If we don’t ask these questions, luck will eventually run its course, and the near misses will catch up with us.
In a premortem, we travel forward in time and set up a thought experiment where we assume the project failed. We then step back and ask, “What went wrong?” By vividly visualizing a doomsday scenario, we come up with potential problems and determine how to avoid them.
According to research, premortems increase participants' ability to correctly identify the reasons for a future outcome by 30 percent.
If we know there was uncertainty attached to a previous decision, it becomes easier to challenge it.
It’s no wonder that groupthink pops up even in organizations whose lifeblood is creativity. Faced with potential backlash, we censor ourselves rather than go against the grain.