Kindle Notes & Highlights
Read between December 31, 2019 and May 22, 2020
Critical mass: the mass of nuclear material needed to create a critical state whereby a nuclear chain reaction is possible.
Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form. You’ve got to have models in your head. And you’ve got to array your experience—both vicarious and direct—on this latticework of models.
When you don’t use mental models, strategic thinking is like using addition when multiplication is available to you.
When I urge a multidisciplinary approach … I’m really asking you to ignore jurisdictional boundaries. If you want to be a good thinker, you must develop a mind that can jump these boundaries. You don’t have to know it all. Just take in the best big ideas from all these disciplines. And it’s not that hard
The concept of inverse thinking can help you with the challenge of making good decisions. The inverse of being right more is being wrong less. Mental models are a tool set that can help you be wrong less. They are a collection of concepts that help you more effectively navigate our complex world.
Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile.
Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.
If your thinking is antifragile, then it gets better over time as you learn from your mistakes and interact with your surroundings.
It’s also the difference between being a chef—someone who can take ingredients and turn them into an amazing dish without looking at a cookbook—and being the kind of cook who just knows how to follow a recipe.
The central mental model to help you become a chef with your thinking is arguing from first principles. It’s the practical starting point to being wrong less, and it means thinking from the bottom up, using basic building blocks of what you think is true to build sound (and sometimes new) conclusions. First principles are the group of self-evident assumptions that make up the foundation on which your conclusions rest—the ingredients in a recipe or the mathematical axioms that underpin a formula.
If you can argue from first principles, then you can more easily approach unfamiliar situations, or approach familiar situations in innovative ways.
When arguing from first principles, you are deliberately starting from scratch. You are explicitly avoiding the potential trap of conventional wisdom, which could turn out to be wrong. Even if you end up in agreement with conventional wisdom, by taking the first-principles approach, you will gain a much deeper understanding of the subject at hand.
Once you identify the critical assumptions to de-risk, the next step is actually going out and testing these assumptions, proving or disproving them, and then adjusting your strategy appropriately.
Unfortunately, people often make the mistake of doing way too much work before testing assumptions in the real world. In computer science this trap is called premature optimization, where you tweak or perfect code or algorithms (optimize) too early (prematurely). If your assumptions turn out to be wrong, you’re going to have to throw out all that work, rendering it ultimately a waste of time.
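A minimal sketch of the computer-science version of this trap, in Python (my illustration, not the book’s; the function names and the simulated slow step are made up): profile the whole workload before tuning anything, so effort is not sunk into perfecting code that measurement, or a change of plan, would have made irrelevant.

    import cProfile
    import time

    def load_data():
        # Stand-in for the real bottleneck, e.g. a slow network call or disk read.
        time.sleep(0.5)
        return [str(i) for i in range(100_000)]

    def parse(rows):
        # Tempting to hand-tune this loop early, but it is cheap next to load_data.
        return [int(r) for r in rows]

    def pipeline():
        return sum(parse(load_data()))

    # Profiling first shows where the time actually goes; only then is optimization
    # worth doing, and only on the parts the numbers point to.
    cProfile.run("pipeline()", sort="cumulative")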
Back in startup land, there is another mental model to help you test your assumptions, called minimum viable product, or MVP. The MVP is the product you are developing with just enough features, the minimum amount, to be feasibly, or viably, tested by real people.
“Everybody has a plan until they get punched in the mouth.”
Ockham’s razor helps here. It advises that the simplest explanation is most likely to be true. When you encounter competing explanations that plausibly explain a set of data equally well, you probably want to choose the simplest one to investigate first.
This model is a razor because it “shaves off” unnecessary assumptions.
“Everything should be made as simple as it can be, but not simpler!” In medicine, it’s known by this saying: “When you hear hoofbeats, think of horses, not zebras.”
A practical tactic is to look at your explanation of a situation, break it down into its constituent assumptions, and for each one, ask yourself: Does this assumption really need to be here? What evidence do I have that it should remain? Is it a false dependency?
First, most people are, unfortunately, hardwired to latch onto unnecessary assumptions, a predilection called the conjunction fallacy, studied by psychologists Amos Tversky and Daniel Kahneman.
The fallacy arises because the probability of two events in conjunction is always less than or equal to the probability of either one of the events occurring alone, a concept illustrated in the Venn diagram on the next page.
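A quick numeric check of that probability fact (an illustrative sketch, not from the book; the 0.6 and 0.3 probabilities are arbitrary): in a simulation, the share of trials where both events happen can never exceed the share where either event happens on its own.

    import random

    random.seed(0)
    trials = 100_000
    count_a = 0
    count_a_and_b = 0
    for _ in range(trials):
        a = random.random() < 0.6   # event A occurs
        b = random.random() < 0.3   # event B occurs (independent here, for simplicity)
        count_a += a                # True counts as 1, False as 0
        count_a_and_b += a and b
    print("P(A)       ~", count_a / trials)        # about 0.60
    print("P(A and B) ~", count_a_and_b / trials)  # about 0.18, and never above P(A)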
Overfitting occurs when you use an overly complicated explanation when a simpler one will do. It’s what happens when you don’t heed Ockham’s razor, when you get sucked into the conjunction fallacy or make a similar unforced error. It can occur in any situation where an explanation introduces unnecessary assumptions.
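A small sketch of overfitting in code (my example, not the book’s; the linear “true” relationship and the noise level are arbitrary choices): a ten-parameter polynomial explains ten noisy training points perfectly, yet a plain two-parameter line predicts fresh points better, because the extra assumptions mostly chase noise.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = 2 * x + rng.normal(scale=0.1, size=x.size)  # the truth is a simple line plus noise

    simple = np.polyfit(x, y, deg=1)   # 2 parameters: slope and intercept
    wiggly = np.polyfit(x, y, deg=9)   # 10 parameters: enough to hit every training point

    x_new = np.linspace(0.05, 1.05, 5)   # fresh inputs, including a slight extrapolation
    y_new = 2 * x_new                    # what the true relationship would give
    print(np.abs(np.polyval(simple, x_new) - y_new).mean())  # small error
    print(np.abs(np.polyval(wiggly, x_new) - y_new).mean())  # typically much larger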
In physics, your perspective is called your frame of reference, a concept central to Einstein’s theory of relativity.
In fact, everything but the speed of light—even time—appears different in different frames of reference.
A frame-of-reference mental trap (or useful trick, depending on your perspective) is framing. Framing refers to the way you present a situation or explanation. When you present an important issue to your coworker or family member, you try to frame it in a way that might help them best understand your perspective, setting the stage for a beneficial conversation.
You can be nudged in a direction by a subtle word choice or other environmental cues.
Another concept you will find useful when making purchasing decisions is anchoring, which describes your tendency to rely too heavily on first impressions when making decisions.
You get anchored to the first piece of framing information you encounter. This tendency is commonly exploited by businesses when making offers.
More broadly, these mental models are all instances of a more general model, availability bias, which occurs when a bias, or distortion, creeps into your objective view of reality thanks to information recently made available to you.
Availability bias can easily emerge from high media coverage of a topic. Rightly or wrongly, the media infamously has a mantra of “If it bleeds, it leads.”
Availability bias stems from overreliance on your recent experiences within your frame of reference, at the expense of the big picture.
Because of availability bias, you’re likely to click on things you’re already familiar with, and so Google, Facebook, and many other companies tend to show you more of what they think you already know and like. Since there are only so many items they can show you—only so many links on page one of the search results—they therefore filter out links they think you are unlikely to click on, such as opposing viewpoints, effectively placing you in a bubble.
When you put many similar filter bubbles together, you get echo chambers, where the same ideas seem to bounce around the same groups of people, echoing around the collective chambers of these connected filter bubbles. Echo chambers result in increased partisanship, as people have less and less exposure to alternative viewpoints. And because of availability bias, they consistently overestimate the percentage of people who hold the same opinions. It’s easy to focus solely on what is put in front of you. It’s much harder to seek out an objective frame of reference, but that is what you...
Forcing yourself to think as an impartial observer can help you in any conflict situation, including difficult business negotiations and personal disagreements.
Douglas Stone, Bruce Patton, and Sheila Heen explore this model in detail in their book Difficult Conversations: “The key is learning to describe the gap—or difference—between your story and the other person’s story. Whatever else you may think and feel, you can at least agree that you and the other person see things differently.”
When you acknowledge the perspective of the third story within difficult conversations, it can have a disarming effect, causing others involved to act less defensively. That’s because you are signaling your willingness and ability to consider an objective point of view.
Another tactical model that can help you empathize is the most respectful interpretation, or MRI. In any situation, you can explain a person’s behavior in many ways. MRI asks you to interpret the other parties’ actions in the most respectful way possible. It’s giving people the benefit of the doubt.
Hanlon’s razor: never attribute to malice that which is adequately explained by carelessness. Like Ockham’s razor, Hanlon’s razor seeks out the simplest explanation. And when people do something harmful, the simplest explanation is usually that they took the path of least resistance. That is, they carelessly created the negative outcome; they did not cause the outcome out of malice.
Psychologists call this the fundamental attribution error, where you frequently make errors by attributing others’ behaviors to their internal, or fundamental, motivations rather than to external factors.