Kindle Notes & Highlights
by Annie Duke
Read between October 31 - November 26, 2023
Thinking in bets starts with recognizing that there are exactly two things that determine how our lives turn out: the quality of our decisions and luck. Learning to recognize the difference between the two is what thinking in bets is all about.
Why did so many people so strongly believe that Pete Carroll got it so wrong? We can sum it up in four words: the play didn’t work. Take a moment to imagine that Wilson completed the pass for a game-winning touchdown. Wouldn’t the headlines change to “Brilliant Call” or “Seahawks Win Super Bowl on Surprise Play” or “Carroll Outsmarts Belichick”?
Reminds me of Taleb and the airport security guy on 9/10
In fact, this whole intro reminds me of Fooled by Randomness
Pete Carroll was a victim of our tendency to equate the quality of a decision with the quality of its outcome. Poker players have a word for this: “resulting.”
His decision-making behavior going forward reflected the belief that he made a mistake. He was not only resulting but also succumbing to its companion, hindsight bias. Hindsight bias is the tendency, after an outcome is known, to see the outcome as having been inevitable.
von Neumann is a hero of mine,
a challenge that doesn’t exist in chess: identifying the relative contributions of the decisions we make versus luck in how things turn out.
Our lives are too short to collect enough data from our own experience to make it easy to dig down into decision quality from the small set of results we experience.
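The point above — that a small set of results can't reveal decision quality — can be made concrete with a Monte Carlo sketch (my own illustration, not from the book; the 55% edge and bet counts are assumed numbers). A bettor with a genuine edge on even-money bets is still behind after a short run surprisingly often, while a long run almost always exposes the edge:

```python
# Sketch: how often does a genuinely skilled bettor (55% win rate on
# even-money bets) end up at or below break-even after n bets?
import random

def fraction_down(edge: float, n_bets: int, lifetimes: int = 2_000) -> float:
    """Fraction of simulated 'lifetimes' in which a bettor with the given
    win probability ends at or below zero after n even-money bets."""
    rng = random.Random(42)  # fixed seed for reproducibility
    down = 0
    for _ in range(lifetimes):
        bankroll = sum(1 if rng.random() < edge else -1 for _ in range(n_bets))
        if bankroll <= 0:
            down += 1
    return down / lifetimes

if __name__ == "__main__":
    # Over 20 bets the skilled bettor is frequently down (luck dominates);
    # over 2,000 bets the edge dominates and losing runs become rare.
    print(f"down after    20 bets: {fraction_down(0.55, 20):.0%}")
    print(f"down after 2,000 bets: {fraction_down(0.55, 2_000):.0%}")
```

A lifetime of real decisions looks much more like the 20-bet run than the 2,000-bet run, which is why "resulting" on individual outcomes is so misleading.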
This keeps me up at night: how much of tech salaries was just ZIRP? How much of investment advice is just riding the post-WWII US bull run?
great quote from physicist James Clerk Maxwell: “Thoroughly conscious ignorance is the prelude to every real advance in science.” I would add that this is a prelude to every great decision that has ever been made.
Being right feels really good. “I was right,” “I knew it,” “I told you so”—those are all things that we say, and they all feel very good to us. Should we be willing to give up the good feeling of “right” to get rid of the anguish of “wrong”? Yes.
consequences, and probabilities. By treating decisions as bets, poker players explicitly recognize that they are deciding on alternative futures, each with benefits and risks.
In most of our decisions, we are not betting against another person. Rather, we are betting against all the future versions of ourselves that we are not choosing.
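The "alternative futures" framing can be sketched as a simple expected-value comparison (my own illustration; the options, probabilities, and payoffs below are entirely hypothetical, not from the book):

```python
# Sketch: treat a decision as a bet by laying out each option's possible
# futures as (probability, payoff) pairs and comparing expected values.

def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Expected value of an option given (probability, payoff) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical job decision: stay put vs. take a risky startup offer.
stay    = [(0.9,  5_000), (0.1, -2_000)]   # likely steady raise, small layoff risk
startup = [(0.3, 40_000), (0.7, -5_000)]   # big upside, likely disappointment

for name, option in [("stay", stay), ("startup", startup)]:
    print(f"{name:8s} EV = {expected_value(option):+,.0f}")
```

Writing the futures down like this is the whole trick: it forces you to name the benefits, the risks, and the probabilities you are implicitly betting on.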
It turns out the better you are with numbers, the better you are at spinning those numbers to conform to and support your beliefs.
We can’t just “absorb” experiences and expect to learn. As novelist and philosopher Aldous Huxley recognized, “Experience is not what happens to a man; it is what a man does with what happens to him.” There is a big difference between getting experience and becoming an expert. That difference lies in the ability to identify when the outcomes of our decisions have something to teach us and what that lesson might be.
The challenge is that any single outcome can happen for multiple reasons. The unfolding future is a big data dump that we have to sort and interpret. And the world doesn’t connect the dots for us between outcomes and causes.
make similar bets about where to “throw” an outcome: into the “skill bucket” (in our control) or the “luck bucket” (outside of our control). This initial fielding of outcomes, if done well, allows us to focus on experiences that have something to teach us (skill) and ignore those that don’t (luck).
When it comes to self-serving bias, we act as if our good outcomes are perfectly correlated to good skill and our bad outcomes are perfectly correlated to bad luck.
A lot of the way we feel about ourselves comes from how we think we compare with others. This robust and pervasive habit of mind impedes learning.
When we look at the people performing at the highest level of their chosen field, we find that the self-serving bias that interferes with learning often recedes and even disappears. The people with the most legitimate claim to a bulletproof self-narrative have developed habits around accurate self-critique.
Keep the reward of feeling like we are doing well compared to our peers, but change the features by which we compare ourselves: be a better credit-giver than your peers, more willing than others to admit mistakes, more willing to explore possible reasons for an outcome with an open mind, even, and especially, if that might cast you in a bad light or shine a good light on someone else.
Thinking in bets triggers a more open-minded exploration of alternative hypotheses, of reasons supporting conclusions opposite to the routine of self-serving bias. We are more likely to explore the opposite side of an argument more often and more seriously—and that will move us closer to the truth of the matter.
not all situations are appropriate for truthseeking, nor are all people interested in the pursuit.
focus on the things I could control (my own decisions), let go of the things I couldn’t (luck), and work to be able to accurately tell the difference between the two.
if we can find a few people to choose to form a truthseeking pod with us and help us do the hard work connected with it, it will move the needle—just a little bit, but with improvements that accumulate and compound over time. We will be more successful in fighting bias, seeing the world more objectively, and, as a result, we will make better decisions. Doing it on our own is just harder.
“Complex and open-minded thought is most likely to be activated when decision makers learn prior to forming any opinions that they will be accountable to an audience (a) whose views are unknown, (b) who is interested in accuracy, (c) who is reasonably well-informed, and (d) who has a legitimate reason for inquiring into the reasons behind participants’ judgments/choices.” Their 2002 paper was one of several they coauthored supporting the conclusion that groups can improve the thinking of individual decision-makers when the individuals are accountable to a group whose interest is in accuracy.
“I don’t want to hear it. I’m not trying to hurt your feelings, but if you have a question about a hand, you can ask me about strategy all day long. I just don’t think there’s much purpose in a poker story if the point is about something you had no control over, like bad luck.”
Since 2005, Scalia had hired no clerks with experience working for Democrat-appointed judges. In light of the shift in hiring practices, it should not be so surprising that the court has become more polarized. The justices are in the process of creating their own echo chambers.
CUDOS stands for Communism (data belong to the group), Universalism (apply uniform standards to claims and evidence, regardless of where they came from), Disinterestedness (vigilance against potential conflicts that can influence the group’s evaluation), and Organized Skepticism (discussion among the group to encourage engagement and dissent). If you want to pick a role model for designing a group’s practical rules of engagement, you can’t do better than Merton.
a data sharer. That’s what experts do. In fact, that’s one of the reasons experts become experts. They understand that sharing data is the best way to move toward accuracy because it extracts the highest-fidelity insight from your listeners.
The Mertonian norm of universalism is the converse. “Truth-claims, whatever their source, are to be subjected to preestablished impersonal criteria.” It means acceptance or rejection of an idea must not “depend on the personal or social attributes of their protagonist.”
we hear an account from someone we like, imagine if someone we didn’t like told us the same story, and vice versa. This can be incorporated into an exploratory group’s work, asking each other, “How would we feel about this if we heard it from a much different source?”
Telling someone how a story ends encourages them to be resulters, to interpret the details to fit that outcome. If I won a hand, it was more likely my group would assess my strategy as good. If I lost, the reverse would be true. Win a case at trial, the strategy is brilliant. Lose, and mistakes were made. We treat outcomes as good signals for decision quality, as if we were playing chess. If the outcome is known, it will bias the assessment of the decision quality to align with the outcome quality.
If we don’t “lean over backwards” (as Richard Feynman famously said) to figure out where we could be wrong, we are going to make some pretty bad bets.
Ultimately, even with our own kids’ decisions, rehashing outcomes can create defensiveness. The future, on the other hand, can always be better if we can get them to focus on things in their control.
When making decisions, isolating ourselves from thinking about similar decisions in the past and possible future consequences is frequently the very thing that turns us into a blob, mired in in-the-moment thinking where the scope of time is distorted. As decision-makers, we want to collide with past and future versions of ourselves. Our capacity for mental time travel makes this possible.
Suzy Welch developed a popular tool known as 10-10-10 that has the effect of bringing future-us into more of our in-the-moment decisions. “Every 10-10-10 process starts with a question. . . . [W]hat are the consequences of each of my options in ten minutes? In ten months? In ten years?” This set of questions triggers mental time travel that cues that accountability conversation (also encouraged by a truthseeking decision group). We can build on Welch’s tool by asking the questions through the frame of the past: “How would I feel today if I had made this decision ten minutes ago? Ten months
When I reached that point in a session and considered continuing past that time limit, I could use a 10-10-10-like strategy to recruit my past- and future-self: How have I felt when I kept playing in the past? How has it generally worked out? When I look back, do I feel I was playing my best? This routine of asking myself these questions helped mitigate the in-the-moment risk that, as I was losing my mental edge, I might try to convince myself that the game was so great that I had to keep playing.
overestimation of the impact of any individual moment on our overall happiness is the emotional equivalent of watching the ticker in the financial world. We make a long-term stock investment because we want it to appreciate over years or decades.
Our problem is that we’re ticker watchers of our own lives. Happiness (however we individually define it) is not best measured by looking at the ticker, zooming in and magnifying moment-by-moment or day-by-day movements. We would be better off thinking about our happiness as a long-term stock holding. We would do well to view our happiness through a wide-angle lens, striving for a long, sustaining upward trend in our happiness stock, so it resembles the first Berkshire Hathaway chart.
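The ticker-watching point has a quantitative core worth seeing directly. A sketch (my own, with assumed drift and volatility numbers): a holding with a genuine upward trend still shows a loss on nearly half of daily checks, while year-over-year checks are mostly gains — same stock, different lens.

```python
# Sketch: simulate a return series with positive drift and count how often
# a "check of the ticker" shows a loss since the previous check.
import random

def fraction_of_losing_checks(days_between_checks: int,
                              daily_drift: float = 0.0008,
                              daily_vol: float = 0.01,
                              total_days: int = 250_000,
                              seed: int = 7) -> float:
    """Fraction of checks that show a loss since the last check, for a
    Gaussian random-walk return series with positive drift."""
    rng = random.Random(seed)
    losing = checks = 0
    change = 0.0
    for day in range(1, total_days + 1):
        change += rng.gauss(daily_drift, daily_vol)
        if day % days_between_checks == 0:
            checks += 1
            if change < 0:
                losing += 1
            change = 0.0
    return losing / checks

if __name__ == "__main__":
    print(f"losing daily  checks: {fraction_of_losing_checks(1):.0%}")
    print(f"losing yearly checks: {fraction_of_losing_checks(250):.0%}")
```

The drift is identical in both cases; only the checking frequency changes. Zooming in amplifies noise, which is exactly the emotional trap the passage describes.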
The way we field outcomes is path dependent. It doesn’t so much matter where we end up as how we got there. What has happened in the recent past drives our emotional response much more than how we are doing overall.
By recognizing in advance these verbal and physiological signs that ticker watching is making us tilt, we can commit to develop certain habit routines at those moments. We can precommit to walk away from the situation when we feel the signs of tilt, whether it’s a fight with a spouse or child, aggravation in a work situation, or losing at a poker table.
When we take the long view, we’re going to think in a more rational way.
Regardless of the level of binding, precommitment contracts trigger a decision-interrupt. At the moment when we consider breaking the contract, when we want to cut the binding, we are much more likely to stop and think.
For the decision swear jar, we identify the language and thinking patterns that signal we are veering from our goal of truthseeking.
here is a sample of the kinds of things that might trigger a decision-interrupt. Signs of the illusion of certainty: “I know,” “I’m sure,” “I knew it,” “It always happens this way,” “I’m certain of it,” “you’re 100% wrong,” “You have no idea what you’re talking about,” “There’s no way that’s true,” “0%” or “100%” or their equivalents, and other terms signaling that we’re presuming things are more certain than we know they are. This also includes stating things as absolutes, like “best” or “worst” and “always” or “never.” Overconfidence: similar terms to the illusion of certainty. Irrational
This highlight has been truncated due to consecutive passage length restrictions.
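The trigger-phrase list above lends itself to a toy "decision swear jar" detector (a sketch of my own; the phrase list is quoted from the highlight, the matching logic is an assumption about how one might implement it):

```python
# Sketch: flag certainty-signalling language in a statement as a cue to
# stop and re-examine the belief behind it.
import re

# Phrases quoted from the passage above (lowercased for matching).
CERTAINTY_PHRASES = [
    "i know", "i'm sure", "i knew it", "it always happens this way",
    "i'm certain", "you're 100% wrong", "there's no way that's true",
    "always", "never", "best", "worst", "0%", "100%",
]

def certainty_triggers(statement: str) -> list[str]:
    """Return the certainty-signalling phrases found in a statement,
    matched as whole phrases (not inside larger words or numbers)."""
    text = statement.lower()
    return [p for p in CERTAINTY_PHRASES
            if re.search(r"(?<!\w)" + re.escape(p) + r"(?!\w)", text)]

print(certainty_triggers("I'm sure this stock will always go up."))
# → ["i'm sure", "always"]
```

A real habit change doesn't need software, of course; the point of the sketch is that these triggers are concrete enough to match mechanically, which is what makes them usable as a precommitted decision-interrupt.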
For us to make better decisions, we need to perform reconnaissance on the future. If a decision is a bet on a particular future based on our beliefs, then before we place a bet we should consider in detail what those possible futures might look like. Any decision can result in a set of possible outcomes.