Left Brain, Right Stuff takes up where other books about decision making leave off. For many routine choices, from shopping to investing, we can make good decisions simply by avoiding common errors, such as searching only for confirming information or avoiding the hindsight bias. But as Phil Rosenzweig shows, for many of the most important, more complex situations we face--in business, sports, politics, and more--a different way of thinking is required. Leaders must possess the ability to shape opinions, inspire followers, manage risk, and outmaneuver and outperform rivals. Making winning decisions calls for a combination of skills: clear analysis and calculation--left brain--as well as the willingness to push boundaries and take bold action--right stuff. Of course leaders need to understand the dynamics of competition, to anticipate rival moves, to draw on the power of statistical analysis, and to be aware of common decision errors--all features of left brain thinking. But to achieve the unprecedented in real-world situations, much more is needed. Leaders also need the right stuff. In business, they have to devise plans and inspire followers for successful execution; in politics, they must mobilize popular support for a chosen program; in the military, commanders need to commit to a battle strategy and lead their troops; and in start-ups, entrepreneurs must manage risk when success is uncertain. In every case, success calls for action as well as analysis, and for courage as well as calculation. Always entertaining, often surprising, and immensely practical, Left Brain, Right Stuff draws on a wealth of examples in order to propose a new paradigm for decision making in synch with the way we have to operate in the real world. Rosenzweig's smart and perceptive analysis of research provides fresh, and often surprising, insights on topics such as confidence and overconfidence, the uses and limits of decision models, the illusion of control, expert performance and deliberate practice, competitive bidding and new venture management, and the true nature of leadership.
I recently finished reading Left Brain, Right Stuff: How Leaders Make Winning Decisions by Phil Rosenzweig. The author had graciously provided me with a copy of his new book, as I had previously read and reviewed an earlier work of his (The Halo Effect).
Below are key excerpts from the book that I found particularly insightful:
1- "They make predictable errors, or biases, which often undermine their decisions. By now we're familiar with many of these errors, including the following: -People are said to be overconfident, too sure of themselves and unrealistically optimistic about the future. -People look for information that will confirm what they want to believe, rather than seeking information that might challenge their hopes. -People labor under the illusion of control, imagining they have more influence over events than they really do. -People are fooled by random events, seeing patterns where none exist. People are not good intuitive statisticians, preferring a coherent picture to what makes sense according to the laws of probability. -People suffer from a hindsight bias, believing that they were right all along."
2- "Yet for all we know about these sorts of decisions, we know less about others. First, many decisions involve much more than choosing from options we cannot influence or evaluations of things we cannot affect...Second, many decisions have a competitive dimension...Third, many decisions take a long time before we know the results...Fourth, many decisions are made by leaders of organizations...In sum, experiments have been very effective to isolate the processes of judgment and choice, but we should be careful when applying their findings to very different circumstances."
3- "Great decisions call for clear analysis and dispassionate reasoning. Using the left brain means: -knowing the difference between what we can control and what we cannot, between action and prediction -knowing the difference between absolute and relative performance, between times when we need to do well and when we must do better than others -sensing whether it's better to err on the side a' taking action and failing, or better not to act; that is, between what we call Type I and Type II errors -determining whether we are acting as lone individuals or as leaders in an organizational setting and inspiring others to achieve high performance -recognizing when models can help us make better decisions, but also being aware of their limits."
4- "Having the right stuff means: -summoning high levels of confidence, even levels that might seem excessive, but that are useful to achieve high performance -going beyond past performance and pushing the envelope to seek levels that are unprecedented -instilling in others the willingness to take appropriate risks."
5- "Moore and his colleagues ran several other versions of this study, all of which pointed to the same conclusion: people do not consistently overestimate their level of control. A simpler explanation is that people have an imperfect understanding of how much control they can exert. When control is low they tend to overestimate. but when it's high they tend to underestimate."
6- "Of course managers don't have complete control over outcomes. any more than a doctor has total control over patient health. They are buffeted by events outside their control: macroeconomic factors, changes in technology, actions of rivals, and so forth. Yet it's a mistake to conclude that managers suffer from a pervasive illusion of control. The greater danger is the opposite: that they will underestimate the extent of control they truly have."
7- "If you believe there's an intense pressure to outperform rivals when that's not the case, you might prefer a Type 1 error. You might take action sooner than necessary or act more aggressively when the better approach would be to wait and observe. The risks can be considerable, but perhaps not fatal On the other hand, if performance is not only relative but payoffs are highly skewed, and you don't make every effort to outperform rivals, you'll make a Type II error. Here the consequences can be much more severe. Fail now, and you may never get another chance to succeed. By this logic, the greater error is to underestimate the intensity of competition. It's to be too passive in the face of what could be a mortal threat. When in doubt, the smart move is to err on the side of taking strong action."
8- "The lesson is clear: in a competitive setting, even a modest improvement in absolute performance can have a huge impact on relative performance. And conversely, failing to use all possible advantages to improve absolute performance has a crippling effect on the likelihood of winning. Under these circumstances, finding a way to do better isn't just nice to have. For all intents and purposes, it's essential."
9- "First, not even thing that turns out badly is due to an error. We live in a world of uncertainty, in which there's an imperfect link between actions and outcomes. Even good decisions sometimes turn out badly, but that doesn't necessarily mean anyone made an error. Second, not every error is the result of overconfidence. There are many kinds off error: errors of calculation, errors of memory, simple motor errors, tactical errors, and so forth. They're not all due to overconfidence."
10- "The Trouble with Overconfidence," the single word—overconfidence—has been used to mean three very different things, which they call overprecision, overestimation, and overplacement...Overprecision is the tendency to be too certain that our judgment is correct...He's referring to overprecision: the tendency to believe a prediction is more accurate than it turns out to be...Overestimation, the second kind of overconfidence, is a belief that we can perform at a level beyond what is objectively warranted...Overestimation is an absolute evaluation; it depends on an assessment of ourselves and no one else...Overplacement, the third kind of overconfidence, is a belief that we can perform better than others...She calls it the superiority bias and says it's a pervasive error."
11- "My suggestion is that anyone who uses the term should have to specify the point of comparison. If overconfidence means excessively confident, then excessive compared to what? In much of our lives, where we can exert control and influence outcomes, what seems to be an exaggerated level of confidence may be useful; and when we add the need to outperform rivals, such a level of confidence may even be essential."
12- "When we have ability to shape events we confront a different challenge: making accurate estimates of future performance. The danger here is not one of overlooking the base rate of the broader population at a point in time, but neglecting lessons of the past and making a poor prediction of the future. Very often people place great importance on their (exaggerated) level of skills and motivation. The result is to make forecasts on what Kahneman and Tversky call the inside view. Unfortunately these projections, which ignore the experiences of others who have attempted similar tasks, often turn out to be wildly optimistic."
13-"The question we often hear—how much optimism or confidence is good, and how much is too much—turns out to be incomplete. There's no reason to imagine that optimism or confidence must remain steady over time. It's better to ramp it up and down, emphasizing a high level of confidence during moments of implementation, but setting it aside to learn from feedback and find ways to do better."
14- "Duration is short, feedback is immediate and clear, the order is sequential, and performance is absolute. When these conditions hold, deliberate practice can be hugely powerful. As we relax each of them, the picture changes. Other tasks are long in duration, have feedback that is slow or incomplete, must be undertaken concurrently, and involve performance that is relative. None of this is meant to suggest their deliberate practice isn't a valuable technique. But we have to know when it's useful and when it's not."
15- "When we use models without a clear understanding of when they are appropriate, we're not going to make great decisions—no matter how big the data set or how sophisticated the model appears to be."
16- "To get at the root of the problem, Capen looked at the auction process itself. He discovered an insidious dynamic: when a large number of bidders place secret bids, it's almost inevitable that the winning bid will be too high. Capen called this the winner's curse."
17- "But do some kinds of acquisitions have a greater chance of success than others? A significant number—the other 36 percent were profitable, and they turned out to have a few things in common. The buyer could identify clear and immediate gains. rather than pursuing vague or distant benefits. Also, the gains they expected came from cost savings rather than revenue growth. That's a crucial distinction, because costs are largely within our control, whereas revenues depend on customer behavior, which is typically beyond our direct control."
18- "The real curse is to apply lessons blindly, without understanding how decisions differ. When we can exert control, when we must outperform rivals, when there are vital strategic considerations, the greater real danger is to fail to make a bold move. Acquisitions ah ways involve uncertainty, and risks are often considerable. There's no formula to avoid the chance of losses. Wisdom calls for combining clear and detached thinking—properties of the left brain—with the willingness to take bold action—the hallmark of the right stuff."
19- "Starting a new business involves many of the same elements we have seen in other winning decisions: an ability to distinguish between what we can control and what we cannot; a sense of relative performance and the need to do better than rivals; the temporal dimension, in which decisions do not always produce immediate feedback; and an awareness that decisions are made in a social context, in which leaders sometimes need to inspire others to go beyond what may seem possible. Together, these elements help new ventures get off to a winning start."
20- "To make great decisions, we need above all to develop the capacity to question, to go beyond first-order observations and pose incisive second-order questions. An awareness of common errors and cognitive biases is only a start. Beyond that, we should ask: Are we making a decision about something we cannot control, or are we able to influence outcomes?...Are we seeking an absolute level of performance, or is performance relative?...Are we making, a decision that lends itself to rapid feedback, so we can make adjustments and improve a next effort?...Are we making a decision as an individual or as a leader in a social setting?...Are we clear what we mean by overconfidence?...Have we given careful thought to base rates, whether of the larger population at a point in time or historical rates of past events?...As for decision models, are we aware of their limits as well as strengths?...When the best course of action remains uncertain, do we have a sense of on which side we should err?"
21- "In his profile of longtime St. Louis Cardinals manager Tony LaRussa, Buzz Bissinger wrote that a baseball manager requires "the combination of skills essential to the trade: part tactician, part psychologist, part river-boat gambler." That's a good description for many kinds of strategic decision makers. The tactician plays a competitive game, anticipating the way a given move may lead to a counter-move and planning the best response. The psychologist knows how to shape outcomes by inspiring others, perhaps by setting goals or by offering encouragement or maybe with direct criticism. The riverboat gambler knows that outcomes aren't just a matter of cold numbers and probabilities, but that it's important matter of cold numbers and probabilities, but that it's important to read an opponent so as to know when to raise the stakes, when to bluff, and when to fold. Winning decisions call for a combination of skills as well as the ability to shift among them. We may need to act first as a psychologist, then as a tactician, next as a riverboat gambler, and perhaps once again as a psychologist. In the real world, where we have to respond to challenges as they arise, one skill or another is insufficient; versatility is crucial Even then success is never assured, not in the competitive arenas of business or sports or politics. Performance is often relative and consequences of failure are harsh. A better understanding of decision-making, however, and an appreciation for the role of analysis as well as action, can improve the odds of success. It can help us win."
This book explores some important distinctions to consider before accepting behavioural economics wholeheartedly.
I found it thought-provoking, and it helped me appreciate the nature of 'experiments' conducted in the name of behavioural science.
Specifically, there are four areas we need to reconsider: (1) is the outcome independent of effort, or dependent on it? (2) is performance absolute or relative? (3) is the result real-time or lagging? (4) is the decision made by a leader (who has to consider the social context) or by an individual?
Like the previous book, this one challenges us to consider the entire picture or population rather than zooming in on a specific result and drawing premature conclusions from it.
A very interesting book that tries to put behavioural research into the context of business decisions. Over the last twenty years, so-called "behavioral economics" has established itself as an often simplistic explanation: man has become an irrational being, the victim of countless "biases". According to the author, however, this reading cannot be applied to decisions about matters over which one has influence. The leader who sets company strategy, the salesperson who decides the value of an offer, the entrepreneur who founds a new business: these are all people deciding in domains they can partly influence. And it is here that he locates the need to use the "left brain", meaning analysis, and the "right stuff", the intuition that can ultimately lead to success.
3.8/5 - By now I love Phil Rosenzweig; I hope he keeps writing books. I love his data-driven exploration of fallacies in thinking about business and decision making. I didn't like this as much as The Halo Effect, but it was still great. The last chapter, a summary, is worth revisiting down the road.
The central idea of this book is that winning decisions combine two very different skills, referred to as the left brain and the right stuff.
Left brain is shorthand for a deliberate and analytical approach to problem solving. The right stuff is about the intelligent management of risk. These may seem like opposites, but they're complementary. For many decisions they're both essential; left-brain logic on its own is not enough.
Using the right stuff means summoning high levels of confidence; even levels that might seem excessive can be very useful for achieving high performance, particularly if you have some control over the outcome. When people are asked questions like "How long is the Nile?", they are unsurprisingly often wrong, and the ranges they provide are far too narrow. This leads to the argument that they are overconfident and need to be cautious to avoid overconfidence in business. But it is incorrect to apply this observation to business: the respondents are simply being too precise. When it comes to managerial decisions, what seems excessive by one definition can be useful, or even necessary, by another.
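The too-narrow-ranges finding is a calibration problem. A minimal sketch of how calibration is checked, with assumed numbers of my own: if an estimator reports 90% confidence intervals built from an uncertainty only half as large as their actual error, the truth lands inside those intervals far less than 90% of the time.

```python
import random

random.seed(42)

TRIALS = 10_000
TRUE_SD = 10.0    # the actual error in each estimate
STATED_SD = 5.0   # the (too-narrow) uncertainty the estimator reports
Z90 = 1.645       # half-width multiplier for a 90% normal interval

hits = 0
for _ in range(TRIALS):
    truth = random.uniform(0, 100)               # the quantity being estimated
    estimate = truth + random.gauss(0, TRUE_SD)  # unbiased but noisy guess
    lo = estimate - Z90 * STATED_SD              # interval from the stated SD
    hi = estimate + Z90 * STATED_SD
    if lo <= truth <= hi:
        hits += 1

# Well-calibrated 90% intervals would contain the truth ~90% of the time;
# intervals half as wide as warranted contain it only ~59% of the time.
print(f"hit rate: {hits / TRIALS:.1%}")
```

This is overprecision in Moore's sense: the point estimates are fine on average, but the stated certainty is too high.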
Overconfidence has been used to mean three very different things, which they call overprecision, overestimation, and overplacement (the belief that we can perform better than others). It is often wrong to blame overconfidence as the cause of a downfall, since the deciding factors are often outside that person's control; if it had gone well, you would just think they were appropriately confident.
Thinking you will sink a putt, and holding that belief, increases the chance it will happen; a positive mindset is important in sports.
The illusion of control is useful to bear in mind, particularly if you truly have minimal or no control over an activity or outcome. For many activities, though, we have some control, and the important lesson is: we should try not to underestimate it.
A Type I error is a false positive; a Type II error is a false negative.
Remember to take the base rate into account when interpreting a positive result. Often the emphasis in decision research has been on avoiding Type I errors, on not thinking we can do more than we truly can. But if we can take action to influence outcomes, the more serious mistakes may be Type II errors. We should make every effort to influence what we can.
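To see why base rates matter so much, here is a small worked example with illustrative numbers of my own (not from the book), applying Bayes' rule: a test that is right 95% of the time, applied to a condition with a 1% base rate, still produces mostly false positives.

```python
# Bayes' rule with a low base rate (illustrative numbers, not from the book).
base_rate = 0.01            # P(condition)
sensitivity = 0.95          # P(positive | condition)
false_positive_rate = 0.05  # P(positive | no condition)

p_positive = (sensitivity * base_rate
              + false_positive_rate * (1 - base_rate))
p_condition_given_positive = sensitivity * base_rate / p_positive

# Despite the "95% accurate" test, a positive result is a true positive
# only about 16% of the time when the base rate is 1%.
print(f"P(condition | positive) = {p_condition_given_positive:.1%}")
```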
In competitions, be sure to know whether you are in a winner-take-all contest or not. As a rule, the greater the skew toward winner-take-all, the more important it is to outperform rivals, and the more extreme the chances you are best advised to take.
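A minimal Monte Carlo sketch of that rule, with made-up numbers of my own rather than anything from the book: in a winner-take-all contest against twenty rivals, a bold high-variance strategy with a lower average score wins far more often than a safe strategy with a higher average, because only extreme scores finish first.

```python
import random

random.seed(0)

N_RIVALS = 20    # rivals, each scoring Normal(100, 10)
TRIALS = 20_000  # winner-take-all rounds per strategy

def wins_contest(my_mean: float, my_sd: float) -> bool:
    """True if our score beats every rival's score in one round."""
    mine = random.gauss(my_mean, my_sd)
    best_rival = max(random.gauss(100, 10) for _ in range(N_RIVALS))
    return mine > best_rival

for label, mean, sd in [("safe strategy (mean 102, sd 5) ", 102, 5),
                        ("bold strategy (mean 98, sd 30) ", 98, 30)]:
    wins = sum(wins_contest(mean, sd) for _ in range(TRIALS))
    print(f"{label}: win rate {wins / TRIALS:.1%}")
```

The safe strategy almost never produces the extreme score needed to beat the best of twenty rivals, which is why greater skew rewards taking more extreme chances.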
A bias to action is at times a problem; however, perhaps the failure to act is a greater sin than taking action and failing, because action brings at least a possibility of success, whereas inaction brings none.
Aim for what the psychologist Martin Seligman calls learned optimism. The key is to replace a static view, which assumes a single mind-set at all times, with a dynamic view, which allows for the ability to shift between mind-sets.
Before an activity, it's important to be objective about our abilities and about the task at hand. After the activity, whether we have been successful or not, it's once again important to be objective about our performance and to learn from feedback. Yet in the moment of action, a high degree of optimism, even what may seem excessive, is essential for high performance.
The aim is to shift from one mind-set to another, gaining the benefits of deliberate thinking, but then shifting completely to implementation.
Authenticity is perhaps overrated as a quality in a successful leader, as it is too subjective. In an attempt to be objective you can use predictive models; however, be sure not to rely on them if you can influence the outcome.
We must consider not only the dangers of paying too much—a Type I error—but also the consequences of failing to push aggressively—a Type II error.
The real curse is to apply lessons blindly, without understanding how decisions differ. When we can exert control, when we must outperform rivals, when there are vital strategic considerations, the greater real danger is to fail to make a bold move.
Acquisitions always involve uncertainty, and risks are often considerable. There’s no formula to avoid the chance of losses. Wisdom calls for combining clear and detached thinking, properties of the left brain, with the willingness to take bold action – the hallmark of the right stuff.
A decent read with some interesting perspective. A few of the chapters felt random and meandering rather than organized and cohesive (e.g. talking about base rates but then meandering to being persistent and breaking down barriers). I liked the perspective on the difference between type 1 and type 2 errors, taking action quickly, and startup mentality.
Central idea: real-world decisions demand the combination of left brain analysis (eg careful analysis and management of risk) and right stuff ambition (willingness to step into the unknown) (p18)
Results from carefully designed lab experiments have some merit, but they cannot always be transferred to the messy real world. Also, in many experiments, respondents have no influence over the outcome.
p31 Illusions can be healthy, when we can influence outcomes. Unrealistic optimism brings a variety of benefits (Shelley Taylor & Jonathan Brown). Don't underestimate our ability to shape and transform
p43 Decision research has warned us against the illusion of excessive control, but: for activities in which we can influence outcomes, we should be sure not to underestimate control.
p135 Type I error of commission (err on the side of taking action) vs Type II error of omission. Which type of error you'd prefer depends on circumstances (p231): when performance is relative (not absolute) and payoffs are highly skewed (eg winner takes all), it might be better to risk Type I errors (too much action).
p161 Staw & Ross: managers place more importance on consistency than lay-people
I am a big fan of Rosenzweig's prior book The Halo Effect. To me, it's an important criticism of ex post facto management texts that trumpet the virtues of successful companies. Left Brain continues with this theme of questioning what we think we know about decision making.
Rosenzweig has a knack for helping the reader understand areas that many books of this ilk overlook or fail to adequately address. For instance, laboratory experiments might tell us something about individual decision making in isolation, but how many such decisions are made in the business world? Very few.
Rosenzweig challenges the reader to think about the context of real-world decisions, and this is much messier than the aforementioned experiments. Is the reward based upon relative or absolute performance? What is the relationship between the two? Are the payoffs distributed or even known? Is it "winner take all" or is second prize a set of steak knives? Can the decision maker influence performance (as in business) or not (as in the lottery)? When is it wise to adopt an ostensibly "irrational" strategy?
What's more, Rosenzweig's storytelling is fascinating. He weaves his framework into interesting narratives of actual events (business, NASA).
Readers looking for simple five-point checklists will find this book wanting. Good. Success in life and business hinges upon much more than pithy bromides. It's much more nuanced, and this challenging book is a breath of fresh air.
Left Brain, Right Stuff (2014) by Phil Rosenzweig looks at how people make decisions, how psychological tests are not like the real world, and how predictions about things we can influence and things we can't are completely different. A lot of people are familiar with the Dunning-Kruger effect and surveys showing that a huge majority of people think they are a better-than-average driver. It's little remarked that for other questions, such as whether you are good at drawing, a majority of people will answer no. When asked how good they are at complex tasks they don't often perform, a majority of people state that they are worse than average. The book also looks at how confidence in your own judgement and the determination to see things through are important, rather than treating overconfidence as always evil. Rosenzweig looks at how the team that brought the Apollo 13 astronauts home worked: it was about doing things as well as possible, and there was no realistic assessment of the odds involved. The book isn't bad. It makes the point well that what psychologists test in a lab is very different from the way people make decisions in the real world, and that some views found in a lot of popular fiction don't lead to great decisions.
I liked the author's other book, The Halo Effect, probably because it echoed what I felt (someone agreeing with one's own views is not a prudent reason to like them). And once again, I probably enjoyed this book because I'd been having similar thoughts when reading other decision books.
In general, I am a fan of using models and taking in feedback, especially in repeated scenarios, and I encourage my clients to at least attempt to be aware of their emotions and biases. But I'm also conscious that for many of the complex business decisions we face, which involve many players, including competitors, and a high degree of uncertainty and unknowns, it is difficult to use models or standard forms of analysis.
The author points out the difference between decisions where we have little or no control over the post-decision period and those where we are responsible for the execution. He highlights that having involvement in the execution of a decision makes a difference to the decision itself and to how we communicate it. As leaders, we sometimes need to be more optimistic than the likelihood of success suggested by the figures.
Rosenzweig's previous book, The Halo Effect, was a great and well-researched debunking of common business-book wisdom about best practices. It exposed the futility of a perennial project to determine a formula or procedure that businesses can follow to guarantee success.
In this book Rosenzweig takes on another modern sacred cow: the volume of research related to decision-making and cognitive bias. He argues that while there's much of value in the experiments showing that we do not make decisions optimally, that we're innumerate, and that we suffer from well-defined biases, the conditions under which these experiments are run do not bear much resemblance to the conditions real-life executives face when making important strategic decisions. This means executives will at best tune out valuable research insights, since those insights are packaged in a way they correctly find specious, or at worst internalize the wrong lessons.
First there were books born of rigorous research, like Good To Great; then behavioral research brought to prominence figures like Daniel Kahneman and concepts like survivorship bias and overconfidence; and now this book advances that knowledge even further.
The great thing this book points out is that all the findings about inherent human biases from nice, controlled laboratory experiments don't translate to the complex decisions we face in real life. The cognitive-bias research itself was biased.
In the real world, taking an analytical, unemotional, rational approach to problems (the left-brain stuff) can be beneficial, but passion, confidence, and even a little bit of bias (the right stuff) can also have a profound influence on outcomes.
I already knew most of the ideas in the book (a lot are common sense indeed), but Phil Rosenzweig still managed to bring me new insights and advice that I should follow to improve my way of dealing with business and teams. I think the book has brought me valuable information, and hopefully it will have tuned something in my brain (be it left or right) that will add value longer term. The only negative comment is that the ideas in the book are spread too thin. The chapters are very long and dwell too much on the same idea. I found myself skimming some parts and thinking, "OK, I got the idea, no need to spread it so thinly, next please!" I guess you get the meaning. A great book for managers and business people, and my last piece of advice: take notes as you read through it!
From Leader Summaries we recommend reading Cerebro izquierdo y lo que hay que tener (Left Brain, Right Stuff) by Phil Rosenzweig. Readers interested in the following topics will find it practical and useful: management skills, analysis and decision making, and strategy and business models. At the following link you can find the summary of the book: Cómo tomar decisiones acertadas: Cerebro izquierdo y lo que hay que tener.
A follow-up to The Halo Effect. A good take on where biased decision making falls short in complex decisions made under uncertainty. A lot of it was redundant with my other reading, but I sometimes appreciate repetition for affirming facts and considered opinions and philosophies.
The book covering so much ground I was already familiar with made it less impactful for me, though.
Liked the book; it functioned as a good counterweight to what I saw as overly simplistic and Pollyannaish thinking from business school classes.
Great book, but it felt like the title's echo of "left brain/right brain" threw off my expectations. It's a wonderful book about how we make decisions, dispelling some common myths about overconfidence, assuredness, and bias.
I had a hard time putting it down, and saw a lot of the behaviors described in some of the organizations I work with.
Not so much about how to make the right decision; more about showing what the territory around decision-making is like: what to avoid, how to avoid it, and what to look out for. In particular, the areas where we should be making fewer decisions, and the areas where we should be making more.
How to use your analytical left brain and the emotions of the right brain, which the author calls the "right stuff", to avoid overconfidence when making decisions.