Lara Buchak sets out an original account of the principles that govern rational decision-making in the face of risk. A distinctive feature of these decisions is that individuals are forced to consider how their choices will turn out under various circumstances, and decide how to trade off the possibility that a choice will turn out well against the possibility that it will turn out poorly. The orthodox view is that there is only one acceptable way to do this: rational individuals must maximize expected utility. Buchak's contention, however, is that the orthodox theory (expected utility theory) dictates an overly narrow way in which considerations about risk can play a role in an individual's choices. Combining research from economics and philosophy, she argues for an alternative, more permissive theory of decision-making: one that allows individuals to pay special attention to the worst-case or best-case scenario (among other 'global features' of gambles). This theory, risk-weighted expected utility theory, better captures the preferences of actual decision-makers. Furthermore, it isolates the distinct roles that beliefs, desires, and risk-attitudes play in decision-making. Finally, contra the orthodox view, Buchak argues that decision-makers whose preferences can be captured by risk-weighted expected utility theory are rational. Thus, Risk and Rationality is in many ways a vindication of the ordinary decision-maker--particularly his or her attitude towards risk--from the point of view of even ideal rationality.
The premise of this book is that maximizing expected utility (EU) is not required to be rational. EU maximizers are risk-neutral, except to the extent that risk-aversion can be captured by decreasing marginal utility, and risk-seeking by increasing marginal utility. Many have noticed before that humans in practice do not always maximize EU, and have looked for descriptive theories of how humans actually make choices, usually under the assumption that the choices being described are irrational. Buchak instead lays out a theory she calls risk-weighted expected utility (REU) maximization, which she argues is not merely descriptive, but rational. That is, she argues that agents maximizing REU but not EU can still be considered ideal reasoners, but with different preferences than EU maximizers.
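To make the contrast concrete: Buchak's REU rule orders a gamble's outcomes from worst to best, starts from the utility of the worst outcome, and adds each further utility increment weighted by r(p), where p is the probability of doing at least that well and r is the agent's risk function. Taking r(p) = p recovers ordinary EU. Here is a minimal sketch of that rule; the toy gamble and the weighting r(p) = p² are my own illustrative choices, not examples from the book:

```python
# Sketch of Buchak's risk-weighted expected utility (REU) rule.
# Outcomes are sorted worst-to-best; each utility increment is
# weighted by r(P(doing at least that well)). r(p) = p gives EU.
# The gamble and weighting function below are my own toy choices.

def reu(gamble, r, u=lambda x: x):
    """gamble: list of (probability, outcome) pairs."""
    pairs = sorted(gamble, key=lambda po: u(po[1]))
    total = u(pairs[0][1])   # utility of the worst outcome
    tail = 1.0               # P(at least as good as the current outcome)
    for i in range(1, len(pairs)):
        tail -= pairs[i - 1][0]
        total += r(tail) * (u(pairs[i][1]) - u(pairs[i - 1][1]))
    return total

coin_flip = [(0.5, 0), (0.5, 100)]  # 50/50 between $0 and $100

print(reu(coin_flip, r=lambda p: p))       # 50.0 -- plain EU, linear utility
print(reu(coin_flip, r=lambda p: p ** 2))  # 25.0 -- risk-avoidant REU
```

With r(p) = p² the agent values the coin-flip at only $25 even though utility is linear in money: risk aversion that, within EU, could only be mimicked by curving the utility function.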
Buchak starts by giving examples of types of risk aversion that cannot be captured by decreasing marginal utility. The most common example of this is probably the Allais paradox. She also gave another example I had never heard before and found quite interesting: an EU maximizer cannot simultaneously satisfy both of the following.

R1. Whenever faced with a coin-flip between some particular sum and nothing, be indifferent between taking the coin-flip and receiving one-third of that sum for certain.

R2. Whenever faced with a coin-flip between one sum of money and another which is $30 larger, be indifferent between taking the coin-flip and receiving the lesser prize plus $10 for certain.

As Buchak points out, these seem to capture a similar attitude towards risk aversion: 50/50 chances are only one third as valuable as sure things. However, although there are utility functions that lead to either R1 or R2 individually, no single utility function can lead to both R1 and R2 simultaneously. According to Buchak, both "Allais preferences" (the common preferences underlying the Allais paradox) and holding R1 and R2 simultaneously can be rational. (Buchak gave a couple more examples as well, but I didn't find them as interesting.)
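The R1/R2 clash is easy to check numerically. A minimal sketch, assuming u(0) = 0 and trying the power-law utility u(x) = x^a, a standard family satisfying R1's constraint u(x/3) = u(x)/2 exactly; the functional form and the specific figures are my own illustration, not the book's:

```python
# Sketch: no single utility function lets an EU maximizer satisfy
# both R1 and R2. Assume u(0) = 0 and a power-law utility u(x) = x**a,
# calibrated so that R1 holds exactly (my own choice of functional
# form; the numbers below are illustrative).
import math

a = math.log(2) / math.log(3)  # solves (1/3)**a == 1/2, so u(x/3) == u(x)/2
u = lambda x: x ** a

# R1: EU of a 50/50 flip between x and nothing equals u(x/3). Holds.
for x in (30, 90, 300):
    assert abs(0.5 * u(x) - u(x / 3)) < 1e-9

# R2: EU of a 50/50 flip between y and y + 30 should equal u(y + 10).
def r2_gap(y):
    return 0.5 * (u(y) + u(y + 30)) - u(y + 10)

print(abs(r2_gap(0)) < 1e-9)    # True: R2 holds at y = 0, where it
                                # coincides with R1 at x = 30
print(abs(r2_gap(100)) < 1e-9)  # False: R2 fails at y = 100
```

The constraints pull in different directions: R1 fixes the ratio u(x/3)/u(x) for every x, while R2 demands that u(y + 10) be the exact average of u(y) and u(y + 30) for every y, and per Buchak no single utility function can meet both at once.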
Ultimately Buchak failed to convince me of her thesis, for several reasons.

First, I find the axioms underpinning EU maximization very compelling. For example, the independence axiom in the VNM formulation, which seems to me obviously required for rationality, may be violated by REU maximizers. Buchak gave a weaker axiomatization that is equivalent to REU maximization, but to me the weaker axioms seemed to have bizarre and undesirable extra clauses.

Second, I dislike the intuition that agents may care differently about the *worst* possible outcome (in a given gamble) than about outcomes that are merely bad. For example, consider a choice between two gambles where one is more favorable if event A occurs, the other is more favorable under event B, and the two are identical under event C. Should it matter whether the outcome under event C is better than the best possible outcome in A and B, worse than the worst, or somewhere in between? To me it seems irrelevant. Buchak argues that an agent may rationally give special weight to the worst possible outcome, so that if C contains the best outcome, the agent would prefer whichever gamble most improves the worst possible outcome among A and B. On the other hand, if C contains the worst possible outcome, the worst among A and B would no longer receive as much weight and the agent's preference would flip.

Third, Buchak herself points out that an agent with "Allais preferences" (as well as other REU maximizers) shows a strange response to new information. Specifically, an agent may prefer gamble 1 to gamble 2, but also know that if they learned the truth value of some event E, they would then prefer gamble 2 to gamble 1, *regardless of whether E was true or false*. I think this is pretty damning of REU maximization, whereas Buchak finds it acceptable.
On the last point, Buchak agrees the behavior is unfortunate. She says, "Perhaps it is a somewhat unpalatable upshot that more information is not always better for decision-making, and that we cannot always say which of two informational positions is better even when the information in one is a superset of the information in the other. But I think it is more palatable than the claim that rational agents cannot care at all about the structural properties of the acts they are deciding among, particularly when we notice that the incomparability of informational positions arises from instrumental reasons for preference rather than from the values of the outcomes themselves. So I am willing to accept this upshot. If the reader gets off the boat here, then I have at least made it clear what tradeoffs are involved in choosing between permitting global sensitivity and prohibiting it: the bullet one has to bite in accepting global neutrality as the uniquely rational norm is to reject the reasons for caring about global properties discussed thus far, and the bullet one has to bite in accepting that global sensitivity is permissible is to reject the idea that choices are always better from a position of more information." I think this sums it up pretty well. I agree that Buchak gave better arguments in favor of risk-aversion than I expected, and I'm not completely happy rejecting them. In the end, though, we bit different bullets.
Even though I disagreed with the book's conclusion, I found reading it extremely valuable. My thinking around EU maximization is much clearer now than it was before, and I learned that EU maximization excludes some types of preferences that I didn't previously realize were excluded. I have a better understanding of what risk aversion means and to what extent it can be captured by decreasing marginal utility. And I realized that non-EU-maximizing risk aversion is more intuitively appealing to me, and has stronger arguments in its support, than I thought. Buchak said early on, and I completely agree, "Presumably, clarifying [how agents should take structural properties of acts into account] will be useful even to those who think EU maximization turns out to be the uniquely rational strategy."
Look, I won't pretend that this book is a page-turner.
However, it does do a good job of laying out the framework of risk-weighted expected utility theory, and developing a defense of the rationality of risk aversion.
My rating is not an objective measure of the quality of the book, but rather of my interest in it. The author's thesis seems reasonable; I'm just not particularly interested in taking that thesis through every step of the decision-theoretic framework to compare it to expected utility theory. But I definitely like the insight that a reasonable agent might have so-called global preferences about the setup of a decision problem.