Kindle Notes & Highlights
Read between July 2 and December 5, 2020
Then, with your value premise in place, you would argue that your side best serves the preeminent value.
Then I discovered utilitarianism, the philosophy pioneered by Jeremy Bentham and John Stuart Mill, British philosophers of the eighteenth and nineteenth centuries.* Utilitarianism is a great idea with an awful name. It is, in my opinion, the most underrated and misunderstood idea in all of moral and political philosophy.
Reasoning, as applied to decision making, involves the conscious application of decision rules.
we see dual-process brain design not just in moral judgment but in the choices we make about food, money, and the attitudes we’d like to change. For most of the things that we do, our brains have automatic settings that tell us how to proceed. But we can also use our manual mode to override those automatic settings, provided that we are aware of the opportunity to do so and motivated to take it.
experience comes in three forms, based on three different kinds of trial and error. First, our automatic settings may be shaped by our genes.
Second, our automatic settings may be shaped by cultural learning, through the trials and errors of people whose ideas have influenced us.
Finally, there’s good old personal experience,
getting smart requires three things. First, it requires the acquisition of adaptive instincts—from
Second, getting smart requires a facility with manual mode,
Third, it requires a kind of metacognition…
How can we avert the Tragedy of Commonsense Morality?
If we are to avert the Tragedy of Commonsense Morality, we’re going to have to find our own, unnatural solution: what I’ve called a metamorality,
one philosophical and one psychological. What’s more, these two solutions turn out to be the same solution, a remarkable convergence.
Utilitarianism is a splendid idea, and it is, I believe, the metamorality that we modern herders so desperately need.
I don’t think that happiness is the one true value. Instead, what makes happiness special—and this is Bentham and Mill’s real insight, in my opinion—is that happiness is the common currency of human values.
It seems that Mill’s “higher pleasures” are pleasures derived from activities that build durable and shareable resources. This opens up a more principled utilitarian argument in favor of Mill’s “higher pleasures.”
This is mostly a verbal problem. We can say that happiness is different things for different people, but that’s needlessly confusing. It’s clearer to say that happiness is the same thing for everyone, and that different people are made happy and unhappy by different things. Two kinds of ice cream does it for me, but not for you, and so on.
If our moral instincts reliably guide us toward the greater good, then why bother with moral philosophy, utilitarian or otherwise? Here it’s important not to confuse our two tragedies. Once again, our moral instincts do well with the Tragedy of the Commons (Me vs. Us), but not so well with the Tragedy of Commonsense Morality (Us vs. Them). The utilitarian thing to do, then, is to let our instincts carry us past the moral temptations of everyday life (Me vs. Us) but to engage in explicit utilitarian thinking when we’re figuring out how to live on the new pastures (Us vs. Them).
This would give us a different kind of common currency: facts about which rights exist and their relative priorities and weights.
no one thinks that moral facts are mathematical facts, to be worked out through calculation; rather, the idea is that the moral facts are like mathematical facts, abstract truths that we can work out if we think sufficiently hard, objectively, and carefully.
Are bad things bad because God disapproves of them, or does God disapprove of them because they’re bad?
The fundamental problem with modeling morality on math is that, after centuries of trying, no one has found a serviceable set of moral axioms, ones that (a) are self-evidently true and (b) can be used to derive substantive moral conclusions, conclusions that settle real-world moral disagreements.*** Now, you may think it’s obvious that morality cannot be axiomatized, and that morality is therefore not like math.
The Happiness Button. Next week, you will accidentally trip on an uneven sidewalk and break your kneecap. This will be extremely painful and will significantly reduce your happiness for several months. However, if you press this button, a little bit of magic will make you more attentive as you’re walking along, and you won’t break your kneecap. Will you push? Of course you will. This tells us something rather obvious: If all else is equal,* people prefer being more happy to being less happy. Next question.
If we drop the “if all else is equal” qualifier, we get utilitarianism. We get a complete moral system, a metamorality that can (given enough factual information) resolve any moral disagreement.
Utilitarianism makes sense to everybody because all humans have more or less the same manual-mode machinery. This is why utilitarianism is uniquely suited to serve as our metamorality, and why it gives us an invaluable common currency.
Thus, your general-purpose action planner is, by necessity, a very complex device that thinks not only in terms of consequences but also in terms of the trade-offs involved in choosing one action over another, based on their expected consequences, including side effects.
If there are no power asymmetries, an equal division is the only stable solution. In other words, what we would call a “fair” distribution of resources naturally emerges among people—even people who don’t care about “fairness”—when there is no power imbalance.
This is one way to get utilitarianism’s first essential ingredient, impartiality.
however imperfectly, to produce consequences that are optimal from an impartial perspective, giving equal weight to all people.
Utilitarianism can be summarized in three words: Maximize happiness impartially. The “maximize” part comes from the human manual mode, which is, by nature, a device for maximizing. This, I claim, is universal—standard issue in every healthy human brain.
Happiness—yours and that of others—might not be the only thing that you value intrinsically, as an end in itself, but it’s certainly one of the primary things that you value intrinsically. This, too, I claim, is universal,
The manual mode doesn’t come with a moral philosophy, but it can create one if it’s seeded with two universally accessible moral values: happiness and impartiality. This combination yields a complete moral system
Utilitarianism may not be the moral truth, but it is, I think, the metamorality that we’re looking for.
By accommodation, I mean showing that maximizing happiness does not, in fact, have the apparently absurd implications that it seems to have.