Kindle Notes & Highlights
Read between July 2 and December 5, 2020
This tells us that intuitive judgments come first, and that the doctrine is just an (imperfect) organizing summary of those intuitive judgments.
our brains have a cognitive subsystem, a “module,” that monitors our behavioral plans and sounds an emotional alarm bell when we contemplate harming other people. Second, this alarm system is “myopic,” because it is blind to harmful side effects.
this general ability to dream up ways of achieving distant goals came with a terrible cost. It opened the door to premeditated violence.
this system should respond most strongly to simulating violent actions oneself, as opposed to watching others simulate violence or simulating physically similar, but nonviolent, actions oneself.
what determines whether this alarm system sounds the alarm? We know that our judgments are, at least sometimes, sensitive to the means/side-effect distinction. And yet, our judgments are not always sensitive to the means/side-effect distinction,
The missing ingredient is personal force.
the system that sounds the emotional alarm is supposed to be a relatively simple system,
this action-plan inspector is a relatively simple, “single-channel” system that doesn’t keep track of multiple causal chains.
These two systems interact as follows. When the emotional alarm is silent, the manual mode gets its way (rows 1 and 2). But when the emotional alarm goes off, the manual mode’s reasoning tends to lose (rows 3 and 4).
representing a specific goal-directed action, such as choosing a blue mug, is a fairly basic cognitive ability, an ability that six-month-old infants have. But representing an omission, a failure to do some specific thing, is, for humans, a less basic and more sophisticated ability.
it appears that humans find it much easier to represent what one does rather than what one doesn’t do. And that makes sense, given that in real life, it’s more important to keep track of the relatively few things that people do, compared with the millions of things that people could do but don’t.
The hypothesis, then, is that harmful omissions don’t push our emotional moral buttons in the same way that harmful actions do. We represent actions in a basic motor and sensory way, but omissions are represented more abstractly.
this difference in how we represent actions and omissions has nothing to do with morality; it has to do simply with the more general cognitive constraints placed on our brains.
To say that this automatic alarm system responds to violence probably gets things backward. Rather, I suspect that our conception of violence is defined by this automatic alarm system.
to harms caused using personal force not because personal force matters per se, but because the most basic nasty things that humans can do to one another (hitting, pushing, etc.) involve the direct application of personal force.
But it’s certainly important to distinguish harms that are specifically intended from harms that are unforeseen side effects—that is, accidents. Someone who harms people by accident may be dangerous, but someone who specifically intends to harm people as a means to his ends is really dangerous. Such people may or may not be more dangerous than people who knowingly cause harm as collateral damage.
The law distinguishes between voluntary and involuntary acts. It also judges the evidence of events rather than personality: what X did, not what X is.
It’s a mistake to grant these gizmos veto power in our search for a universal moral philosophy.
But as a real person with limited time, money, and willpower, trying to maintain a physiologically optimal diet is not, in fact, optimal. Instead, the optimal strategy is to eat as well as you can, given your real-world constraints, including your own psychological limitations and including limitations imposed on you as a social being. This is challenging because there’s no magic formula, no bright line between the extremes of perfectionism and unbridled gluttony.
moral monster.
morally abnormal,
the ideal utilitarian punishment system is one in which punishments are convincingly faked rather than actually delivered. In an ideal utilitarian world, convicts would be sent to a happy place where they can’t bother anyone, while the rest of us believe that they’re suffering, the better to keep us on our best behavior.
I am not claiming that utilitarianism is the absolute moral truth. Instead I’m claiming that it’s a good metamorality, a good standard for resolving moral disagreements in the real world. As long as utilitarianism doesn’t endorse things like slavery in the real world, that’s good enough.
A pragmatist needs an explicit and coherent moral philosophy, a second moral compass that provides direction when gut feelings can’t be trusted.