Gregg Sapp's Reviews > Moral Tribes: Emotion, Reason, and the Gap Between Us and Them
Moral Tribes: Emotion, Reason, and the Gap Between Us and Them
by Joshua Greene
Darwin saw evolution by natural selection as a quintessentially competitive process. Thus, the existence of cooperation, especially genuine altruism and compassion, perplexed him. Why would self-interested individuals sacrifice their own prerogatives for some greater good, or go out of their way to assist another? Well, it turns out that cooperation is not necessarily done out of love or charity. Most often, those who cooperate with others expect something in return.
As Joshua Greene, a Harvard psychologist, discusses extensively in “Moral Tribes,” the concept of morality arises from an internal balancing act: acting out of pure self-interest while conforming to group expectations (and thus currying group advantages). There are two levels of relationships involved – me versus us, and us versus them. In the first, obviously, we manage our own desires in ways that (mostly) keep us in the good graces of those closest to us, whose aid and support we need. The second, somewhat more complex, rests on the assumption that, from an evolutionary perspective, we are programmed to cooperate, but only with people like ourselves. The moral consequence is that we become intolerant of, even adversarial toward, folks who aren’t members of our tribe. This enmity is exacerbated by variations in belief systems.
How to resolve moral disputes between tribes? The “metamorality” Greene proposes is based upon Mill and Bentham’s philosophy of utilitarianism, which he re-brands as “deep pragmatism.” In a nutshell, moral choices are those that promote the greatest good – or, in this case, the most happiness. To support this basic insight, he draws upon experimental (problem-based), observational (brain scanning), and theoretical (evolutionary-psychological) arguments, each of which has limitations but which collectively provide a broad tapestry of evidence for the efficacy of utilitarian principles. We have hard-wired moral instincts that enhance survival potential. But we also possess the capacity for reason, which, applied pragmatically, can override those automatic settings and enable us to calculate moral decisions by which we all can get along.
Greene concedes that where the “greater good” is the standard of morality, a natural objection arises: the individual is devalued in favor of the collective. However, happiness is something we experience, first, at the personal level, so the greater good in this case is just the sum of many happy individuals. The two are thus not mutually exclusive. His contention that this provides a “common currency” for moral decision-making sounds, then, as if we could almost attach a quantitative value to the assessment of moral dilemmas. That strikes me as problematic because, really – who can measure happiness, much less potential happiness? Beyond that, though, I have trouble envisioning even the most deeply pragmatic social, cultural, or political system that could adjudicate the use of any common currency, let alone avoid the inevitable temptation to haggle over the price of happiness.
Reading Progress
September 19, 2016 – Started Reading
November 10, 2016 – Finished Reading
November 17, 2016 – Shelved