Philosophy discussion
Thought Questions
The Trolley Problem

The fear here is that you don't want to have others claim that you're either 1) "playing god" or 2) quantifying the value of a single life by saying it's worth less than the value of five lives.
1) If you always prioritize eliminating or mitigating damage or harm as a universal standard, then you're not playing god. You're just applying the same expected rules of behavior in all cases. Playing god would mean making exceptions for certain individuals.
2) Even if you pick saving the lives of the five over the life of the one, you're still not quantifying the value of a single life. You're simply saying that five > one, as long as you accept the premise that each life (as an object) has equal value, whether Hitler or scientist.

In the world that we live in, the problem is not to teach people to make distinctions based on finely reasoned ethical principles; or at least, that isn't the main problem. The problem is to convince people to actually apply rather simple and well-understood ethical principles: the Ten Commandments, if you will.
So if people behave honestly in business and treat people decently and don't steal or commit acts of terrorism, then hey, we're better off than we are now.
Aren't we?

Goes something like this: choose! A million people in China could die, but your loved ones here in America would be saved. Or, give the word that saves the 1m people, and your family will be forfeit instead. It is your decision. Do you let the 1m strangers die so that your family can live? After all, what's another million Chinese? They're going to die anyway in a flood or something.
There's also the 'lady and the tiger' dilemma.
The above poster is correct. These problems are fruitless. No victory is possible. No path leads to safety. The only way out is to refuse to participate in the bargain at all. Deny all the terms offered you. Dissolve the setting of the problem. Allow it to implode.

Discussions focused on the relative usefulness of individuals to society, and people chose in accordance with their personal interpretation. I remember being the odd one out for suggesting, much to a unified chorus of indignation, that the only "fair" method would be to draw lots.
I still believe this, as I regard all possible criteria that can in theory be consulted to solve this dilemma as "pseudo-criteria" defined by each person's individual interpretation. There are no objective criteria that help us decide things like this. Either outcome in these case studies is a tragedy, and hence specific action can only be directed by reference to an arbitrary decision process.
let's just make sure the dice aren't loaded...

They used straws in that classic B-movie, 'Five Came Back', FYI.

But this becomes very hard in real life because of the uncertainty about outcomes. First, we never have certainty that it's five lives vs. one. Plus, we don't know what will be lost by the death of the one vs the five- which is really what matters if we are trying to mitigate.
Imagine one of the five is the next Adolf Hitler. What then?
Or the one is the next Jonas Salk?
In real life, such considerations would be critical to a decision that optimizes mitigation for the larger group that we are seeking to protect. In real life, we would not limit the calculus to the six individuals whose lives hang in the balance, but we would consider the welfare of the community. It is the long term welfare of the community that drives us to mitigate in the first place.

So they can only decide based on what he actually is: a child.
Your other remarks: good.




ha ha ha "I hate kids"... :-) I totally agree with your sentiment, but of course the evil professors at B-school put that one into the mix deliberately, to encourage a natural emotive response. Even though it feels "righter" (and it does), it's still strictly speaking bogus. If the kid goes on to live a good life, cool. But what if she allows herself to be radicalised and throws a bomb into a department store? Then the same standards that made us pick the kid in the first place condemn our choice as wrong. Bummer.

But you can't do that, Feliks. There are no criteria that you can objectively measure. What if the kid is A. Hitler? What if he's not? What if the old woman is Mother Teresa? What if she has a brilliant mind and will, if saved, find a cure for cancer? What if the stockbroker is the most astonishing altruist in the world and will, if he lives, channel his $200m bonus into good causes? What if the doctor is a dentist - that is bad, just to be clear :-)


To answer your objection:
Mark wrote: "But you can't do that, Feliks. There are no criteria that you can objectively measure. What if the kid is A. Hitler? What if he's not?..."
That is why you *have to* do as I suggested. You're correct that you can't measure the magnitude of good or evil in a closed set. Just so. Therefore you have to dismiss it as conjecture--and remain white-knuckled to exactly the most pragmatic route.
We ought not let worry about 'the ends' skew what we know are the best 'means to an end' we typically follow. Like Heinlein said: 'Never let your morals prevent you from doing what's right'
p.s. if it comforts you any, Karl Marx was dead-set against 'good of the majority' scenarios like this even though his philosophy countenanced it in essentially all other respects. He objected to *any* choice being made for others, by the people at the top of the class structure. Who are they to decide? he wanted to know

Chris,
in another thread we discussed utilitarianism. You said you need to consider all possible consequences of an action. Of course it is impossible to do that but let us just consider one possible scenario here: The 5 dudes you save are all mass-murderers. The 1 dude sacrificed is just some normal guy. Clearly, in this scenario you'd lower the total "goodness" or "happiness" (or whatever you wish to call this elusive quality) in the world. And if at least one scenario exists that would bring this about, it is no longer clear whether selective killing enhances the common weal...


Without the action/inaction distinction, you are committing a wrong by keeping any of your disposable income, or by spending anything on amusements. Surely you could give that money to a cause which would help feed starving people. In fact, the standard you are holding up means that almost everyone is committing horribly immoral inactions all of the time.

I disagree with the opinion that inaction is morally neutral. Let's imagine that you are walking down the street, and that I see you walking. As I watch a figure steps out behind you and starts to level a gun at you. I could save your life by yelling.
In that situation, am I not morally compelled to act?

As for the trolley, attempting to "mitigate" assumes that you can fairly weigh dissimilar values against each other. It seems to me that the only fair valuation is no valuation. That would make the value of one life zero. It therefore does not matter how many people are going to die, you are simply wrong either way.

I agree that there might be systems that show an ethical difference between the two situations, but I don't think the system you have drawn hits the mark.



Let me ask two slightly different questions. Suppose instead of five people on the track, there was only one. Do you think in that case that it would be OK to flip a coin? Second question, if the choice were to let the five people die or throw yourself in front of the train to derail it and save the five, do you think it would be immoral not to throw yourself in front?
And one final question, suppose the five were parents and grandparents of the one. Would the wishes of any of the six of them matter, since the outcomes would still be the same?

Chris, would you be able to do that for us? Let us assume there is one parent involved, and that's it. I must admit, I would love to see felicific calculus in action.
P.S. - Jeremy Bentham of course invented this concept, and you have used it consistently in your contributions as if it were an objective tool that you are able to employ. Jeremy said he would derive an answer from his algorithm that will denote an action's "good tendency" or "evil tendency". His explanation implies that "good" or "evil" will become apparent from the sign of the number resulting from the calculation (positive for good, negative for evil), and it is clear from his instructions that he believed it possible to assign exact values to the components of his algorithm.
You said it was "fairly easy" to work out the felicific calculus. Would you demonstrate this? And could you share how you got to the number that is the result of your calculations? I have long wondered how to apply this form of calculus, and have never seen an example of it.
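For fun, here is what a toy version might look like in code. To be clear, everything below is invented for illustration: Bentham never wrote his algorithm down, so the scoring rule and all the numbers are my own assumptions. Only the names of the seven "circumstances" (intensity, duration, certainty, propinquity, fecundity, purity, extent) are his; fecundity and purity are folded away here for brevity:

```python
# Purely illustrative sketch of a "felicific calculus". Bentham never
# published his algorithm; the circumstance names are his, but the
# scoring scheme and every number below are invented assumptions.

def hedon_score(intensity, duration, certainty, propinquity):
    # Weight a pleasure (or pain, if intensity is negative) by how
    # certain it is, and discount it by how far off it is (propinquity
    # here = delay, so a larger delay means a smaller score).
    return intensity * duration * certainty / (1.0 + propinquity)

def felicific_calculus(effects, extent):
    # Sum the scores over everyone affected (extent = number of
    # persons). Per Bentham, a positive total would denote a "good
    # tendency" and a negative total an "evil tendency".
    return extent * sum(hedon_score(*e) for e in effects)

# Two fictional actions, scored with made-up numbers:
act_a = felicific_calculus([(+5, 2, 0.9, 0), (-3, 1, 0.5, 1)], extent=10)
act_b = felicific_calculus([(+1, 1, 1.0, 0)], extent=100)

print(act_a, act_b)  # the sign is what is supposed to mark good vs. evil
```

Of course, the sketch only restates the objection: the function runs, but nothing tells us where the inputs come from, which is exactly the quantification problem raised above.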

One reason is that a specific action will cause a specific pleasure or pain, rather than a hedonic response drawn from a distribution over all possible actions that could act on a person.
The other reason is similar. You cannot apply statistical averages to specific individuals. For example, an actuarial table might tell me that the probability of death for a male aged 50 before his 51st birthday is 0.47888%. But this probability will not apply to Peter, who is 50. Peter's death is binary. It either happens or it doesn't. Statistics is a science of large numbers (large enough to give you a sufficiently homogeneous sample).
So I applaud your creativity, and your desire to cut through the elusiveness of ethics with quantitative methods, but I am afraid this is not possible, Chris. And it is not possible in principle, not because we haven't got enough data or cannot gather them. Bentham would have agreed with me, I think. There is a reason that he never even wrote down his algorithm. And it is because he knew full well he had no leg to stand on. If you haven't read the passage yet where he talks about this, do check it out - you will find it most wonderfully vague... :-)
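The point about Peter can be made concrete with a small simulation. The 0.47888% figure is the one quoted above; the pool size and the random seed are my own assumptions, purely for illustration:

```python
# A sketch of the point above: a mortality rate describes a pool, not a
# person. The 0.47888% figure is from the actuarial-table example; the
# pool size is an invented assumption.
import random

random.seed(42)
q = 0.0047888          # probability of death within the year, from the table
pool = 1_000_000       # a large, homogeneous group of 50-year-old males

deaths = sum(random.random() < q for _ in range(pool))

# Across the whole pool, the observed rate converges on the table's rate...
print(deaths / pool)   # close to 0.0047888

# ...but for any single member the outcome is binary: 0 or 1, never 0.0047888.
peter_dies = random.random() < q
print(peter_dies)      # True or False, nothing in between
```

The first number is meaningful precisely because the pool is large; the second is just a coin that is either heads or tails, which is the sense in which the table's probability "does not apply to Peter".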

Sadly, without omniscience utilitarianism is little more than an exercise in procrastination. I can demonstrate this statement simply by going back to the basic trolley problem. Do nothing, five people die; throw the switch, one person dies. You only know the counts. You have one second to decide...
If you stop to ask a question, five people are dead.

And that is the problem with utilitarianism, it is morality in retrospect. To make a moral decision you must predict the outcome of that decision. However, the universe will always side with the Law of Unintended Consequences, so you are constantly evaluating past decisions against present information, without clear knowledge of future information. Today it was moral. Tomorrow it will be immoral. Next week....


No Chris. It does not. The life office does not only sell one policy. It sells a batch of policies which is large enough for actual mortality experience in the life fund to match the statistical assumptions in the pricing basis.
On a lighthearted note - I am amazed that we were able to move from ethics to actuarial principles in this thread. That must be a world first!

Sadly, without omniscience utilitarianism is little more than an exercise in procrastinat..."
Agreed on Bentham. He makes a lot more sense now that he's in the box. But they replaced his actual head with a replica. You can still see his head, but you need to be granted special permission by the bursar of UCL.

Just so. Even Bentham appreciated this issue. So he coined the terms fecundity and purity, which sort of capture these n-th order effects. I must admit I am astounded that people can talk about "felicific calculus" with a straight face.
But it is certainly amusing to think about what Jeremy's algo might have looked like, had he been bold enough to write it down. There is also propinquity, the delay to the onset of pleasure. I wonder how "pleasure" or "happiness" responds to the passage of time. Can we model it using thermodynamics (heat dispersion)? Is there a time value to happiness - in other words, is there happiness decay?
And of course, is happiness constant in the universe? If it is a zero-sum game, can collective happiness only increase at the local level (just as entropy can only decrease at the local level)? That would be a bummer for utilitarianism.


I was, of course, entirely tongue-in-cheek. My incredulity is stirred by the fact that Bentham apparently genuinely believed that happiness is a calculable quantity. My term "happiness decay" is a pun on "time decay" used in option pricing, and my quip on heat dispersion plays on the similarity between dynamics in financial options and heat transfer.
Apologies, I am a lighthearted guy... All I meant to highlight is that surely we can agree that happiness (or ethics) are not tangible quantities that lend themselves to the application of quantitative methods? As an illustrative concept, utilitarianism has value, it seems to me, but attempting to calculate values and to model the "behaviour" of these concepts very much feels like re-opening a discussion of how many fairies can dance on the tip of a pin. It seems to me, at least...

Sadly, without omniscience utilitarianism is little more than an exercise in p..."
Yep, I know. Last I heard his original head is kept in a bowl, between his feet. That way you don't see it through the window. "Present, but not voting." I wonder how they decide who has to sit next to him at the meetings.

Preventing Japanese forces from utilizing the oil resources of French Indochina by training the locals in guerrilla warfare, and then the Vietnam War.
Or
Providing weapons to Afghans to fight the pinko hordes, and then... well, we're still living through that.
The LoUC is simply a grand statement of the obvious: "You aren't as smart as you think you are." Isn't that also the big lesson of history?
IMO saving the five people is the obvious choice in the trolley problem - as yes, there can be murder by inaction.
It is different from the organ problem in that you are not forcing the single victim into the situation; the six persons on the tracks are already there, and you are just to guide (by action or inaction) the trolley to one of the tracks. You won't force the trolley victim to die - he may still run away - but in the organ problem you will be forcing him.
The 'organ problem' equivalent for the trolley problem would be one where the sixth person was standing outside, near you, and your option was to throw him onto the track. Throwing an innocent into a situation is not okay, but where innocents are already in there and you are forced to choose, you surely should choose the lesser of two evils.
In the organ problem the sixth person is not one of the original targets, but in the trolley problem he is one of two targets by the fact of being there on the tracks.
The pilots of planes about to crash can turn towards villages, but this doesn't mean they are killing villagers, just that they are turning the unavoidable destruction towards a less crowded area. The trolley problem is the same. The sixth person just happened to be in that place. In the organ problem, you are actually 'creating' the village.
Also, in the trolley problem the action of saving the 5 is all you do; the death of the 6th is a side effect that follows it. While the murder of the healthy person in the organ problem precedes the saving of souls.
In the cave problem, I will save myself :-) but if I wasn't in there and the option was among others, I would choose the kid. Not that we have a moral right to choose between lives, but we have to, or we can't save any.
The kid is, after all, a kid; we (as part of society) owe it its safety - even if it grows up to be a Hitler. Innocent till proven guilty. Innocent till the crime is committed. Choosing, say, the doctor may pay more, but here we (again as society) are first to do our duty rather than start looking for gains; unless of course the woman is beautiful, in which case .....
So even in the trolley problem, I wouldn't change tracks if the 6th person was a kid while the others were grown-ups.
It sounded much clearer in my mind, but probably my comment was a little too messy to be understood:
1. Try looking at it this way: when someone is in a mine and is standing on the tracks, he is running a potential risk of being run over by a trolley anyway. Such accidents may occur anyway.
And the person at the button is only doing his duty in switching the trolley to a less deadly track. It might be a part of his job to take care as to where the trolley is headed.
On the other hand, I don't think any of us expect to be running a risk of being stabbed by the doctor when we go to visit him for tests. Nowhere is a doctor given a right to kill a patient without his consent, for whatever reasons. It is not a part of a doctor's duty to see to it that one of his patients is poisoned or stabbed... or whatever other method he might be using.
2. I don't think I ever said that, since I agree it would be irrelevant. Where I pointed out that it doesn't matter what the kid grows up to be, I was pointing out that future actions are irrelevant.
A. I'm not supposing that. What I meant was that you are not the one who denied him the choice. Again, mea culpa as far as the misunderstanding is concerned.
In the organ problem, you are taking a patient's right to choose away from the situation.
B. Here, I was changing the trolley situation to find an equivalent for the doctor problem.
In the new trolley problem that I assumed, there was no switch. The only way to save the 5 people was to throw a person standing next to you (presumed to be fat) and thus block the trolley before it reaches the 5 people. It is a popular alternative scenario. I thought it would be wrong to throw him.
Chris wrote: "Sidharth wrote: Nowhere is doctor given a right to kill a patient without his choice for whatevver reasons. It is not part of a doctor's duty ...
I think this confuses the issue. It's a fact abou..."
Agreed

Ok let me ask this, Chris:
I
1/ Bentham's "felicific calculus" requires the quantification of intangible qualities - FACT.
2/ You said in an earlier post that you cannot perform this quantification. I certainly agree. If Bentham was ahead of his time, it is a great shame that he forgot to tell the world how to quantify the components of his algo, or even to write the algo down himself. => Nobody can quantify these qualities. FACT.
3/ So it follows, that at the moment at least, felicific calculus is not a useful tool. Since nobody can quantify anything (this is not contentious - we all agree on this part), the tool cannot be used to decide the net impact of an action on the universal weal (net positive = ethical, net negative = non-ethical).
Question therefore: On what basis do you state that utilitarian analysis is useful (= applicable now, in practice) in deciding the moral fibre of an action?
II
Utilitarian methods work on the assumption that every action has a net positive/net negative impact on the common weal / total happiness (whatever the appropriate expression is).
But this can only be true, in principle, if the total "amount" of happiness in the world is not constant. If it were constant, it would follow that any incremental increase in happiness must be offset by an equal incremental increase in unhappiness elsewhere, so that the total stays constant.
Question: On what basis would you be able to argue cogently that the amount of happiness / unhappiness is not constant, that we are not dealing with a closed system? It seems to me that every supporter of utilitarianism must have an answer to this question - otherwise the whole philosophy hangs in the air (it seems to me).
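To put my worry in (II) as bluntly as possible, here is a toy sketch of the closed-system case. The three-person world and all the amounts are invented purely for illustration:

```python
# Toy sketch of the closed-system objection: if happiness is conserved,
# every action is merely a transfer, and its "net impact" - the very
# quantity utilitarianism needs - is identically zero.
# The population and amounts are invented for illustration.

happiness = {"A": 3.0, "B": 3.0, "C": 3.0}
total_before = sum(happiness.values())

def transfer(state, src, dst, amount):
    # In a zero-sum world, a gain for dst is an equal loss for src.
    state[src] -= amount
    state[dst] += amount

transfer(happiness, "A", "B", 2.0)   # some action that "increases" B's happiness

total_after = sum(happiness.values())
print(total_after - total_before)    # 0.0 -- no net impact to maximise
```

So unless the utilitarian can show the system is open (that happiness can be created, not just moved), every action nets to zero and the whole ranking of actions collapses.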

Heisenberg
Uncertainty Principle dictates that observing the experiment changes the variables. Therefore in gathering the data necessary for making the calculation you are altering the data. These alterations can be viewed as actions with moral weight.
Gödel
Incompleteness Theorem tells us that we cannot know everything. Therefore you can only take the calculation so far before it disappears into the shadows.


I believe that to be the case, yes. If U (my abbreviation for the longer words) only works if happiness is not constant, then I would think the Us need to show (to themselves, in the first place) that this premise holds. Otherwise the whole philosophy hangs in the air.
Non-Us do not really care - they hold that happiness is not calculable anyway, so - as Duffy pointed out - the whole idea of constant/non-constant is nonsense to the non-U. But it cannot be nonsense to the U.
And of course, I hear your thoughts on pet-euthanasia. They are, interestingly, far more fuzzy than a rigid application of "felicific calculus" would mandate. Of course they are. They must be! Happiness cannot be calculated, but it is still intuitive to judge an action in terms of whether it brings about more or less happiness.
But how to weigh this action will unfortunately always retain an element of subjectivity. Felicific calculus is an attempt to eliminate this subjectivity, but I think you would agree, Chris, that it can't be done (you are not calculating anything when you perform the difficult task of pet-euthanasia; you are weighing potential outcomes against a subjective benchmark).
I believe it may be dangerous to have too much (or any) faith in the calculability of intangible qualities (like happiness). An attempt to decide the trolley problem on a simple calculation of "5 times happiness > 1 time unhappiness" strikes me as deliberately simplistic.
"Simplistic" because it pretends that happiness is something that can be measured objectively by external parties and quantified. And "deliberately" because it feels like an attempt to escape the discomfort we feel when we realise that it is very hard - probably impossible - to find objective standards in ethics.

It makes no sense to those who realise this. But it must make sense to those who don't.
Chris wrote: "J: It isn’t enough to quote Heisenberg and Godel, you have to demonstrate that their principles apply to the application of the felicific calculus - something for which neither principle was devis..."
Now I don't know how to answer that, but just out of curiosity: will you be among the ones who walk away from Omelas?
Chris wrote: "Sidharth: Probably, yes. And I'm delighted to find another LeGuin fan.
Some more personal stuff:
My brother-in-law died in May of a progressive lung disease. Towards the end he was bedridden, so..."
I'm sorry for your loss. I have seen a lot of my own family members suffering uselessly - both people and pets. I agree with whatever you have said.
The assumption that life is worth living no matter how much one suffers is often behind people refusing to allow euthanasia (or its equivalents) for others. Although it might be going way too far from the observation, there is a scene from the Bollywood movie 'Guzaarish' (meaning 'wish') that I feel like sharing:
It is about an ex-magician who meets with an accident and has been paralysed from the neck down for several years. Finally he requests an amendment in the law to make euthanasia legal, so that he can kill himself.
In one scene, when he is asked if he wishes to say something before the verdict is given, he says he wishes to show a magic trick to the court. When it is allowed, his assistant brings in a box. The magician asks the lawyer for the state to volunteer, and the judge orders the lawyer to do so. The magician requests the lawyer to sit in the box, and his assistant locks the door upon him. A few moments pass, as people expect the magician to do something. He just sits calmly, till the lawyer starts screaming from inside the box. The magician starts talking about some random subject (the weather), thus further frightening the lawyer.
After a couple of minutes, the magician signals to his assistant to let the lawyer out. "Are you stupid?" the lawyer says, breathing heavily, after coming out of the box. "It was so dark inside, I couldn't see anything, I couldn't breathe..." The magician replies calmly, "This is what my life has been like for years. Two minutes and you wanted out."

Absolutely. Couldn't agree more with the spirit of your statement. We should indeed think about "the value of a being's life to that being". This is very well put, and is, I think, the question. I was never happy with attempts to reduce ethics to pleasure, happiness, or some other sentiment - I think you put your finger on the essence here.
And I believe this is precisely why the trolley problem cannot be decided in a "simple" way. On what basis can an outsider decide that the value of one person's life is less than the combined value of the lives of others?
The solution to the trolley problem - it seems to me - lies in the realisation that it offers an impossible choice. I cannot make a decision on the value of somebody's life on behalf of that person (without involving him/her in this decision).
So my solution to the trolley problem would be to say that (1) somebody needs to derail the trolley, and (2) I cannot interfere by choosing one life over five. In practice, I would presumably scramble to do everything possible to bring about (1), but I would not throw the switch. That is slightly different from saying "I would stand idly by". In my interpretation of the ethics involved, I would have done everything possible to save the individuals (but "everything possible" does not include throwing the switch).
Whether it includes sacrificing myself, however, is another question. And unfortunately this question leads right back to U. Because, if one of the endangered people was my girlfriend, I would (like to think that I would be able to) throw myself in front of the trolley. But if the people involved were unknown to me, I would not.
On a personal note - I am very sorry to hear about your brother-in-law. What a terribly difficult choice to make. Many thanks for sharing that, Chris.

On my reading list now - I used to love Ursula LeGuin!

First we must differentiate between happiness and contentment. Both are positive emotional states, characterized by a particular neurochemistry and expressed through smiling, laughing and the like. However, I argue that happiness is an emotional state which can be induced, while contentment must be accomplished or achieved. For this reason I categorize contentment as a more valuable state than happiness.
If contentment is more valuable than happiness, and if the morality of a decision is measured by the value of its outcome, then decisions which encourage contentment are more moral than decisions that encourage happiness.
Contentment is accomplished through action and achievement. Therefore a decision which allows individuals to struggle and possibly achieve is more likely to lead to contentment. Such a decision is a recognition of the moral agency of others.
If the recognition of moral agency is a higher moral good, then the violation of that same agency is a worse taboo.
In this way, isn't choosing to end the moral agency of the one worse than allowing the decisions of the five to come to their culmination?