Philosophy discussion

Thought Questions > The Trolley Problem


message 1: by Cliff (new)

Cliff Hays (cliffhays) | 12 comments In 1967 the English philosopher Philippa Foot proposed an ethical problem involving a runaway trolley which – if not interfered with – would barrel down the left fork of a track (where five people would be killed), but if interfered with, would go down the right fork (where one person would be killed). The problem falls on the bystander, who must decide whether or not to interfere.

Click here to read a philosophical essay exploring this dilemma.


message 2: by Mike (new)

Mike (mcg1) | 7 comments If you've done your due diligence and found that there are only two solutions (five deaths vs. one death), your goal is to mitigate the damage. The possibility that some action of yours might create other, associated damage can't scare you into inaction. Freezing at that possibility in every instance would make all action impossible.

The fear here is that you don't want to have others claim that you're either 1) "playing god" or 2) quantifying the value of a single life by saying it's worth less than the value of five lives.

1) If you always prioritize eliminating or mitigating damage or harm as a universal standard, then you're not playing god. You're just applying the same expected rules of behavior in all cases. Playing god would mean making exceptions for certain individuals.

2) Even if you pick saving the lives of the five over the life of the one, you're still not quantifying the value of a single life. You're simply saying that five > one, as long as you accept the premise that each life (as an object) has equal value, whether Hitler or scientist.


message 3: by Charles (new)

Charles Rouse | 2 comments I've never liked the examination of "lifeboat problems" in ethical discussions. This is one of those. How many times does this come up in real life? Has it ever? I suspect it's purely hypothetical and is constructed to look at certain kinds of ethical concerns.
In the world that we live in, the problem is not to teach people to make distinctions based on finely reasoned ethical principles, or at least that isn't the main problem. The problem is to convince people to actually apply rather simple and well understood ethical principles, the Ten Commandments, if you will.
So if people behave honestly in business and treat people decently and don't steal or commit acts of terrorism, then hey, we're better off than we are now.
Aren't we?


message 4: by Feliks (last edited Nov 06, 2014 09:31PM) (new)

Feliks (dzerzhinsky) | 159 comments Is this an offshoot of the 'million people in China' problem?

Goes something like this: choose! A million people in China could die, but your loved ones here in America would be saved. Or, give the word that saves the 1m people, and your family will be forfeit instead. It is your decision. Do you let the 1m strangers die so that your family can live? After all, what's another million Chinese? They're going to die anyway in a flood or something.

There's also the 'lady and the tiger' dilemma.

The above poster is correct. These problems are fruitless. No victory is possible. No path leads to safety. The only way out is to refuse to participate in the bargain at all. Deny all the terms offered you. Dissolve the setting of the problem. Allow it to implode.


message 5: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments When I was at business school, one of the many group exercises we had to participate in presented yet another version of this dilemma. You are in a cave, one of a group of people. The group includes a young boy, an old woman, a Nobel Prize-winning scientist, a doctor, and a stockbroker. You are all in a spot of bother, as you are trapped and only one person can be saved. You need to decide which one you will choose to climb to safety, while the rest of you perish.

Discussions focussed on the relative uses of individuals to society and people chose in accordance with their personal interpretation. I remember being the odd one out for suggesting, much to a unified chorus of indignation, that the only "fair" method would be to draw lots.

I still believe this, as I regard all possible criteria that can in theory be consulted to solve this dilemma as "pseudo-criteria" defined by each person's individual interpretation. There are no objective criteria that help us decide things like this. Either outcome in these case studies is a tragedy, and hence specific action can only be directed by reference to an arbitrary decision process.

let's just make sure the dice aren't loaded...


message 6: by Feliks (last edited Nov 13, 2014 11:27AM) (new)

Feliks (dzerzhinsky) | 159 comments I like the straws method except --(and this is probably what drew fire) --except for the inclusion of the kid. Sure, I hate kids but I think it's not cool to remove someone's chance to live. Doctors, stockbrokers, scientists, and the elderly (in that cave group) all had a chance to lead full, reasonably long lives. Not the brat, though. Preserving the rights of the young is still a hallmark of a noble society. I'd vote to save him. Even despite the subjectivity you call out in all such proceedings--it simply 'feels righter' than any other way.

They used straws in that classic B-movie, 'Five Came Back', FYI.


message 7: by Brad (new)

Brad Lyerla Foot's hypothetical is easy for most of us. Mitigate the dying. Works for the million Chinese problem too.

But this becomes very hard in real life because of the uncertainty about outcomes. First, we never have certainty that it's five lives vs. one. Plus, we don't know what will be lost by the death of the one vs the five- which is really what matters if we are trying to mitigate.

Imagine one of the five is the next Adolf Hitler. What then?

Or the one is the next Jonas Salk?

In real life, such considerations would be critical to a decision that optimizes mitigation for the larger group that we are seeking to protect. In real life, we would not limit the calculus to the six individuals whose lives hang in the balance, but we would consider the welfare of the community. It is the long term welfare of the community that drives us to mitigate in the first place.


message 8: by Feliks (last edited Nov 13, 2014 12:04PM) (new)

Feliks (dzerzhinsky) | 159 comments But five people in a cave can't be concerned whether the child among them is potentially another Hitler or Stalin. He might just as well grow up to be an American SEAL who assassinates another potential Hitler or potential Stalin, thus preventing another genocide.

So they can only decide based on what he actually is: a child.

Your other remarks: good.


message 9: by Brad (new)

Brad Lyerla I don't think the child is a special case. I think the child illustrates the problem of uncertainty that I was thinking about. The child makes mitigation very uncertain.


message 10: by thewanderingjew (new)

thewanderingjew Aside from the old woman, whose achievement is not provided, all except the young boy have lived to achieve their goals. The young boy has the most to offer and the most to gain from the opportunity to live. Of course, he might grow up to be a Jeffrey Dahmer, but he is the only unknown quantity that needs to live to be identified.


message 11: by Feliks (last edited Nov 13, 2014 07:41PM) (new)

Feliks (dzerzhinsky) | 159 comments Yep. Agreed. Whittling down the question to 'what is the largest fairness' is likely not going to please everyone but ya still gotta do it. Some metrics can't be swept aside.


message 12: by Duffy (new)

Duffy Pratt | 148 comments Wouldn't most people pick themselves?


message 13: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Feliks wrote: "I like the straws method except --(and this is probably what drew fire) --except for the inclusion of the kid. Sure, I hate kids but I think its not cool to remove someone's chance to live. Doctors..."

ha ha ha "I hate kids"... :-) I totally agree with your sentiment but of course the evil professors at B-school put that one into the mix deliberately, to encourage a natural emotive response. Even though it feels "righter" (and it does), it's still strictly speaking bogus. If the kid goes on to live a good life, cool. But what if she lets herself be radicalised and throws a bomb into a department store? Then the same standards that made us pick the kid in the first place condemn our choice as wrong. Bummer.


message 14: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Feliks wrote: "Yep. Agreed. Whittling down the question to 'what is the largest fairness' is likely not going to please everyone but ya still gotta do it. Some metrics can't be swept aside."

But you cant do that, Feliks. There are no criteria that you can objectively measure. What if the kid is A. Hitler? What if he's not? What if the old woman is Mother Theresa? What if she has a brilliant mind and will, if saved, find a cure for cancer? What if the stockbroker is the most astonishing altruist in the world and will, if he lives, channel his $200m bonus into good causes? What if the doctor is a dentist - that is bad, just to be clear :-)


message 15: by Duffy (new)

Duffy Pratt | 148 comments The ethical problem is this, at a minimum. If you do nothing, then you have done no wrong. If you interfere, then you have murdered one person. Is it OK to murder someone in order to save the lives of five? You saw the problem when it was rephrased as the doctors killing one person in need of tests, to harvest his organs to save five people in need of organ transplants. It's the same problem, almost.


message 16: by Feliks (last edited Sep 22, 2015 11:15PM) (new)

Feliks (dzerzhinsky) | 159 comments I like that you got a chuckle out of my grouchiness :)

To answer your objection:
Mark wrote: "But you cant do that, Feliks. There are no criteria that you can objectively measure. What if the kid is A. Hitler? What if he's not?..."

That is why you *have to* do as I suggested. You're correct that you can't measure the magnitude of good or evil in a closed set. Just so. Therefore you have to dismiss it as conjecture--and cling, white-knuckled, to the most pragmatic route.

We ought not let worry about 'the ends' skew what we know are the best 'means to an end' we typically follow. Like Heinlein said: 'Never let your morals prevent you from doing what's right'

p.s. if it comforts you any, Karl Marx was dead-set against 'good of the majority' scenarios like this even though his philosophy countenanced it in essentially all other respects. He objected to *any* choice being made for others, by the people at the top of the class structure. Who are they to decide? he wanted to know


message 17: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Chris wrote: "If you interfere, you save five lives at the expense of one. If you don't, you save one life at the expense of five. The first is better than the second (other things being equal), so you should in..."

Chris,
in another thread we discussed utilitarianism. You said you need to consider all possible consequences of an action. Of course it is impossible to do that but let us just consider one possible scenario here: The 5 dudes you save are all mass-murderers. The 1 dude sacrificed is just some normal guy. Clearly, in this scenario you'd lower the total "goodness" or "happiness" (or whatever you wish to call this elusive quality) in the world. And if at least one scenario exists that would bring this about, it is no longer clear whether selective killing enhances the common weal...


message 18: by Duffy (new)

Duffy Pratt | 148 comments If you do nothing, then you have not committed an immoral act. You will have done wrong only if you were under a duty to act. That is how the law views it. There is a difference between action and inaction.


message 19: by Duffy (new)

Duffy Pratt | 148 comments So you are saying that a doctor should kill a person who has come to see him for tests, if it means that he can save five people in need of organ transplants?

Without the action/inaction distinction, you are committing a wrong by keeping any of your disposable income, or spending anything on amusements. Surely you could give that money to a cause which would help feed starving people. In fact, the standard you are holding up means that almost everyone is committing horribly immoral inactions all of the time.


message 20: by J. (new)

J. Gowin | 122 comments Duffy,

I disagree with the opinion that inaction is morally neutral. Let's imagine that you are walking down the street, and that I see you walking. As I watch, a figure steps out behind you and starts to level a gun at you. I could save your life by yelling.

In that situation, am I not morally compelled to act?


message 21: by J. (new)

J. Gowin | 122 comments The cave, while similar to the trolley, differs in that it could be equitably resolved through triage. There is a limited resource, which would be most effectively spent on the individual with the highest chance of survival.

As for the trolley, attempting to "mitigate" assumes that you can fairly weigh dissimilar values against each other. It seems to me that the only fair valuation is no valuation. That would make the value of one life zero. It therefore does not matter how many people are going to die, you are simply wrong either way.


message 22: by Duffy (new)

Duffy Pratt | 148 comments You are misstating your case. If you do not throw the switch, you will not have infringed on the right of the people on the track. They have a perfect right to live out their lives until the trolley squashes them. Similarly, for the people who need organ transplants, their illness is tantamount to an oncoming trolley. As a doctor, you could choose to redirect that trolley by killing the one person and saving the five.

I agree that there might be systems that show an ethical difference between the two situations, but I don't think the system you have drawn hits the mark.


message 23: by Duffy (new)

Duffy Pratt | 148 comments If, as you conclude in your trolley problem, you judge how wrong something is by its results, so that inaction which has the result of a person dying is just as bad as the murder of that person, then I don't think my choice of words was too extreme.


message 24: by Duffy (new)

Duffy Pratt | 148 comments I have given very little in the way of a positive account. As for action vs inaction, I didn't say that inaction was morally neutral. Rather, I said that it can be, and typically is, distinguished from action. In your hypothetical, perhaps you have a duty to yell. If you fail that duty, I might hold you culpable. But I don't think your failure to yell stands on the same ground as the guy pulling the trigger, even though both things led to the same consequence.


message 25: by Duffy (new)

Duffy Pratt | 148 comments Well, they are both hypotheticals, so the way to be sure is by changing the parameters of the hypothetical. In the trolley problem, we don't weigh the chance that the train will derail after hitting the first of the five people. We just write it out of the hypothetical. And yes, the simplicity of the trolley problem is that it offers no information except that you have a choice of murdering one person, or letting five people die.

Let me ask two slightly different questions. Suppose instead of five people on the track, there was only one. Do you think in that case that it would be OK to flip a coin? Second question, if the choice were to let the five people die or throw yourself in front of the train to derail it and save the five, do you think it would be immoral not to throw yourself in front?

And one final question, suppose the five were parents and grandparents of the one. Would the wishes of any of the six of them matter, since the outcomes would still be the same?


message 26: by Mark (last edited Sep 25, 2015 04:58AM) (new)

Mark Hebwood (mark_hebwood) | 133 comments Chris wrote: "If it was just one parent or grandparent, it would be fairly easy to work out the felicific calculus..."

Chris, would you be able to do that for us? Let us assume there is one parent involved, and that's it. I must admit, I would love to see felicific calculus in action.

P.S. - Jeremy Bentham of course invented this concept, and you have used it consistently in your contributions as if it was an objective tool that you are able to employ. Jeremy said he would derive an answer from his algorithm that will denote an action's "good tendency" or "evil tendency". His explanation implies that "good" or "evil" will become apparent from the sign of the number resulting from the calculation (positive for good, negative for evil), and it is clear from his instructions that he believed it is possible to assign exact values to the components of his algorithm.

You said it was "fairly easy" to work out the felicific calculus. Would you demonstrate this? And could you share how you got to the number that is the result of your calculations? I have long wondered how to apply this form of calculus, and have never seen an example of it.
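For what it's worth, here is one toy reading of what such a calculation might look like. Everything in it is invented for illustration: Bentham listed seven "circumstances" (intensity, duration, certainty, propinquity, fecundity, purity, extent) but never specified units or a formula, so the scores and the multiplication rule below are pure assumption, not his method.

```python
# Toy sketch of a "felicific calculus" -- NOT Bentham's actual method.
# He never published a formula; the scores, units, and multiplication
# rule here are invented purely for illustration.

def hedonic_value(intensity, duration, certainty, propinquity):
    """Score one pleasure (positive) or pain (negative) for one person,
    using the first four of Bentham's circumstances as plain numbers."""
    return intensity * duration * certainty * propinquity

def felicific_calculus(persons):
    """Sum hedonic values over everyone affected (Bentham's 'extent').

    `persons` maps a name to a list of (intensity, duration, certainty,
    propinquity) tuples: pleasures positive, pains negative. A positive
    total would denote a "good tendency", a negative an "evil tendency".
    """
    return sum(hedonic_value(*ep)
               for episodes in persons.values()
               for ep in episodes)

# Made-up numbers for the one-parent variant of the trolley problem:
tendency = felicific_calculus({
    "person saved": [(+8, 40, 0.9, 1.0)],  # decades of life regained
    "parent":       [(-9, 30, 0.9, 1.0)],  # grief over the one killed
})
# A positive total -- but only because of the numbers we chose,
# which is precisely the objection raised in this thread.
```

The sketch only sharpens the question above: where would the numbers come from?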


message 27: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Now, I am unsure how this would work... If you are constructing a hedonistic table of this nature, it could not be applied to a specific situation.

One reason is that a specific action will cause specific pleasure/pain, rather than a hedonistic response that conforms to a distribution of all possible actions that can act on a person.

The other reason is similar. You cannot apply statistical averages to specific individuals. For example, an actuarial table might tell me that the probability of death for a male aged 50 before his 51st birthday is 0.47888%. But this probability will not apply to Peter, who is 50. Peter's death is binary. It either happens or it doesn't. Statistics is a science of large numbers (large enough to give you a sufficiently homogeneous sample).

So I applaud your creativity, and your desire to cut through the elusiveness of ethics with quantitative methods, but I am afraid this is not possible, Chris. And it is not possible in principle, not because we haven't got enough data or cannot gather them. Bentham would have agreed with me, I think. There is a reason that he never even wrote down his algorithm. And it is because he knew full well he had no leg to stand on. If you haven't read the passage yet where he talks about this, do check it out - you will find it most wonderfully vague... :-)


message 28: by J. (last edited Sep 25, 2015 06:18PM) (new)

J. Gowin | 122 comments At this moment Jeremy Bentham is sitting in a closet with his head between his feet, and I'm OK with that.

Sadly, without omniscience utilitarianism is little more than an exercise in procrastination. I can demonstrate this statement simply by going back to the basic trolley problem. Do nothing and five people die; throw the switch and one person dies. You only know the counts. You have one second to decide...

If you asked a question five people are dead.


message 29: by J. (new)

J. Gowin | 122 comments "Unless, of course, average net happiness is negative, in which case you shouldn't throw the switch."

And that is the problem with utilitarianism, it is morality in retrospect. To make a moral decision you must predict the outcome of that decision. However, the universe will always side with the Law of Unintended Consequences, so you are constantly evaluating past decisions against present information, without clear knowledge of future information. Today it was moral. Tomorrow it will be immoral. Next week....


message 30: by Duffy (new)

Duffy Pratt | 148 comments Of course, in the time allotted, you also have to get as much information as you can about the consequences. But that puts you at a second order utilitarian calculus. Should you gather the information you need about the consequences, or will some other course of action be more likely to promote maximum happiness. Time to do another equation, to see whether I should proceed with the first calculus, and so on...


message 31: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Chris wrote: "But in fact we do apply statistical averages to specific individuals. That happens whenever anyone applies for life assurance."

No Chris. It does not. The life office does not only sell one policy. It sells a batch of policies which is large enough for actual mortality experience in the life fund to match the statistical assumptions in the pricing basis.

On a lighthearted note - I am amazed that we were able to move from ethics to actuarial principles in this thread. That must be a world first!
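Mark's point about pooled policies is the law of large numbers, and it can be sketched with a quick simulation. The 0.47888% mortality rate is the figure quoted earlier in the thread; the simulation itself (sample sizes, seed) is just an invented illustration.

```python
import random

# One-year death probability for a 50-year-old male, taken from the
# actuarial-table figure quoted earlier in this thread.
Q_50 = 0.0047888

def simulated_death_rate(n_policies, seed=0):
    """Simulate n policyholders; return the observed death rate.

    Each individual outcome is binary (dies or doesn't), but the
    pooled rate approaches Q_50 as the batch grows -- which is why
    a life office prices a large book of policies, not one policy.
    """
    rng = random.Random(seed)
    deaths = sum(rng.random() < Q_50 for _ in range(n_policies))
    return deaths / n_policies

# "Peter" alone: the observed rate is 0.0 or 1.0, never 0.47888%.
small = simulated_death_rate(1)

# A book of a million policies: the rate settles near the table value.
large = simulated_death_rate(1_000_000)
```

The individual result never resembles the probability; only the pooled result does, which is exactly the distinction being drawn here.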


message 32: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments J. wrote: "At this moment Jeremy Bentham is sitting in a closet with his head between his feet, and I'm OK with that.

Sadly, without omniscience utilitarianism is little more than an exercise in procrastinat..."


Agreed on Bentham. He makes a lot more sense now that he's in the box. But they replaced his actual head with a replica. You can still see his head, but you need to be granted special permission by the bursar of UCL.


message 33: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Duffy wrote: "Of course, in the time allotted, you also have to get as much information as you can about the consequences. But that puts you at a second order utilitarian calculus. Should you gather the inform..."

Just so. Even Bentham appreciated this issue. So he coined the terms fecundity and purity, which sort of capture these n-th order effects. I must admit I am astounded that people can talk about "felicific calculus" with a straight face.

But it is certainly amusing to think about what Jeremy's algo might have looked like, had he been bold enough to write it down. There is also propinquity, the delay to the onset of pleasure. I wonder how "pleasure" or "happiness" responds to the passage of time. Can we model it using thermodynamics (heat dispersion)? Is there a time value to happiness - in other words, is there happiness decay?

And of course, is happiness constant in the universe? If it is a zero-sum game, collective happiness can only increase at the local level (just as entropy can only decrease at the local level). That would be a bummer for utilitarianism.


message 34: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Ha ha yes excellent - "research objectives". I think the head was indeed the target of frequent student pranks. I have always wondered what they were.


message 35: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments This is true. I'd rather not, though.

I was, of course, entirely tongue-in-cheek. My incredulity is stirred by the fact that Bentham apparently genuinely believed that happiness is a calculable quantity. My term "happiness decay" is a pun on "time decay" used in option pricing, and my quip on heat dispersion plays on the similarity between dynamics in financial options and heat transfer.

Apologies, I am a lighthearted guy... All I meant to highlight is that surely we can agree that happiness (or ethics) are not tangible quantities that lend themselves to the application of quantitative methods? As an illustrative concept, utilitarianism has value, it seems to me, but attempting to calculate values and to model the "behaviour" of these concepts very much feels like re-opening a discussion of how many fairies can dance on the tip of a pin. It seems to me, at least...


message 36: by J. (new)

J. Gowin | 122 comments Mark wrote: "J. wrote: "At this moment Jeremy Bentham is sitting in a closet with his head between his feet, and I'm OK with that.

Sadly, without omniscience utilitarianism is little more than an exercise in p..."


Yep, I know. Last I heard his original head is kept in a bowl, between his feet. That way you don't see it through the window. "Present, but not voting." I wonder how they decide who has to sit next to him at the meetings.


message 37: by J. (new)

J. Gowin | 122 comments The Law of Unintended Consequences won't prevent you from completing your well-thought-out stratagems. It simply states that there will be consequences for which you did not make provisions in your plans. Little things like:

Preventing Japanese forces from utilizing the oil resources of French Indochina by training the locals in guerilla warfare, and then the Vietnam War.

Or

Providing weapons to Afghanis to fight the pinko hordes, and then... well, we're still living through that.

The LoUC is simply a grand statement of the obvious. "You aren't as smart as you think you are." Isn't that also the big lesson of history?


message 38: by [deleted user] (new)

IMO saving five people is the obvious choice in the trolley problem - as yes, there can be murder by inaction.

It is different from the organ problem in that you are not forcing the single victim into the situation; the six persons on the tracks are already there, and you are just guiding (by action or inaction) the trolley onto one of the tracks. You won't force the trolley victim to die - he may still run away - but in the organ problem you will be forcing him.

The 'organ problem' equivalent of the trolley problem would be one where the sixth person was standing outside, near you, and your option was to throw him onto the track. Throwing an innocent into a situation is not okay, but where innocents are already in it and you are forced to choose, you surely should choose the lesser of evils.

In the organ problem the sixth person is not one of the original targets, but in the trolley problem he is one of the two targets by the fact of being on the tracks.

The pilots of planes about to crash can steer toward villages, but this doesn't mean they are killing the villagers, just that they are turning the unavoidable destruction toward a less crowded area. The trolley problem is the same. The sixth person just happened to be in that place. In the organ problem, you are actually 'creating' the village.

Also, in the trolley problem the action of saving the 5 is all you do; the death of the 6th is a side effect that follows it. Whereas the murder of the healthy person in the organ problem precedes the saving of souls.

In the cave problem, I will save myself :-) but if I weren't in there and the option was among the others, I would choose the kid. Not that we have a moral right to choose between lives, but we have to, or we can't save anyone.

The kid is after all a kid; we (as part of society) owe it its safety - even if it grows up to be a Hitler. Innocent till proven guilty. Innocent till the crime is committed. Choosing, say, the doctor may pay more, but here we (again as society) are first to do our duty rather than start looking for gains; unless of course the woman is beautiful, in which case .....

So even in the trolley problem, I wouldn't change tracks if the 6th person was the kid while the others were grown-ups.


message 39: by [deleted user] (new)

It sounded much clearer in my mind, but my comment was probably a little too messy to be understood:

1. Try looking at it this way: when someone is in a mine and standing on the tracks, he is running a potential risk of being run over by a trolley anyway. Such accidents may occur regardless.
And the person at the button is only doing his duty in switching the trolley to a less deadly track. It might be part of his job to take care as to where the trolley is headed.

On the other hand, I don't think any of us pretend to be running a risk of being stabbed by the doctor when we go to visit him for tests. Nowhere is a doctor given the right to kill a patient without his consent, for whatever reason. It is not part of a doctor's duty to see that one of his patients is poisoned or stabbed ... or whatever other method he might be using.

2. I don't think I ever said that, since I agree it would be irrelevant. Where I pointed out that it doesn't matter what the kid grows up to be, I was pointing out that future actions are irrelevant.

A. I'm not supposing that. What I meant was that you are not the one who denied him the choice. Again, mea culpa as far as the misunderstanding is concerned.

In the organ problem, you are taking away the patient's right to choose to stay out of the situation.

B. Here, I was changing the trolley situation to find an equivalent for the doctor problem.
In the new trolley problem that I assumed, there was no switch. The only way to save the 5 people was to throw a person standing next to you (presumed to be fat) onto the track and thus block the trolley before it reaches the 5 people. It is a popular alternative scenario. I thought it would be wrong to throw him.


message 40: by [deleted user] (new)

Chris wrote: "Sidharth wrote: Nowhere is doctor given a right to kill a patient without his choice for whatevver reasons. It is not part of a doctor's duty ...

I think this confuses the issue. It's a fact abou..."


Agreed


message 41: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Chris wrote: "Well, I think Bentham was simply ahead of his time. And at least I have now been disabused of the idea that the felicific calculus is a method for working out the number of kittens a cat can have."

Ok let me ask this, Chris:

I
1/ Bentham's "felicific calculus" requires the quantification of intangible qualities - FACT.

2/ You said in an earlier post that you cannot perform this quantification. I certainly agree. If Bentham was ahead of his time, it is a great shame that he forgot to tell the world how to quantify the components of his algo, or even to come up with an algo himself. => Nobody can quantify these qualities. FACT.

3/ So it follows, that at the moment at least, felicific calculus is not a useful tool. Since nobody can quantify anything (this is not contentious - we all agree on this part), the tool cannot be used to decide the net impact of an action on the universal weal (net positive = ethical, net negative = non-ethical).

Question therefore: On what basis do you state that utilitarian analysis is useful (= applicable now, in practice) in deciding the moral fibre of an action?

II
Utilitarian methods work on the assumption that every action has a net positive/net negative impact on the common weal / total happiness (whatever the appropriate expression is).

But this can only be true, in principle, if the total "amount" of happiness in the world is not constant. If it were constant, it would follow that any incremental increase in happiness must be offset by an equal incremental increase in unhappiness, and the total therefore stays constant.

Question: On what basis would you be able to argue cogently that the amount of happiness / unhappiness is not constant, that we are not dealing with a closed system? It seems to me that every supporter of utilitarianism must have an answer to this question - otherwise the whole philosophy hangs in the air (it seems to me).


message 42: by J. (new)

J. Gowin | 122 comments In considering the nature of hedonistic calculations, it occurred to me that two giants are standing in Bentham's way.

Heisenberg
Uncertainty Principle dictates that observing the experiment changes the variables. Therefore in gathering the data necessary for making the calculation you are altering the data. These alterations can be viewed as actions with moral weight.

Godel
Incompleteness Theorem tells us that we cannot know everything. Therefore you can only take the calculation so far before it disappears into the shadows.


message 43: by Duffy (new)

Duffy Pratt | 148 comments Since happiness can't be quantized in the first place, it makes no sense to talk about the sum total of happiness. There is no such sum, it is quite literally undefined.


message 44: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments You seem to be saying that the onus is on the utilitarian to show that the amount of happiness in the system is not constant

I believe that to be the case, yes. If U (my abbreviation for the longer words) only works if happiness is not constant, then I would think the Us need to show (to themselves, in the first place) that this premise holds. Otherwise the whole philosophy hangs in the air.

Non-Us do not really care - they hold that happiness is not calculable anyway, so - as Duffy pointed out - the whole idea of constant/non-constant is nonsense to the non-U. But it cannot be nonsense to the U.

And of course, I hear your thoughts on pet-euthanasia. They are, interestingly, far more fuzzy than a rigid application of "felicific calculus" would mandate. Of course they are. They must be! Happiness cannot be calculated, but it is still intuitive to judge an action in terms of whether it brings about more or less happiness.

But how to weigh this action will unfortunately always retain an element of subjectivity. Felicific calculus is an attempt to eliminate this subjectivity, but I think you would agree, Chris, that it can't be done (you are not calculating anything when you are performing the difficult task of pet-euthanasia, but you are weighing potential outcomes against a subjective benchmark).

I believe it may be dangerous to have too much (or any) faith in the calculability of intangible qualities (like happiness). An attempt to decide the trolley problem on a simple calculation of "5 times happiness > 1 time unhappiness" strikes me as deliberately simplistic.

"Simplistic" because it pretends that happiness is something that can be measured objectively by external parties and quantified. And "deliberately" because it feels like an attempt to escape the discomfort we feel when we realise that it is very hard - probably impossible - to find objective standards in ethics.


message 45: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Duffy wrote: "Since happiness can't be quantized in the first place, it makes no sense to talk about the sum total of happiness. There is no such sum, it is quite literally undefined."

It makes no sense to those who realise this. But it must make sense to those who don't.


message 46: by [deleted user] (new)

Chris wrote: "J: It isn’t enough to quote Heisenberg and Godel, you have to demonstrate that their principles apply to the application of the felicific calculus - something for which neither principle was devis..."

Now I don't know how to answer that, but just out of curiosity, will you be among the ones who walk away from Omelas?


message 47: by [deleted user] (new)

Chris wrote: "Sidharth: Probably, yes. And I'm delighted to find another LeGuin fan.

Some more personal stuff:

My brother-in-law died in May of a progressive lung disease. Towards the end he was bedridden, so..."


I'm sorry for your loss. I have seen a lot of my own family members suffer needlessly - both people and pets. I agree with what you have said.

The assumption that life is worth living no matter how much one suffers is often behind people's refusal to carry out euthanasia (or its equivalents) for others. Although it might stray a little from the topic, there is a scene from the Bollywood movie 'Gujarish' (meaning a plea or request) that I feel like sharing:

It is about an ex-magician who has an accident and has now been paralysed from the neck down for several years. Finally he petitions for an amendment to the law to make euthanasia legal, so that he can end his life.

In one scene, when he is asked if he wishes to say something before the verdict is given, he says he wishes to show the court a magic trick. When this is allowed, his assistant brings in a box. The magician asks the state's lawyer to volunteer, and the judge orders the lawyer to do so. The magician asks the lawyer to sit in the box, and his assistant locks him in. A few moments pass as people expect the magician to do something. He just sits calmly, until the lawyer starts screaming from inside the box. The magician starts talking about some random subject (the weather), further frightening the lawyer.

After a couple of minutes, the magician signals the assistant to let the lawyer out. "Are you stupid?" the lawyer says, breathing heavily, after coming out of the box. "It was so dark inside, I couldn't see anything, I couldn't breathe..." The magician replies calmly, "That is what my life has been like for years. Two minutes and you wanted out."


message 48: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Chris,

absolutely. Couldn't agree more with the spirit of your statement. We should indeed think about "the value of a being's life to that being". This is very well put, and I think it is the question. I was never happy with attempts to reduce ethics to pleasure, happiness, or some other sentiment - I think you put your finger on the essence here.

And I believe this is precisely why the trolley problem cannot be decided in a "simple" way. On what basis can an outsider decide that the value of one person's life is less than the combined value of the lives of others?

The solution to the trolley problem - it seems to me - lies in the realisation that it offers an impossible choice. I cannot make a decision on the value of somebody's life on behalf of that person (without involving him/her in this decision).

So my solution to the trolley problem would be to say that (1) somebody needs to derail the trolley, and (2) I cannot interfere by choosing one life over five. In practice, I would presumably scramble to do everything possible to bring about (1), but I would not throw the switch. That is slightly different from saying "I would stand idly by". In my interpretation of the ethics involved, I would have done everything possible to save the individuals (but "everything possible" does not include throwing the switch).

Whether it includes sacrificing myself, however, is another question. And unfortunately this question leads right back to U. Because, if one of the endangered people were my girlfriend, I would, I like to think, be capable of throwing myself in front of the trolley. But if the people involved were unknown to me, I would not.

On a personal note - I am very sorry to hear about your brother-in-law. What a terribly difficult choice to make. Many thanks for sharing that, Chris.


message 49: by Mark (new)

Mark Hebwood (mark_hebwood) | 133 comments Sidharth wrote: "Chris wrote: "J: It isn’t enough to quote Heisenberg and Godel, you have to demonstrate that their principles apply to the application of the felicific calculus - something for which neither princ..."

On my reading list now - I used to love Ursula LeGuin!


message 50: by J. (new)

J. Gowin | 122 comments In measuring life as potential happiness, there is a thread worth tracing.

First we must differentiate between happiness and contentment. Both are positive emotional states characterized by a particular neurochemistry and expressed through smiling, laughing and the like. However, I argue that happiness is an emotional state which can be induced, while contentment must be accomplished or achieved. For this reason I categorize contentment as a more valuable state than happiness.

If contentment is more valuable than happiness, and if the morality of a decision is measured by the value of its outcome, then decisions which encourage contentment are more moral than decisions that encourage happiness.

Contentment is accomplished through action and achievement. Therefore a decision which allows individuals to struggle and possibly achieve is more likely to lead to contentment. This decision is a recognition of the moral agency of others.

If the recognition of moral agency is a higher moral good, then the violation of that same agency is a worse taboo.

In this way, isn't choosing to end the moral agency of the one worse than allowing the decisions of the five to come to their culmination?

