
Why I Am Not a Utilitarian

Utilitarianism is a widely despised, denigrated and misunderstood moral theory.

Kant himself described it as a morality fit only for English shopkeepers. (Kant had much loftier aspirations of entering his own “noumenal” world.)

The adjective “utilitarian” now carries negative connotations, rather like “Machiavellian”. It is associated with “the end justifies the means”, with using people as mere means, with failing to respect human dignity, and so on.

For example, consider the following negative uses of “utilitarian.”

“Don’t be so utilitarian.”

“That is a really utilitarian way to think about it.”

To say someone is behaving in a utilitarian manner is to say something derogatory about their behaviour.

When Jeremy Bentham introduced utilitarianism in the 1700s, it was a radical, revisionary and welcome new moral theory. Its core was human equality: each is to count for one and none for more than one. Until that point, princes counted for more than paupers. But utilitarians such as Bentham argued that every person’s well-being and life counted equally. The right act is the act which maximises well-being, impartially considered. The basic idea of utilitarianism is straightforward – the common currency of ethics is human well-being. What matters to each of us is how our lives go. Morality is about treating everyone equally, that is, considering their well-being equally.
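
To put the core idea a little more formally (a rough gloss of my own, not Bentham’s notation): if $w_i(a)$ is the well-being that person $i$ would enjoy were act $a$ performed, then the utilitarian’s right act is

$$a^{*} = \arg\max_{a} \sum_{i} w_i(a),$$

with every person’s well-being entering the sum at the same weight – each counting for one and none for more than one.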

Utilitarianism had its heyday until about 50 years ago, when it started to be pushed aside by neo-Kantian, feminist and virtue theories. There has been a resurgence of interest in the last decade following the pioneering work of Joshua Greene, which was used to suggest that utilitarians make moral decisions in a more rational, deliberative manner.

To test whether people are utilitarians or not, Greene used an old dilemma first described by Philippa Foot, called the “trolley dilemma”. This has become a cottage industry of its own (see David Edmonds’s recent book “Would You Kill the Fat Man?”). One of Greene’s (and other recent researchers’) prime tests of whether you are a utilitarian is whether you think it is right to push a fat man in front of a trolley to stop it and save the lives of five workers further down the track.

In a paper just out yesterday, Guy Kahane, Jim Everett, Brian Earp, Miguel Farias and I present data suggesting that this decision alone needn’t really reflect a utilitarian psychology, but can instead reflect psychopathic and egoist tendencies. Such an association with psychopathy has been reported before. We are thus adding to an existing literature, and although the correlation is fairly strong and significant, of course not everyone who says you should push the fat man will be higher on psychopathy – that is just one factor.

Conversely, and more importantly, we found that people who tended to think that the fat man should be pushed to his death to save five did not, in more familiar contexts, show any greater altruistic concern for the greater good of all, or any greater willingness to make sacrifices to prevent great harm to distant others. Here is a quote from the discussion section of an earlier draft:

“A great deal of recent research has focused on hypothetical moral dilemmas in which one person needs to be sacrificed in order to save the lives of a greater number. It is widely assumed that these far-fetched sacrificial scenarios can shed new light on the fundamental opposition between utilitarian and non-utilitarian approaches to ethics (Greene et al. 2004; Greene, 2008; Singer, 2005).

However, such sacrificial dilemmas are merely one context in which utilitarian considerations happen to conflict with opposing moral views (Kahane & Shackel, 2011). To the extent that ‘utilitarian’ judgments in sacrificial dilemmas express concern for the greater good—that is, the utilitarian aim of impartially maximizing aggregate welfare—then we would expect such judgments to be associated with judgments and attitudes that clearly express such concern in other moral contexts.

The set of studies presented here directly tested this prediction by investigating the relationship between so-called ‘utilitarian’ judgments in classical sacrificial dilemmas and a genuine, impartial concern for the greater good. Across four experiments employing a wide range of measures and investigations of attitudes, behavior and moral judgments, we repeatedly found that this prediction was not borne out: a tendency to endorse the violent sacrifice of one person in order to save a greater number was not (or even negatively) associated with paradigmatic markers of utilitarian concern for the greater good. These included identification with humanity as a whole; donation to charities that help people in need in other countries; judgments about our moral obligations to help children in need in developing countries, and to prevent animal suffering and harm to future generations; and an impartial approach to morality that does not privilege the interests of oneself, one’s family, or one’s country over the greater good. This lack of association remained even when the utilitarian justification for such views was made explicit and unequivocal. By contrast, many (though not all) of these markers of concern for the greater good were inter-correlated.

In fact, responses designated as ‘utilitarian’ in the current literature were strongly associated with traits, attitudes and moral judgments (primary psychopathy, rational egoism, and a lenient attitude toward clear moral transgressions) that are diametrically opposed to the impartial concern for the greater good that is at the heart of utilitarian ethics.”

As we argue, Utilitarianism is a comprehensive moral doctrine with wide-ranging implications. In fact, it is very demanding. Few people, if any, have ever been anything like perfect utilitarians. It would require donating one of your kidneys to a perfect stranger. It would require sacrificing your life, family and sleep to whatever degree maximised the well-being of others. Because you could improve the lives of so many, so much, utilitarianism requires enormous sacrifices. People have donated large parts of their wealth and even a kidney, but this still does not approach the sacrifice required by Utilitarianism.

For these reasons, one criticism of utilitarianism is that it is too demanding.

Bernard Williams, a famous critic of Utilitarianism, once infuriated Dick Hare, a modern father of Utilitarianism, in a TV interview by asking him,

“If a plane had crashed and you could only rescue your own child or two other people’s children, which would you rescue?”

Utilitarians should rescue the two strangers rather than their own child.

People think I am a utilitarian but I am not. I, like nearly everyone else, find Utilitarianism to be too demanding.

I try to live my life according to “easy rescue consequentialism” – you should perform those acts which are at small cost to you and which benefit others greatly. Peter Singer, the greatest modern utilitarian, in fact appeals to this principle to capture people’s emotions – his most famous example is that of a small child drowning in a pond. You could save the child’s life by just getting your shoes wet. He argues morality requires that you rescue the child. But this is merely an easy rescue. Utilitarianism requires that you sacrifice your life to provide organs to save 7 or 8 lives.

Easy rescue consequentialism is, by contrast, a relaxed but useful moral doctrine.
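
Stated as a rough rule (a sketch, with the thresholds deliberately left vague): easy rescue consequentialism requires an act $a$ only when

$$\mathrm{cost}_{\text{to you}}(a) \le c \quad \text{and} \quad \mathrm{benefit}_{\text{to others}}(a) \ge B,$$

for some small cost $c$ and large benefit $B$, whereas utilitarianism demands whichever act maximises overall well-being, however large the cost to you. Getting your shoes wet to save the drowning child satisfies the first rule; giving up your life and organs is demanded only by the second.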

I was discussing trolley-type dilemmas with my wife. She said that the right thing to do was to throw yourself in front of the trolley to save the 5 people.

That is clearly what utilitarians would do, but not psychopaths or egoists.

What about ordinary people? They had a range of utilitarian tendencies that often came apart. For example, our study did use one dilemma that involved self-sacrifice (it is reported in the supplementary materials). The majority of ordinary people think they should sacrifice themselves (whether or not they actually would) but think it is wrong to push the fat man.

When my wife suggested the right answer to the trolley dilemma was to sacrifice yourself, I objected that this was too demanding. You should be prepared to experience great but temporary pain to save the five, perhaps even lose a finger, but not give up your whole life. That would be a difficult, not an easy, rescue.

Her reply, which has shaken my moral world, was, “But surely the right thing to do is to sacrifice your life for the five others.”

After all, if morality is meant to be impartial, perhaps the right thing to do is to be utilitarians. It is just that we are too selfish and self-absorbed.

Indeed, if morality is impartial, both I and the folk have intuitions which are difficult to justify. I have argued that it is right to sacrifice the one to save the five, but easy rescue consequentialism suggests I should not sacrifice my life to save the five. If morality is impartial, it should follow that it is also wrong to sacrifice the one to save the five.

Likewise, the folk believe it is right to sacrifice their own life, but wrong to sacrifice the fat man. Again, these should be symmetrical if morality is impartial. Either it is right to sacrifice both yourself and the fat man, or it is wrong. Morality has no eye to who is involved in a moral dilemma.

Perhaps another great utilitarian philosopher, Henry Sidgwick, has a solution to this apparent dilemma. Sidgwick argued that there are two kinds of reasons for action that can conflict: Prudence and Morality. Prudence is about what is good for you (self-interest), and Morality is about what is good for everyone, impartially considered. Sidgwick argued that there was no clear way to balance these against each other.

In my case, I appear to be giving greater weight to Prudence than to Morality. The folk, on the other hand, appear to give greater weight to Morality, though they may have a non-consequentialist view of morality.

At any rate I won’t be sacrificing my own life for the 5 on the track. But maybe I am just not as moral as I could be. As Peter Singer once said, it is not as if morality should be easy. Perhaps we very often fail to do what morality requires.

Maybe the reason I am not a utilitarian is that I am just not good enough.


Comments

  1. Sounds like an interesting article, J! I’m sure to read it, but can you explain here how you guys deal with the fact that being a supporter of (act-)utilitarianism, displaying a “utilitarian psychology” (that is, calculating and allowing the aim to justify the means, simply put), and acting according to either utilitarianism or such a psychology are not the same thing? I’m thinking particularly about the obvious fact that it makes perfect sense to support AU but not act according to it (and to recognise that, supposing that we ever know such things). I’m pretty sure that holds for most AU supporters, as for Kantians, Christian Natural Law morality fans, and so on. Another thing: anyone who supports AU and knows his/her stuff is, of course, very wary of displaying or acting out of a U psychology (for the well-known reasons provided by Sidgwick, Hare and Tännsjö). These are subtleties that psychologists researching these things usually miss, but I know that you know your stuff, so how do you know how to interpret the data?

  2. Interesting – I take it your view has some structural similarities to Scheffler’s in The Rejection of Consequentialism. He rejects agent-relative restrictions (like Kant’s categorical prohibitions on lying, or the impermissibility of pushing the fat man), while allowing some agent-relative permissions (failing to maximize the good for personal/prudential reasons).

    But I wonder – is your rejection of utilitarianism really a *moral* position, like Scheffler’s, according to which one is not *morally* required to sacrifice oneself? Or is it more like Sidgwick – you could accept that utilitarianism is the correct moral theory and that you are morally required to sacrifice yourself, but admit that in real life you will (sometimes) act more in accordance with prudence/self-interest. This is an important distinction; it will affect your support for sacrificial policies, and suggests a clear line for moral improvement.

    To my mind, the latter Sidgwickian line is more appealing (at least compared to Easy Rescue Consequentialism). Otherwise, you end up with a rather perverse morality: you’re required to push the fat man (it’s easy!) but not required to jump yourself. That’s not technically an implication of Scheffler’s view (which is general), but it would be implied by easy rescue consequentialism.

    You could try to avoid this by making Easy Rescue Consequentialism impartial: you’re not required to do something that would impose a significant cost on *anyone*. This would lead to a very weak version of consequentialism, though – it would match our intuitions in easy pond-boy drowning cases, but provide almost no guidance in the hard cases where there are significant costs to some action.

  3. Across four experiments employing a wide range of measures and investigations of attitudes, behavior and moral judgments, we repeatedly found that this prediction was not borne out: a tendency to endorse the violent sacrifice of one person in order to save a greater number was not (or even negatively) associated with paradigmatic markers of utilitarian concern for the greater good. These included identification with humanity as a whole; donation to charities that help people in need in other countries; judgments about our moral obligations to help children in need in developing countries, and to prevent animal suffering and harm to future generations; and an impartial approach to morality that does not privilege the interests of oneself, one’s family, or one’s country over the greater good.

    You have a particular notion of “greater good” that is biased toward things that “seem” charitable but may actually have the opposite effects, like foreign aid, which has been shown to be largely wasteful or even counterproductive in many cases. My conception of greater good involves preventing the use of money on wasteful “altruistic” status seeking and instead supporting technological advancements that would help the future of the human race to a greater extent. A utilitarian may see that national solidarity as a rule, i.e. nationalism, is the best system for organizing large groups of like-minded people. Also, I am pretty sure that in the trolley problem sacrificing yourself is usually not provided as an option, though it would make sense.

    1. In fact it is not surprising at all that the people who choose to take the right action (push the fat man) even when it looks bad to outside observers would also be willing to take the right action (stop subsidising failed systems) even when it looks bad to outside observers.

      1. You apparently aren’t a very careful reader. “This lack of association remained even when the utilitarian justification for such views was made explicit and unequivocal.”

        Besides, all you’ve done is attack a straw man based on one of four conditions, and ignore the other three, to which your comments don’t apply.

  4. A problem with these “rescue” experiments is their lack of realism. In real life emergencies most people would act intuitively. In the trolley disaster I’d probably just jump out of the way and watch what happens with my hands clutching my face 🙂

    Ethical philosophy is obviously of more value when applied to issues and situations where a carefully deliberated position is both practical and needed. I distrust utilitarianism because it tends to present morality as a set of quasi-mathematical rules, rather than as an important aspect of human social nature that can be refined by making it more rationally demanding.

    1. The other obvious problem being that they assume perfect information about outcomes depending on actions taken – which is in most cases unrealistic, and where I think a lot of the intuitive discomfort around utilitarianism comes from. It doesn’t mean it’s not a very useful framework in a range of settings however.

      1. Agree! “Fatman” seems designed to eliminate the real sticking point here, that you do not know, cannot know, with complete certainty, the result of any action. The trolley’s on a track — no swerving; the men are tied down — no escape; the man is perfectly weighted — fat enough to stop a trolley(!), light enough for easy pushing. But… maybe the man skipped breakfast, and is too light to fulfill his function, but squishy enough to die. Maybe he refuses to cooperate, throwing off your aim, and he misses the track, but breaks his neck. Maybe your situation analysis skills aren’t up to snuff and the trolley’s on a different track, or those guys are just faking for the thrill of it. Or maybe they’re lifelike mannequins carefully posed by a philosophical vigilante to pull a little prank on passing utilitarians.

        And even if you, personally, do happen to be possessed of absolute knowledge, or at least you’re really good with these speeding trolley problems, consider the effect of the whole world being filled with other people, fallible people, enthusiastic people, all looking to better the world by killing a fat man. And when the fattest are all gone — and they will go fast — well… better have a good diet plan.

  5. Likewise, the folk believe it is right to sacrifice their own life, but wrong to sacrifice the fat man. Again, these should be symmetrical if morality is impartial. Either it is right to sacrifice both yourself and the fat man, or it is wrong. Morality has no eye to who is involved in a moral dilemma.

    While morality may not be concerned about who is involved in equivalent acts, sacrificing yourself and sacrificing another can’t really be considered equivalent, can they?

    1. I had the same thought. It seems to me that one could be required to throw oneself in front of the train, but that one wouldn’t be permitted to force someone else to make that choice (assuming they, and not you, were the only one able to stop it, as the fat man example assumes (which is why the man in the example is fat)). Just as I think I am required to donate a significant amount of my money to important charitable causes, but I don’t think I’m required (and probably not permitted) to steal money from others to donate to important charitable causes.

      1. “but I don’t think I’m required (and probably not permitted) to steal money from others to donate to important charitable causes.” OMG .. this simple statement is incomprehensible to some of my smugly pious Democrat friends. They truly seem to morally equate giving one’s own money with giving away the money of others .. and thus celebrate politicians who do that as saints. They are probably more utilitarian than moral.

  6. Utilitarianism requires that you sacrifice your life to provide organs to save 7 or 8 lives.

    That’s illegal. It makes no sense to frame the demandingness objection in ways that are either not realistic, or that have no realistic net-positive consequences.

  7. Interesting article, Julian.

    I was wrestling with this problem myself for a while, then it dawned on me that I was perhaps making a fundamental mistake… I was assuming that morality could be boiled down into a singular theory, like Utilitarianism. Sidgwick’s Prudence vs morality kind of hints at the idea that there are other competing values, that should rightly be called moral values, that are in play. I think Prudence is a kind of moral act (and this goes all the way back to Epicurus), as well as utilitarian morality. Inevitably they will conflict, but that’s the nature of a pluralism.

  8. Thanks for all those comments. The main point of the study, Christian, was to show that a dominant way of identifying people as utilitarians in experimental philosophy – the trolley problem – does not come close to capturing a full utilitarian mindset, involving impartial benevolence. Of course, these studies do not measure any highly relevant real-life behaviour. But in so far as they are meant to measure what people say or do in small-scale contexts, they need to better capture the full breadth of utilitarian commitments. As for it being illegal to sacrifice your life for others, that is not true. Dominic Wilkinson and I have a paper in Bioethics called Organ Donation Euthanasia – it would be legal in Holland to do this and I think we cite one relevant case from Belgium. Even in jurisdictions that ban euthanasia, you could sign an advance directive that would stipulate when life-sustaining treatment would be limited to enable you to be a non-heart-beating donor. This might actually save fewer lives, but still some. As for my musings about the nature of morality, they were really personal reflections rather than solutions to great philosophical questions. And Tamler, I really need to watch my wife’s back as she crosses bridges – I don’t want her throwing herself off to save those 5 strangers… Oh – just saw Duncan already said that – thanks, Duncan

    1. “As for it being illegal to sacrifice your life for others, that is not true. Dominic Wilkinson and I have a paper in Bioethics called Organ Donation Euthanasia – it would be legal in Holland to do this and I think we cite one relevant case from Belgium.”

      But you already have to be on life support or otherwise qualify for euthanasia, right? You can’t just travel there from foreign countries as a physically healthy person and make the altruistic sacrifice. Right?

  9. Some Qs from an amateur:
    1. Why is it a Fat Man?
    2. Why do 5 workmen outweigh (sorry) one Fat Man?
    3. What if the Fat Man were a pretty young female and her infant from your ethnic group and the workmen were clearly from a different one?
    4. What if the Fat Man were a noted concert pianist/philanthropist/philosopher and the 5 were an encampment of winos?

  10. Thanks for your post, Julian
    You write that “if morality is meant to be impartial, perhaps the right thing to do is to be utilitarians. It is just that we are too selfish and self-absorbed.” I wonder whether this demand for impartiality is actually true.
    Is it just selfishness or self-absorption to give a birthday present to one’s child rather than to a random stranger? Or to dedicate love and care to one’s partner rather than to someone we’ve never met before?
    What would one think of a person who, on being told that their mother had just been hospitalised, replied that their duty was clear: to go visit some other person in a nearer hospital?
    The problem, it seems to me, is not that utilitarianism is too demanding; indeed, it is hard to think of a serious moral theory that is not demanding in some way. The real problem is that it ignores a large part of what constitutes human beings. We are not abstract isolated entities whose ethics can be determined by “English shopkeepers” and their actuarial slide-rules.

    1. There are all sorts of utilitarian (as well as personal, morally neutral) reasons for paying special attention to one’s own child (or visiting one’s own mother rather than a stranger, etc). You have different relationships with the contrasted individuals in these cases, so the questions (whether to visit your mother or a stranger, etc) are not symmetric. For example, it will mean a lot to your child to get a present from you, but won’t mean much to a stranger. Etc, etc.

      Also, I don’t see in what sense utilitarianism construes people as ‘abstract’ or ‘isolated’.

  11. Thanks for your reply Tyle
    My first point was that impartiality is not justifiable. Even if you call it asymmetry instead of partiality, I think you are in fact agreeing with me.
    As for utilitarianism, I quote from Julian’s original post : “When Jeremy Bentham introduced utilitarianism in the 1700s, it was a radical, revisionary and welcome new moral theory. Its core was human equality: each is to count for one and none for more than one. Until that point, princes counted for more than paupers. But utilitarians such as Bentham argued that every person’s well-being and life counted equally. The right act is the act which maximises well-being, impartially considered”.
    To use a theory based on impartiality to justify partiality seems a little contradictory.
    When I state that utilitarianism tends to view persons as abstract and isolated, what I mean is that by basing itself on total impartiality it forgets that most of us live in a web of significant social relationships. If I look at trolley problem discussions or those on altruism (such as the comments on Roger Crisp’s last post) I see humans treated as lone ciphers in a world of arithmetical calculation. Hope that this explains my remark.

  12. This reminds me of what’s sometimes called in psychology the “what the hell!” effect, where someone decides that since they’ve failed in one area it’s no longer worth trying in any of the others. But that’s obviously not good reasoning. There are no perfect human beings, says utilitarianism. But don’t most other moral systems say the same? Do you really insist that an adequate ethical system must lack failure conditions? If so, why? Despite the impossibility of being a perfect utilitarian, it’s still worth being utilitarian at the margin, taking whichever actions you can force yourself to do that will make the world a better place.

  13. Take me I say! Just don’t tell me. I think it’s possible that ‘most’ people would agree to be pushed, behind the ‘veil of ignorance’, on the basis that they are much more likely to be born one of the five (or more) people on the track than the big man (or any other like scenario one might conjure up). Just as they would agree to having cars in this world, despite the fact that cars could cause their death, since that is very unlikely. They would probably also support a 100 km/h speed limit, instead of 10 km/h, due to the large collective payoff.

    Julian, I suspect that while you yourself couldn’t and shouldn’t be expected to throw yourself on the tracks for a complete stranger (neither could I), you would nonetheless be happy to take a lottery ticket to see who should have to jump, given the extremely small probability that the scenario would ever occur and that you should find yourself both very large and hanging over a bridge.

    Provided the situation is a ‘freak one’, and more or less randomly distributed in the population at large (not like the popular surgeon scenario which has the potential to undermine people’s trust in the medical system), then I think it’s probably right to push the fat man. There is no fat-man sacrificing institution that we need to be concerned about undermining as there is in the surgeon case.

    Of course, if, having pushed the fat man, the person felt nothing (5 is better than 1 – calculus checks out – job well done) and then nonchalantly sat down to enjoy a tasty bacon sandwich, this would be disturbing indeed. The act is the same of course (what is done is done), but the ‘pusher’ in this case has revealed themselves to be a rather callous individual (perhaps a psychopath). We have good reason to be concerned that this individual might decide to apply such ruthless decision-making to other situations which don’t satisfy the freak-scenario rule. Situations that might undermine cooperation or important social institutions.

    It may be ‘right’ to push, but it should never be easy to do so; and this, I think, is part of the moral mix here… not sure if this is a coherent view, but I’d be interested in people’s thoughts?

  14. I think Sean O hEigeartaigh makes the vital point, which is that these scenarios require more information than a person could reasonably be expected to have. Indeed, I would go further, and say that the whole concept of the hedonistic calculus requires that an agent have more information than a human being could possibly have. As such, utilitarianism is not an ethical theory at all, inasmuch as it cannot develop a set of criteria for judging human behavior. Its only possible use would be as a theodicy, a means of justifying the behavior of a supernatural being who is either omniscient or at a minimum radically better informed than humans can be.

    Perhaps it is too much to say that utilitarianism is possible even as a theodicy. To make a theodicy go, one must grant, first, that a supernatural being exists, second, that that being is in some profound sense better than we are, and third, that the actions of that being require moral justification. None of these premises would appear to be particularly secure. Moreover, an attempt to use utilitarianism to justify the acts of whatever supernatural being we have posited would immediately run into a variety of other problems, some of them quite severe. Most obvious, perhaps, is the stubbornly ambiguous concept of “pleasure” at the stem of all theories of utility. I for one can think of no reason why a utilitarian theodicy would have an easier time meaning one thing at a time by this word than the attempted utilitarian philosophies of the last two centuries have had. Furthermore, the implications of conceding the existence of a supernatural being whose knowledge is radically superior to ours would seem to be rather wide-ranging and to call for a rethinking of the concept of rationality on which Bentham et al. were trying to elaborate. So perhaps the time has come to discard utilitarianism altogether.

    1. Uncertainty only creates a problem if it allows individuals or institutions room to bias decision-making. As utilitarianism is not risk or inequality averse with respect to welfare, maximizing expected social utility is trivially optimal. Of course when applied systematically as a policy rule, the variance in outcome diminishes as a ratio of the expected gains as the projects become large in scope and in number.

      There is a large and growing empirical literature that can help us make informed policy decisions based on utilitarian ethics. We now know quite a lot about what makes people more or less happy and it is not hard to apply these insights to design welfare improving policy interventions.

  15. I don’t know Professor Savulescu’s wife, but I expect that she overestimates her willingness to see her husband sacrifice his life for the welfare of others, if he had the opportunity to leap in front of a train and save others’ lives. We need to be suspicious of these kinds of ‘obligations’ that require so much of us.

    As Professor Savulescu notes, “Because you could improve the lives of so many, so much, utilitarianism requires enormous sacrifices.” This characterization understates the logical requirement of utilitarianism, understood as a responsibility to behave in ways that maximize the happiness of all. It’s hard to see any upper limit to one’s responsibilities toward others in a general sense. Now, Mill famously set out a standard by which people should not intrude into the lives of others, even for their own good. So, if my neighbor wants to spend his money betting on horse races, to his own economic despoilment, well, I don’t have any warrant to intrude in that choice, except maybe to offer counsel. But I couldn’t do anything to stop him from spending his money that way.

    Other than putting up a barrier to interventions of this kind, utilitarianism can absorb pretty much all my other efforts and choices, at least those efforts and choices that might be understood as contributing to maximizing happiness. Take a nap? Not if that nap time could be used to help at the local soup kitchen. Take a vacation? Not if the vacation requires carbon-polluting means of transportation. Read a book? Well, again, there are lots of other uses of that time that could benefit more people than benefit from that solitary enterprise.

    Savulescu points out that this approach to ‘obligation’ does not map on to our actual psychology. In a catastrophe, we would act to save those people closest to us, rather than to save strangers, even if we could save more strangers. In regard to the trolley problem, Professor Savulescu’s wife said that the right thing to do would be to throw yourself in front of a speeding train, in order to save lives. But that’s where Utilitarianism starts to unravel as an account of our moral obligations.
    As I say, I don’t know Professor Savulescu’s wife, but I doubt she would take heart in the indeterminacy that would ensue if – each day – he left the house prepared to sacrifice himself for any benefit greater than the continuance of his own life, as judged by him in moments of extremity. In fact, what would it be like for anyone to live in that kind of indeterminacy: knowing that a duty of rescue might require one’s noble suicide, or if not one’s death, then one’s pretty much limitless charity to others. Or think about all those wells one could have dug or shelters one could have put up during the time one spent pursuing advanced educational degrees.

    Not only is there in Utilitarianism, then, no upper limit to what might be asked of everyone; if it were practiced vigorously, Utilitarianism would also introduce a great deal of indeterminacy into our collective social lives, because we would seem to be always required to set aside prior commitments and established relationships. Social glue – in relationships, commitments, and contracts – seems to require more stability than Utilitarianism, with its limitless obligations, can provide.

    I, for one, want to see some principle of self-protection, self-preservation, self-development that is hard to locate in Utilitarian theory. That is, a principle that doesn’t leave us all feeling guilty at the end of every day that we are still alive because we haven’t thrown ourselves in front of a speeding train to save others, a principle that ensures that – ordinarily – we will come home at the end of every day to the people to whom we are committed.

  16. In Mill’s second chapter on Utilitarianism (the only one I have read, so my idea may be incomplete), he says:

    “Let us now look at actions that are done from the motive of duty, in direct obedience to ·the utilitarian· principle: it is a misunderstanding of the utilitarian way of thinking to conceive it as implying that people should fix their minds on anything as wide as the world or society in general. The great majority of good actions are intended not for •the benefit of the world but for parts of the good of the world, namely •the benefit of individuals. And on these occasions the thoughts of the most virtuous man need not go beyond the particular persons concerned, except to the extent that he has to assure himself that in benefiting those individuals he isn’t violating the rights (i.e. the legitimate and authorised expectations) of anyone else. According to the utilitarian ethics the object of virtue is to multiply happiness; for any person (except one in a thousand) it is only on exceptional occasions that he has it in his power to do this on an extended scale, i.e. to be a public benefactor; and it is only on these occasions that he is called upon to consider public utility; in every other case he needs to attend only to private utility, the interest or happiness of some few persons.”

    It seems that Mill says doing something like sacrificing yourself to donate organs and save 7 or 8 lives is not what Utilitarianism asks. Rather, most people should not have to consider the greater good and lives of complete strangers unless they are generally expected to do so, for example in the case of a President. I don’t think most people in society would think it reasonable to donate organs in such a manner – perhaps in the case of an early death, but not out of the blue.

  17. Utilitarianism as personal morality describes an ideal, not necessarily a realized behavior. A person’s belief that sacrificing themselves to save the five is the right thing to do (in this example, anyway) is what makes them a utilitarian, not whether they actually could or would do such a thing. The goal with any ethical framework is to live as close to the ideal as you are personally capable of and willing to.

    A Christian ideal, for example, is to turn the other cheek when assaulted. Not turning the other cheek doesn’t immediately revoke a person’s Christianity; it’s simply a failure to live up to that particular ideal.

    Personal morality is identifying your ideals and then actively moving towards them. Hence the cliché: it’s the journey (of moving towards your ideals), not the destination, that matters.

  18. Sacrificing yourself is not the same thing as sacrificing someone else. In the latter case you are also committing murder, which in general is evil. Killing yourself is not evil in the same sense. Voluntary self-sacrifice is compatible with the non-aggression principle (NAP), but forced self-sacrifice is not.

  19. I’ve always rejected the fat man scenario and simply stayed with the switch-track debate (1 on the right, 4 on the other) for a simple reason. There is no guarantee that a fat man thrown in front of anything will stop the moving object. You’re only testing a theory by pushing someone onto the tracks. It’s not accurate. No one would push a person in front of an out-of-control truck to stop its rampage, not because they are a utilitarian but because there’s no guarantee that it would work.

    A utilitarian doesn’t waste lives.

    A sacrifice for the greater good is the key goal. Acknowledgment that you can’t save everyone and that you have a duty to attempt to lessen the damage done by the world in front of you. A choice to act and try and help as many as you can with the preconceived notion that you can’t save everybody.

    The needs of the many outweigh the needs of the one or the few.

  20. Anthony the Thinker

    I suggest reading J.S. Mill’s utilitarian account of human rights. Simple as that. Further, utilitarians have a very neat trick: the right answer to the problem can always be called utilitarian. If everyone gets extremely unhappy at having to sacrifice their own child to save 2 strangers… then that’s not very utilitarian, is it? I know, you’re probably stuck on Benthamite utility… but utilitarianism is a lot more sophisticated than that nowadays.

  21. Great discussion. I particularly liked Timothy Murphy’s comment on the need for principles of self-protection, self-preservation, self-development etc. in utilitarian theory. I don’t know what theories have been proposed in this regard, but since (act-)utilitarian calculus is often impractical or impossible in common situations (we don’t have the time, cool-mindedness or complete information), we need heuristic principles, rules and values that can guide us through uncertainty and social institutions, allow us to compare weights and help us avoid biases. Hare proposed two-level utilitarianism to deal with the practical reasoning problem, but I think there is a need for a more serious and somewhat consensual way of creating this intermediate “rule level” to make utilitarian practice more effective (if you think there already are good proposals in this direction, please let me know). It would also have the good side-effect of making it easier to reply to common objections like the transplant problem.

    About demandingness, I see only two solutions: either we accept that prudence/self-interest is a legitimate fundamental ‘moral’ value (which I believe would require a different foundation from usual utilitarianism, perhaps based on some theory of the individual meaning of life), or we accept that utilitarian conduct is the best moral conduct but that we (or most of us) are psychologically unwilling to be so moral at the sacrifice of our other conflicting personal motivations. The first would have the advantage of incorporating self-interest as legitimate, allowing us to somehow consider how important it is (depending on what justification is provided by the theory). The second option seems to be the most common one in moral theories, making moral conduct an ideal that often conflicts with our personal motivations, which makes us less moral; it would be utilitarian, though, to find ways to make us less bothered by our non-utilitarian personal motivations and feel less guilty about them.

    I believe that incorporating self-interest would take us closer to descriptive moral theory, in which we could somewhat predict how psychological and social factors shape moral action, and see things in a more naturalistic way.

  22. What about the role of an uncertain world? One should surely throw themselves off the bridge if fat enough, but one should not donate a kidney, because they know more about the consequences of their losing a kidney than about the other person gaining one.

  23. It’s understandable that the discussion centers on killing and being killed in the service of utilitarian philosophies, and how that might make people uncomfortable (which lowers their utility!).

    However, don’t forget that another thing utilitarian philosophies generally allow – more often than killing, in fact – is deceit, and not just of the ‘lie to the killer at your door’ kind which people like to bring up very often even though, in my opinion, it’s often not very relevant.

    In fact, I think that’s the kind of deceit most people would not strongly object to – it would not really lower their utility (unless they planned to go on killing sprees!), and it’s a situation where many also feel violence would be justified anyway.

    However, living in a strongly utilitarian society would, in my opinion, also mean having to constantly worry about people lying to you “for your own (or society’s) good”.

    While not as unpleasant as being killed, I find this thought way more realistic (so, in some ways, worrying), in that people already do this, even if they do have complex philosophical systems to justify it.

    On the other hand, I suspect I place a higher value on truth than most people; some people definitely seem to be ok with being lied to.

    On the OTHER other hand, it seems people don’t have a lot of tolerance for governments lying to them these days.

    What do you think?

  24. Some good points, but I think when people discuss utilitarianism, it is normally with an emphasis on the idea that governmental decisions / public policy should be based on utilitarian considerations, rather than that every individual should (or even *could*) act in a perfectly utilitarian manner.
