Could the fact that someone is more scroogelike – less willing to sacrifice for the sake of doing good – entail that morality is less demanding for her? The answer to this question has important implications for a host of issues in practical ethics, including issues surrounding adoption, procreation, charity, consumer choices, and self-defense.
If Scrooge gave away just a few pennies, let's suppose he would suffer a big loss of well-being; let's suppose that for Teresa to suffer a comparable loss, she would have to give until she was herself nearly penniless. Arguably, morality here demands less money from the more scroogelike, though one could claim that it's no less demanding, since arguably one's own well-being (or anyway something besides money) is the relevant currency for morality to demand. Perhaps this is wrong, but never mind that. (Issues about the currency of demandingness, as they relate to scroogelike characters, were helpfully brought to my attention by Christian Barry during a lecture I gave recently at the ANU, and by Brian McElwee in the paper linked to above.)
I want to ask a slightly different question: does the fact that Scrooge is more scroogelike – less willing to sacrifice his well-being (or whatever the relevant currency is) for the sake of doing good – than Teresa plausibly entail that morality is less demanding of him? Does morality require less of Scrooge’s well-being (or whatever the relevant currency is) for the sake of the overall good than what it likewise requires of Teresa, in virtue of his greater scrooginess?
There are some extreme internalists who will take this to be an easy question. They’d say the obligations you are under depend wholly on what your actual desires or other pro-attitudes are (or would be, were you fully informed), so obviously morality will be differentially demanding for the more or less scroogelike. But this position implies that, if you happen to be totally indifferent to whether a child bleeds to death on the side of the road, morality doesn’t require you to virtually effortlessly use your cell phone to call an ambulance. To put it politely, that’s counterintuitive. There are in fact several subtler and more plausible routes to the conclusion that morality is less demanding for the more scroogelike. I’ll now sketch one.
Here’s a little story, to kick things off: There’s been a terrible earthquake. You’re willing to incur some quite intense pain in order to pull Tiny Tim from the earthquake wreckage, saving his life but leaving him without the use of one of his arms. Suppose the pain to you is so intense that it’s morally okay for you to do nothing, allowing him to die. Still, it would be better, from an impartial point of view, if you took the pain and saved the boy. Just before you pull Tim from the wreckage, a knowledgeable bystander whose biceps are much less developed than yours points out that you really have two different life-saving options here: you’ve been focusing on PULL LEFT, saving the boy’s life but leaving him without the use of one of his arms, but you could instead PULL RIGHT, at no greater cost to you saving the boy’s life while preserving the functionality of both his arms. You become convinced that PULL LEFT is morally wrong, given the presence of the option to PULL RIGHT. And, in order to avoid performing that morally wrong act, rather than save Tiny Tim via PULL RIGHT, you opt for DO NOTHING.
What’s going on here? Is there anything wrong with, or irrational about, your behavior? Are you blameworthy at all? To review, here are the three things you could have done:
DO NOTHING, incurring no pain, allowing the boy to die.
PULL LEFT, incurring intense pain, saving the boy such that only one of his two arms will be functional.
PULL RIGHT, incurring the same intense pain, saving the boy such that both of his arms will be functional.
Prior to learning of PULL RIGHT, you were as a matter of fact willing to take the pain and PULL LEFT, and after learning of PULL RIGHT’s existence, you opted for DO NOTHING.
For the moment, let's "factor out" the information that you would have been willing to take the pain and PULL LEFT. Without this information about you, it is plausible to claim that DO NOTHING and PULL RIGHT are morally okay, whereas PULL LEFT is morally wrong. Again, I am assuming that it is morally okay not to incur the pain in order to help the boy, even though helping would be better impartially. If we take this sort of supererogation seriously, then it, together with the plausible principle that *it is wrong of you to do much less good (for Tiny Tim) if you could have done much more at no extra cost to yourself (other things being equal)*, implies that DO NOTHING and PULL RIGHT are morally okay and PULL LEFT is morally wrong. Parfit (p. 131) and several other philosophers defend this kind of view. I defended an analogous claim (about giving to charity) in an earlier post, and not everyone agreed with me; my critics said that it's implausible that the analog of PULL LEFT is morally worse than the analog of DO NOTHING. Of course, it wouldn't be morally worse if these were the only two options. Still, these critics insist they can't stomach the implication that PULL LEFT is morally worse than DO NOTHING even when PULL RIGHT is an option. Accordingly, they'd have to deny the existence of the sort of supererogation purportedly in play here, or else deny the seemingly plausible "avoid gratuitous suboptimality" principle (stated above in italics). But it seems that many others are happy to retain both of these things, and thus to accept that PULL LEFT is indeed morally worse than DO NOTHING. Let's go with that. Whereas PULL LEFT is morally wrong, DO NOTHING is morally okay. PULL RIGHT is morally okay – indeed it's supererogatory.
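(To lay bare the structure of that argument, here's a rough formal sketch; the shorthand – g for impartial good produced, c for cost to the agent, Perm and Wrong for the deontic statuses – is mine, not Parfit's:

\[
\begin{aligned}
&\text{(P1)}\quad \mathrm{Perm}(\text{DO NOTHING}) && \text{the supererogation assumption}\\
&\text{(P2)}\quad g(B) \gg g(A) \;\wedge\; c(B) \le c(A) \;\rightarrow\; \mathrm{Wrong}(A) && \text{avoid gratuitous suboptimality}\\
&\text{(F)}\quad g(\text{PULL RIGHT}) \gg g(\text{PULL LEFT}) \;\wedge\; c(\text{PULL RIGHT}) = c(\text{PULL LEFT})\\
&\text{(C)}\quad \mathrm{Wrong}(\text{PULL LEFT}) \quad \text{from (P2) and (F)}
\end{aligned}
\]

Other things are held equal throughout, and "≫" just means "produces much more good than". Note that (P2) does not condemn DO NOTHING, since PULL RIGHT comes at a much greater cost to you than DO NOTHING does; so DO NOTHING and PULL RIGHT remain morally okay while PULL LEFT comes out wrong.)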
Alright, now factor back in the specific information about you, that you were willing to take the intense pain and PULL LEFT if your only options were PULL LEFT and DO NOTHING. Next consider a more scroogelike person. She’s not a monster. She’d endure a mild splinter to save a little boy’s life. However, she’s unwilling to undergo intense pain to save him. Indeed, she is unwilling to take the pain and PULL LEFT when it’s between doing this and DO NOTHING.
It seems plausible to me that if you are unwilling to PULL RIGHT and opt for DO NOTHING in the original Tiny Tim case, you are open to criticism in a way, or to a degree, that this more scroogelike person – who is likewise unwilling to PULL RIGHT and opts for DO NOTHING – wouldn't be.
(If you display willingness to help and then don’t help, that can raise expectations and result in disappointment, thus making you more open to criticism than a more scroogelike person who didn’t display willingness to help. But let’s suppose that, in my examples, your willingness to help is not displayed to anyone. Still, the intuition of differential criticism persists.)
In what ways might you, a much kinder person, be more open to criticism than your more scroogelike counterpart? There are different categories under which criticism here might fall: wrongness, blameworthiness, and irrationality. These different types of criticism could be pro tanto or all-things-considered: it could be that what you did was wrong all-things-considered whereas what the more scroogelike person did was morally okay all-things-considered, or it could be that both forms of behavior were morally okay all-things-considered, but yours was nonetheless pro tanto wrong (there was a moral reason counting against your behavior but not against your more scroogelike counterpart's behavior). And these types of criticism can be aimed at different sorts of objects: people, their behaviors, or their mental states (e.g., whether you're willing to take the pain). So there are indeed many ways for this more scroogelike person to be less open to criticism than you.
We may not be able to argue that if you DO NOTHING, that would be wrong in precisely the same way that PULL LEFT would be. Arguably, PULL LEFT is wrong because it's wrong of you to do much less good if you could have done much more at no extra cost to yourself (other things being equal). Obviously, if you DO NOTHING, it would be a significantly greater cost to you to PULL RIGHT instead. Perhaps we can argue that it's wrong of you to DO NOTHING if you could have done much more good by incurring a cost you would have been willing to incur in order to bring about less good. But the trouble here is that similar reasoning may imply that, in the simple choice between DO NOTHING and PULL LEFT, it's wrong to DO NOTHING if you're willing to PULL LEFT, but not wrong if you're unwilling. Is this right? And what if you're in fact willing to incur the cost to bring about the lesser good but not the greater good? If unwillingness would make it not wrong to refrain from helping in the simple choice between DO NOTHING and PULL LEFT, would it do so also in the case where PULL RIGHT is an option too? This seems quite tricky, and drifting into the land of iffy oughts.
Nonetheless I think we can offer greater criticism of you for your unwillingness to PULL RIGHT than we can of your more scroogelike counterpart. We can argue that an agent is less rational, or more blameworthy, if her willingness to incur a cost to help others responds to the factors of a given situation in a clearly rationally or morally inappropriate way. Of course, sometimes one’s willingness to incur a cost fluctuates over time, even holding fixed the factors of a given situation. Often these are cases in which there is uncertainty or indeterminacy about one’s willingness to sacrifice for others, which manifests itself as intertemporal instability of willingness. But we can suppose that here we’re dealing with definite, genuine willingness (which I’ll just continue to refer to as “willingness”).
If one’s willing to make a sacrifice in a given situation, we can ask whether altering that situation in various ways would render it appropriate for one’s willingness to change correspondingly. It’s one thing to be unwilling to PULL RIGHT, because of the pain to you. But if you’re willing to bear the pain when Tim is wearing a green hat, it seems that your willingness to help would be responding to factors inappropriately if you were unwilling to bear the pain when Tim is wearing a blue hat. Similarly, if you’re willing to bear the pain when you can save Tim with only one of two arms functioning, it seems that your willingness to help would be responding to factors inappropriately if you were unwilling to bear the pain when you can save Tim with both arms functioning. Whether we’re here dealing with moral inappropriateness or rational inappropriateness is a question I won’t take up here.
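(As a rough schema – the notation is mine, nothing standard: write W(c, s) for definite, genuine willingness to incur cost c to help in situation s. Then the constraint seems to be:

\[
W(c, s) \;\wedge\; \big(s' \text{ differs from } s \text{ only in ways that give no weaker reason to help}\big) \;\rightarrow\; \neg W(c, s') \text{ is inappropriate.}
\]

The hat cases instantiate this with s and s' supported by equally strong reasons; the one-arm/two-arm cases instantiate it with s' supported by strictly stronger reasons, at the same cost.)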
It would appear then that we’ve identified one way in which morality or rationality asks less of the more scroogelike – and more of kinder folks like you.
(This blog post is connected to a paper-in-progress called “Whether and Where to Give” – I am very grateful to many friends and audience members at Oxford, Melbourne, Charles Sturt, and the ANU, for their helpful comments on these and related ideas.)
This is great, Theron. As usual, I’m puzzled by scope issues. A natural way to parse your conclusion that rationality (let’s say) demands more of kind people is that (very roughly) “If you are kind, then rationality requires PULL RIGHT”. But does your argument really support that, instead of “Rationality requires that if you are kind, you PULL RIGHT”?
Following the usual analogy, I wouldn’t want to say that morality requires more of murderers just because it requires them to murder gently. Rather, morality requires the same thing of everyone: if you murder, murder gently.
Thanks Teru! I favor the “Rationality requires that if you are kind, you PULL RIGHT” sort of reading. Putting the requirement outside the conditional will not always secure the result that rationality is just as *demanding* of everyone. Kind people will end up having more well-being (or whatever) demanded of them.
Interesting.
Isn't this, in some ways, a variation on the concept of the 'utility monster'? It works with the assumption that similar 'goods' can have different utility to different beings, and uses that to reach non-intuitive consequences.
A thought to supplement what I said below: The point about different resources having different well-being impacts is perhaps most relevant to the “currency of demandingness” issue I mentioned in the second paragraph of the post; this is in some ways analogous to the “expensive tastes” discussion in distributive justice.
Thanks Davide! Yes, I think I see a connection with the utility monster there. However, the specific conclusion I reached, about how kind people are more open to (rational or moral) criticism if they DO NOTHING than their more scroogelike counterparts would be if they DO NOTHING, strikes me as intuitively plausible – whereas I agree that it’s counterintuitive that we should all be fed to the utility monster, if that’d produce the most happiness.
Great article, Theron!
Here's an additional perspective on why I, if I were willing to PULL LEFT at the cost of undergoing intense pain, would be more blameworthy than a scrooge for choosing DO NOTHING upon discovering that I could also PULL RIGHT. (Maybe it's of help.)
Jonathan Dancy draws a distinction between different forms of relevance of moral considerations: according to him, there can be favourers and intensifiers for moral reasons. For example, the fact that I am initially willing and able to help Tim with PULL LEFT (when PULL RIGHT is still unavailable) speaks in favour of helping, or gives me moral reason to help Tim with PULL LEFT. Now, once PULL RIGHT enters the picture, it acts as an intensifier: it intensifies the favourer of my helping Tim, since I could now save both of Tim's arms at the same cost to myself. I now have even stronger reason to help him, which, if I instead choose DO NOTHING, makes me more blameworthy (since I was at first willing to respond to the favourer of PULL LEFT). This contrasts with the scrooge, who doesn't respond to the initial favourer in the first place.
Thank you Ben! I really appreciate this suggestion, which seems interesting. However, would favourers be enough to derive the conclusion that I would be more blameworthy if I DO NOTHING than a scrooge would be if he did, given our differential willingness to PULL LEFT? (Forget about PULL RIGHT for the moment.) Another way of asking it: how (if at all) would the main conclusion of my post, about people being less open to criticism in virtue of their greater scrooginess, be strengthened by appealing to intensifiers? I can see how this would increase the *degree* to which the kinder person is more blameworthy than the scroogier person, but we can already get the conclusion that the kinder person *is* more blameworthy without intensifiers, right?
Is there a literature on which supererogatory acts are more or less moral? I'd have thought that if DO NOTHING were in play, then anything above that was also some variant of morally fine. Plus, in the real world, people don't usually morally criticise samaritans who save someone's life on the basis of the sub-optimality of the welfare gains associated with the life-saving act. I don't see – after the decision is made – how that helps anyone at all. It certainly wouldn't help Tiny Tim recover the lost arm functions. (As a great man once sang, "it don't pay to think too much about the things you leave behind.")
Interesting exception – there are stories about litigious jurisdictions in which people are reluctant to intervene because if they (effectively) play PULL LEFT they can be sued. This being the case, and uncertainty being what it is, the incentives favour DO NOTHING. This sort of litigious environment seems to sit badly with people, since it seems to reek of ingratitude and suspicion of victims.
…or how about this. Pull Left saves one person, with whom you have signed a mutual assistance pact; Pull Right saves two people, but you have no special arrangement with them. Total utility people would probably go for Pull Right, but those of us who think other duties can override those considerations might argue for the primacy of Pull Left. [If more generally Pull Left saves m people and Pull Right saves n people, then at some point most of us might think that Pull Right becomes preferable, where n>>m. But whether that occurs at n/m=100 or n/m=10^6 or whatever, seems pretty open to me.] Presumably there’s a trolley problem for this – there seems to be one for every occasion.
Cool, thanks Dave! Quick responses to four things you raised:
On literature, two things you might look at are “Supererogation, Inside and Out” by McNamara, and the Stanford Encyclopedia of Philosophy entry on supererogation.
I wouldn’t say that PULL LEFT is “above” DO NOTHING in the relevant sense. It results in a better outcome from an impartial point of view, but the action is morally worse than DO NOTHING, given the presence of PULL RIGHT.
I agree that it may in some cases be counterproductive to criticize agents who perform analogs of PULL LEFT, though really it’s largely an empirical question (I’m not offering a snobby “it’s *merely* an empirical question” sort of reaction, just noting that there are some empirical complexities to this, which I’d need to study further). I discussed this a bit in the comments of an earlier post: https://blog.practicalethics.ox.ac.uk/2014/10/people-and-charitable-causes-are-importantly-different-things/
The new case you mentioned “…or how about this…” brings in a new factor, a kind of special obligation based on a pact. It’s an interesting factor, because it’s going to be relevant when we try to bridge claims about the Tiny Tim case to more real world examples, like giving to charity. I agree there’s a question about how weighty that sort of pact-based obligation is (if it has any weight at all), and I also agree that it can be outweighed by the saving of a large enough number of other people; my guess is that this number wouldn’t need to be astronomically large. I’m more confident about my guess when things are put in the context of a trolley problem. Suppose a trolley is headed toward 100 innocent strangers – it will gooify them in 60 seconds. One innocent person you have a pact with is rigged up with explosives set to detonate in 60 seconds. You can stop the explosives from going off only if you press a button on a remote control. The only way to save the 100 is by flipping a switch, bringing the trolley to a nice steady stop – at a safe distance from the 100. Unfortunately, you’re standing in between the switch and the remote control, and don’t have time to save all 101. In this case, I get the intuition that it’s OK to go for the switch, saving the 100 over the 1. So, yes, this is further support for your suspicion that there’s a trolley problem for every occasion!
Thanks for the reply, Theron. Even if you think that total utility is the only principle you need, I still think it’s a problem if you assign too low a weight to the special obligation. If it’s clear that you cannot be relied upon to keep promises, that will undermine trust. In iterated games, that matters a lot. [Repeated prisoners’ dilemma games have different, more optimistic solutions than single shot games, for instance.] It may be that if you bring about the sort of society that is populated by people who cannot meaningfully bind their own future actions through contracts (because they over-respond to the utility calculation immediately in front of them), you end up creating a low trust, low productivity, low mutual assistance world. Which presumably is not what you wanted to do.
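[To spell out that bracketed point with the textbook example – standard notation, payoffs T > R > P > S: in the one-shot prisoners' dilemma, defection strictly dominates. In the infinitely repeated game with discount factor δ, sticking with grim-trigger cooperation pays R every round, while defecting pays T once and P forever after, so mutual cooperation is sustainable as an equilibrium whenever

\[
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta} \;\iff\; \delta \;\ge\; \frac{T-R}{T-P},
\]

e.g. δ ≥ 1/2 for the usual payoffs T = 5, R = 3, P = 1, S = 0.]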
Part of this is motivated by my on-going concern regarding the conspicuous philanthropy/ethical altruist movement. This concern boils down to: young people who go into stockbroking with the aim of assisting desperately poor people in distant lands should beware of taking for granted the existing set of mutual obligations (with neighbours, including homeless ones) that support their comfortable, reliable, stable and high trust existence. Because without those mutual arrangements there are no stockbrokers.
Thanks Dave, I agree with this, as would most effective altruists – absolutely, it could be instrumentally important to respect (or to act as if one respects) special pacts and mutual arrangements with others.
Respecting at least some of these special pacts and mutual arrangements is likely to lead you in very different directions than effective altruists usually want to go, I suspect.
Hedley Bull once described some “elementary” societal goals, which he seemed to think were part of the essence of an ordered society. The second of these is “that promises, once made, will be kept, or that agreements, once undertaken, will be carried out.” Basically, if you don’t have security, and if you don’t have reliability (truth, Bull called it) and if you don’t have some approximately stable rules around possessions, then it’s hard to see how you can have an ordered society.
I think it's bad form simply to presuppose this sort of basic order. It may be that a utilitarianism focused on total utility ends up defending the sorts of mutual assistance packages that liberal democracies maintain (legal system, tax & benefit system, private property rights and so on), such that an effective altruist can pay their taxes and obey the laws in partial fulfilment of their end of the bargain that sustains the order they enjoy. But, as with a thesis, "partial fulfilment" is a phrase that can contain a lot of different versions of the word "partial"… the ethical altruists I hear talk seem to think that paying their taxes and obeying laws is pretty much it as far as their obligations to fellow citizens (etc.) go. This contrasts with most major political movements, which emphasise duties beyond our legal requirements – parties of the left often champion an enlarging of those duties (esp. tax & benefit) and parties of the right often emphasise duties of charity within borders to deepen social cohesion. The idea that the social contract as it stands is sufficient and that the real action is elsewhere is fairly niche.
Interesting points, Dave. Again, I suspect effective altruists would be very open-minded about what sorts of activities in fact do the most good (in total utility terms), so if one had good evidence that the most good is done by channeling money to deepen national social cohesion, rather than by (say) giving to distribute medical care to people in extreme poverty, that evidence would be very welcome as part of the ongoing discussion about effective giving.
In the case of the little boy:
For some people, the situation decides one's willingness to help: one does not think about the pain, and chooses the best way, PULL RIGHT.
For others, the pain decides one's willingness to help, and they choose DO NOTHING.
I think the situation should always be the decisive factor. A life-saving situation is different from giving money to a beggar.
"Of course, sometimes one's willingness to incur a cost fluctuates over time, even holding fixed the factors of a given situation": this is the natural pace of human behavior when giving.
For hundreds of thousands of years, human behavior was focused on survival; social behaviors and altruism are relatively new, so being reluctant to give is much more spontaneous and human than we think.
Altruism requires reasoning.
Thanks Bahijeh, interesting points. I’d certainly agree that one’s willingness to incur a cost to help others varies from situation to situation, and even within a given situation; but whether such variation is *open to criticism* is a further question. A change from definite, genuine willingness to help to unwillingness to help is open to criticism when it’s done on the basis of morally/rationally irrelevant differences between the situations (e.g. Tim is wearing different colored hats). Your comments about evolution and altruism made me think of *The Expanding Circle: Ethics, Evolution, and Moral Progress* – you might enjoy reading this book.