
The Ethics of Giving: How Demanding?

How much of your money should you give to effective charities?  Donors are often made considerably happier by giving away substantial portions of their income to charity.  But if they continued giving more and more, there’d surely come a point at which they’d be trading off their own well-being for the sake of helping others.  This raises a general question:  how much of your own well-being are you morally required to sacrifice, for the sake of doing good for others?  I’m currently in Australia giving some talks on the ethics of giving (at the ANU and at CAPPE in Melbourne and Canberra), and have been thinking about this topic a bit more than usual.

 According to utilitarianism and several other forms of consequentialism, everyone’s well-being, including yours, gets exactly the same weight.  These views entail that there is no limit, in principle, on how much of your own well-being you could be called upon morally to sacrifice, for the sake of doing good for others.  If, for example, the only way to stop a runaway trolley from running over and killing one person and crushing another person’s hand were for you to dive directly in front of the trolley, sacrificing your own life, then these consequentialist views would imply you are morally required to do this (assuming this act would result in the most well-being, of all available acts).  Many authors claim that morality is not this demanding, and defend the existence of agent-relative permissions or reasons.

 Suppose, in Singerian spirit, we are seeking a relatively modestly demanding principle that would nonetheless yield powerful implications about the ethics of giving to effective charities.  Consider:

       (P)  Whenever you can do a lot of good for others, at only a small cost to yourself, you should do so.

By “should” here I mean that you are morally required to do so, that failure to do so would be morally wrong.  And by “cost” to yourself I am referring to the cost in terms of your well-being (or your desires or projects).

(P) entails that you should save a drowning child if the only cost to you is muddied shoes. It also has, when iterated, very significant implications about giving to charity. Suppose today I receive an email from Deworm the World saying that if I give them $100, this will result in 200 children having their parasitic worms removed. Here, for a small cost to myself, I could do a lot of good for others, and (P) would imply that I should do so. But suppose tomorrow I receive a similar email from Deworm the World. Again (P) would imply I should send them $100. And again for the day after tomorrow, and so on. Once I am poor enough that the loss of yet another $100 constitutes more than a small cost to me, (P) will no longer imply that I should make another donation. But for each possible donation up to this point, (P) implies I should make it. And it may be that by the time I get to this point, my overall well-being level is much lower than it would have been if I had not made this series of donations. Accordingly, many object that (P) is still too demanding.
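To make the structure of this iteration explicit, here is a toy formalization; the symbols (w, c, s, N) are my own illustrative assumptions, not notation drawn from the post or its sources:

```latex
% Toy model of the iterated application of (P).
% w_t : my well-being level after t donations (assumed notation)
% c_t : the well-being cost to me of donation t+1
% s   : the bound on what counts as a ``small'' cost
\[
  \text{(P) requires donation } t+1
  \quad \text{whenever} \quad
  c_t \le s \ \text{and the donation does a lot of good for others.}
\]
% Let N be the first donation whose cost exceeds s. The total
% sacrifice that (P) demands before it stops applying is then
\[
  w_0 - w_N \;=\; \sum_{t=0}^{N-1} c_t ,
\]
% which can be large even though each individual c_t is small;
% this is exactly the demandingness worry just described.
```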

Cullity thinks the problem with (P) is that it doesn’t take into account past sacrifices. On his view, if one’s past sacrifices, together with the further cost one is considering incurring, add up to a sufficiently large loss of well-being (comparable to what one would lose in suffering a “serious long term injury”), then one is not morally required to incur this further cost, even if it is small, and even if it would bring about a lot of good for others. But suppose that, while I’ve sacrificed a lot of well-being in the past (far beyond the “sufficiently large amount” Cullity has in mind), I have also been, and will continue to be, extraordinarily well off. In this case it seems plausible that if I could do a lot of good for others by making a small sacrifice, I should; it would be wrong not to. Perhaps what Cullity meant, or anyway should have meant, is that one is not morally required to act in a way that takes one’s lifetime well-being level below a particular critical threshold (analogous to Crisp’s proposal about practical reason). What’s the threshold? One could claim it’s uncertain, or indeterminate, where it is. Let’s consider a principle less demanding than (P):

       (P*)  Whenever you can do a lot of good for others, at only a small cost to yourself, you should do so, provided that paying this cost doesn’t take your lifetime well-being level below a critical threshold.

How demanding (P*) is depends on how high the critical well-being threshold is. But notice that, no matter how high the threshold is, (P*) entails that there is no limit, in principle, on how much of your own well-being you could be called upon morally to sacrifice for the sake of helping others. For instance, imagine that each day you receive a large sum of money, and that holding onto half of it would keep you above the threshold. If you also spent the other half on yourself, you could further increase your well-being. But it is not hard to see how (P*) would dig into these extra funds; like (P), (P*) can be iterated. As long as you could do enough good for others with these extra funds, (P*) would imply you should give them up. Insofar as (P*) isn’t overly demanding, and (say) utilitarianism is, the fact that the latter recognizes no limit in principle on how much of your well-being you could be called upon morally to sacrifice isn’t what makes it overly demanding.
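The same toy notation (again my own, purely illustrative) can capture (P*)’s threshold clause and the daily-income case just described:

```latex
% Toy sketch of (P*); all symbols are illustrative assumptions.
% w : your lifetime well-being level   s : the ``small cost'' bound
% T : the critical threshold           c : the cost of the act
\[
  \text{(P*) requires the act}
  \quad \text{whenever} \quad
  c \le s , \quad w - c \ge T , \ \text{and the act does a lot of good.}
\]
% Daily-income case: a sum I arrives each day, and keeping I/2
% already holds w above T. Each day the surplus I/2 then satisfies
% both conditions, so over n days (P*) requires giving up
\[
  n \cdot \tfrac{I}{2}
\]
% in funds; the cumulative sacrifice grows without bound as n grows,
% even though w never dips below T.
```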

 So (P*) seems a relatively modestly demanding principle that would nonetheless yield powerful implications about the ethics of giving to effective charities.  Depending on what the critical well-being threshold is, and where you stand with respect to it, (P*) could morally require you to give away a lot of your money to charity.

But is this principle demanding enough? What if you were just above the critical threshold, and could sacrifice a tiny bit of your well-being in order to prevent a comet from causing the extinction of all sentient life on Earth? Moreover, suppose you had some guarantee from an omniscient being that this would really be the very last time you’d ever be called on to make any sacrifices for others. (P*) would imply that you’re not morally required to make this sacrifice. This suggests to me that (P*) is too absolutist about slight tradeoffs of well-being near the critical threshold, and that we should think about a reformulated principle (P**) that avoids this feature. But then, insofar as (P**) isn’t overly demanding, and (say) utilitarianism is, the fact that the latter recognizes no well-being threshold below which you can never be morally required to bring yourself (for the sake of doing good for others) may not be what makes it overly demanding.

If utilitarianism and other forms of consequentialism are indeed overly demanding, in virtue of what are they overly demanding? And what’s the best alternative principle, if not (P) or (P*)? In approaching these questions, we should bear in mind the reasons why several authors – including Kagan, Unger, and Arneson – have argued that it may be impossible to find a defensible principle that isn’t highly demanding. At any rate, as with nearly all areas of ethics, it is probably impossible to find a principle here that wholly avoids counterintuitive implications.

 I’d like to thank Peter Singer for a useful discussion yesterday, which coincidentally was the day of the release of his new book The Most Good You Can Do.


6 Comments on this post

  1. Without adding anything to the actual argument, I would draw attention to the Buddhist tradition that extreme poverty can be equivalent to, and indeed the only real means for, “true” happiness. It’s said that such a level of poverty (presumably so long as you are capable of continuing to survive with discomfort) frees you from cares of any kind, per this story:

    The Buddha and his followers were picnicking on the side of a road, laughing and enjoying each other’s company. Then a farmer walked by, much in distress, and asked the group if they had seen a cow wander by. “We have not seen a cow,” said the Buddha, “but you might try heading down to the river to the south; a cow might wander in that direction.” The farmer said, “Thank you, sir. I envy you priests; not only have I lost my cow, but the fences on my property are rotting, there is a hole in the granary which I cannot repair, and which lets vermin in, and I fear that all this will lead to the ruin of my family.”

    After the farmer left, the Buddha said to his followers – “See how unrivalled you are in happiness, you who have nothing in the world. If you were like that farmer, with many cows and other goods to your name, how many more would be your cares and your worries, and how difficult it would be for you to enjoy life.” And then with a smile, he said, “That is why you must learn the fine art of ‘cow-releasing.'”

    I think that we can lose MUCH MORE than the average person would suspect and still be happy, with the proper frame of mind.

    1. Thanks Caley. This sort of Buddhist critique of consumerism is echoed in the movie *Fight Club* (e.g. “The things you own end up owning you” and “It’s only after we’ve lost everything that we’re free to do anything”). There is some truth here, but there are limits on how many cows it’d be good for one to release, so to speak. The peace of mind and freedom one gets from having less stuff is good, but then so are health, nutrition, protection from violence, etc. (things people in extreme poverty don’t have). Also note that the first link of my post is to an article on the connections between money and happiness – presumably relevant to what you’re gesturing at here. One interesting snippet: “If experiential sampling provides a more objective measure of wellbeing [than self-reports of life satisfaction], then money buys less happiness than most studies indicate.”

  2. The idea of a critical well-being threshold makes sense. But it does not have to be framed in moral terms; we can frame it in terms of psychological realism: the critical threshold of well-being is the one below which you cannot moralize other people into going, and, for yourself, the one below which you cannot realistically commit to going without your motivation breaking down. Both of these might be quite high.

    In utilitarian and practical terms, this suffices completely.

    Other considerations:

    – The cost-effectiveness of even the most effective charities is debatable.
    – Even if morality is demanding and we could commit, it is possible that we simply do not want to be that moral. We can choose to be less moral than we could be.
    – I still don’t know whether a comet causing “the extinction of all sentient life on Earth” would be a good or bad thing according to utilitarianism. Suffering is quite frequent and severe, and humanity will almost certainly stay orders of magnitude below the maximum efficient generation of wellbeing, which is supposed to make up for it.

    1. Thanks Hedonic Treader. It’s largely an empirical question how high the relevant well-being threshold (to be built into the standard for blaming others and the aim for oneself) implied by psychological constraints plus utilitarianism would be. It could turn out in fact to be low, in which case some of the concerns about demandingness may resurface (and of course there would remain hypothetical cases in which utilitarianism would imply very demanding conclusions). Moreover, owing to psychological differences across people, utilitarianism might imply different such thresholds for different people. I take it this would introduce a few further complications and issues.

  3. Thanks, Theron. This is a very clear and helpful post. I have a question about how to describe P**. Isn’t P** really just going to be a revised version of P*, with a lower threshold? (I.e. a critical threshold view allows you to put the threshold as low as you like — until the view turns into utilitarianism, of course.)

    1. Thanks Roger. I was thinking that P** would not have the “absolutist” feature that P* has. On P**, there is *some* amount of good for others such that you should make a tiny sacrifice of your own well-being in order to bring it about. One possibility is to think of P** as a refinement of P, according to which, “Whenever you can do a lot of good for others, at only a small cost to yourself, you should do so.” But perhaps on P** what counts as “a lot” would increase, the worse off you are in absolute terms (perhaps as a result of your sacrificial acts). Another sort of principle would say that you’re permitted not to perform an act that would bring you from some particular high well-being level all the way down to some particular low well-being level (no matter how much good for others this would do), but that, for the sake of doing a lot of good for others, you may be required to make each small sacrifice in a series of sacrifices that would have the same effect of dramatically lowering your well-being level.
