What Kind of Altruism is Most Effective?

Imagine that you have been left a large legacy, and would like to donate it to a charity, with a view to doing the most good possible.

It’s natural to think that one set of charities you should consider are those which cheaply save people’s lives, and perhaps particularly young people’s lives. For then you can count the good in the rest of those people’s lives as a good you’ve brought about.

In fact, the calculation has to be a bit more complicated than this. If you want to produce the most good overall, then you have to take future generations into account, and remember that people use resources. Let’s assume that, other things equal, future people matter as much as present people (just as people distant from us in space matter as much as people closer to us). If you save a life now, and claim that this is part of doing the most good, then this commits you to the view that the resources which will be consumed by the person you’ve saved are probably better used by them than by some future person or people. That’s a hard thing to show, especially when one takes into account that human beings are likely to use resources more efficiently in the future and that our current use of resources threatens the existence of future people.

Of course, by saving someone’s life, you will also prevent a good deal of suffering which their friends and relatives would otherwise have experienced. But grief, though bad, seems considerably less bad than certain diseases which, though they do not kill those whom they affect, cause severe and chronic suffering. If you prevent someone from catching such a disease, you don’t have to take any strong view on future generations and resources. It just seems obvious that by preventing some person, who is going to go on living anyway, from experiencing horrible suffering, you have brought about a great deal of good.

If this argument is correct, then if you want to do the most good, you should consider donating to charities which focus on non-life-threatening conditions involving severe and chronic suffering, not to those which focus on saving lives.

(Thanks to Theron Pummer for discussion. See his excellent recent post on the general topic of population ethics.)

9 Comments

  1. Thank you for this thought-provoking piece. However, I don’t see why people who give to charities that focus on saving lives are committed to claiming that they are doing the most good. What they are doing is a supererogatory act, precisely the type of act for which it does not matter whether they are doing the most good (it suffices that they do more good than bad). Of course, I admit that a question of instrumental rationality remains, namely, which charity ought we to give to? But there are many practical considerations independent of efficiency that one might weigh when deliberating on this question: Is the charity trustworthy? Will my money ever reach those who deserve it? Are the long-term prospects of the charity promising? Is the charity run appropriately? So I don’t see any grounds for your “I ought to do the most good” condition to apply.

    1. It seems I read too quickly; you are not in fact endorsing the claim I ascribed to you in my previous comment. Sorry about that.

      The way I understand your point, there is a practical conflict between saving actual lives and reducing actual or future harm; and the friend of utilitarianism + maximalism + totalism (overall utility is the sum of exactly the individual utilities) is committed to adjudicating in favour of the latter simply because the present + the future contain more people whose harm you can actually prevent or reduce, and hence, by totalism, more overall utility.

      I don’t think totalism is promising, but let’s grant it for the sake of the argument. Still, if that is a practical conflict, considerations about uncertainty suggest that you should restrict the scope of your assessment of the consequences to individuals of whom you can be certain (or closest to certain) that they will be better off thanks to your action, namely, actual individuals. (That is consistent with giving future individuals some consideration nonetheless, of course; for instance, you would want to save pregnant women before old men, and pregnant women with a healthy embryo before pregnant women with an embryo that has very few chances of making it.)

      So now you need to argue that it’s better to reduce pain than to save lives as far as actual people are concerned. It might be (e.g. if many suffer deeply while few are in danger of death). But it might not (e.g. if few suffer a little while many are in danger of death). Thus I don’t see how the argument can have the degree of generality required for your point.

  2. Interesting post!

    If this argument is correct, why leave it to chance which people die? It would be much more effective to work out a strategy for maximising the resources available to future generations and actively go out to achieve it, i.e. kill certain people, or radically shorten everyone’s lifespans, and so on, depending on what the best strategy was.

    And even if it is correct, it is not at all certain that future people will use resources more efficiently: the second graph on this page suggests energy usage per capita has significantly increased over time: http://ourfiniteworld.com/2012/03/12/world-energy-consumption-since-1820-in-charts/. I guess we have more efficient technologies than before, but then we have more of them, and more of us have them (i.e. they are cheaper). Why do we think this trend will not continue?

    Finally, I don’t think not existing is bad (it’s not really anything), so future people don’t matter as much as present people. But if not existing did matter, then surely dying would matter too (by itself, I mean, and not just for the grief it causes)?

  3. This is a good example of the “tyranny of future generations.” If one does not discount the lives of future generations, and there will be sufficiently many of them, then our lives are of virtually zero comparative value. Each of us would be not one of 7 billion, but one of a trillion, or a quintillion, depending on how well things go for humanity. On most consequentialist moralities, when viewed from this cosmic perspective, the present generation should not merely conserve but reduce its standard of living to “muzak and potatoes” for the sake of this vast army of future human beings. Life, suffering, and nearly everything that matters would be consumed by this vast utility monster of the future. Probably the only thing that really matters to these future generations is technology. If future lives matter equally to present lives, present lives are virtually inconsequential: all our efforts should be devoted to radical technological advance and sufficient procreation to realise the largest utility monster possible. Or we could reject the premise that harm and benefit to future generations matter in the same way as harm and benefit to the present generation …

  4. Hi Roger. Thanks for this post – I think you raise a very important consideration. I’d agree that we shouldn’t discount the well-being of future people merely in virtue of the fact that they will exist in the future. It’s a more plausible thought that we should give greater priority to the worse off; if so, we may have a better case for discounting the well-being of future people relative to ours, insofar as they’ll be considerably better off than us. However, I’m not sure we should give priority to the worse off, and I’m even less sure we should do so when dealing with future generations and merely possible persons.

  5. If we assume that existential risk applies periodically, e.g. by postulating a 20% extinction probability per 100 years, then time discounting becomes more sensible. Those who argue that the long future matters disproportionately because of the large number of lives affected must assume that such a model almost certainly does not hold, that is, that existential risk almost certainly does not apply periodically.

    I think “number of lives saved” has never been a very good proxy for ethical utility. But it doesn’t follow that disease alleviation is therefore the best option. Alternatives such as differential technological progress, and arguably some forms of anti-cruelty advocacy, might have nonlinear effects that beat it (though I think there is a considerable speculative element in this that I personally find demotivating).
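[Editor’s note: the arithmetic behind the periodic-risk point above can be made concrete. The sketch below uses the comment’s hypothetical 20%-per-century figure; the function name is illustrative. With a constant per-period extinction risk, the probability of humanity reaching year t decays geometrically, which has the same mathematical shape as exponential time discounting of future lives.]

```python
def survival_probability(years, risk_per_century=0.20):
    """Probability that humanity survives `years` years, assuming a
    constant (hypothetical) 20% extinction probability per 100-year
    period. Constant per-period risk yields geometric decay."""
    return (1 - risk_per_century) ** (years / 100)

# The expected value of a life saved t years from now shrinks
# geometrically with t, i.e. it is exponentially discounted:
print(round(survival_probability(100), 5))  # 0.8
print(round(survival_probability(500), 5))  # 0.32768
```

On this model a life 500 years out carries roughly a third of the weight of a present life, so the “vast numbers of future people” argument depends on the risk not compounding in this way.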

  6. Thanks to all for your interesting and thought-provoking comments.

    Andrews: Quite agree about lack of generality. This is meant to be an argument about the way the world happens to be, now.

    Sarah: I was imagining someone who is deciding how to donate. They may well have moral objections to killing, and even if they’re consequentialist there are arguments against killing based on side-effects. The energy point is a good one, though one might hope that people in the future, even if they use more energy, won’t damage the environment as much as we do. Owen Schaefer pointed out to me that treatment of painful conditions may be more efficient in the future as well, so perhaps the best thing is to delay donating or even set up a trust. But it remains the case that many people want to do their good *now* (or soon, anyway). Your overall view sounds possibly person-affecting, and there are problems with such views (e.g. they seem to allow me to bring into being someone who will have a very short life of agony).

    Julian: See my first response to Sarah. I wasn’t assuming consequentialism, so the restricted principle of impartial benevolence here might sit alongside quite strong agent-centred options, permissions, or prerogatives.

    Theron: Good point about priority. That may provide a partial response to Owen’s point about trusts (see above).

    HT: Your point about discounting is right, as long as we do it purely probabilistically. This is also relevant to Owen’s point (and indeed Julian’s). And I agree there may be even better ways to produce good, so I might narrow my focus just to those who’ve already decided to donate to ‘standard’ charities.

  7. “Quite agree about lack of generality. This is meant to be an argument about the way the world happens to be, now.”

    Okay, but what grounds the claim that *now* people with chronic diseases are or will be worse off than people whose lives are or will be directly threatened?

  8. If we recognize that future generations are us in the future, then any tyranny is being done to ourselves. Individuals do not reproduce themselves exactly, because of sexual shuffling, but humanity is reproducing itself. Hurting ourselves in the long term (and quite a bit in the short term as well), in the belief that we are doing ourselves good overall, is what addicts do. Reproducing too much and using resources at unsustainable rates obviously feels good to most people, and the logical consequences get ignored with superstitious or mystical excuses. “God will provide” is a typical mystical excuse. “Science has found ways around limitations in the past, so it will happen again” is a superstitious one. The latter is superstitious because there is no relationship between what was found in the past and what might be found in the future.

    As far as ethics goes, we are also a social species: we live by teamwork and die without it. Everyone has their naked body to test this observation with, if desired. The need to live by teamwork is a solid basis for ethics and morals; there doesn’t seem to be any rational basis for caring about each other, or for having moral codes, without it. Without this solid foundation, laws get put in place on the basis of social instinct and emotions like empathy. Having these reactions is important to a social creature, but instincts and emotions can be blind to rational consequences, and it is easy to do counterproductive things if they are followed without rational tempering. One can have empathy for animals, for example, while ignoring the logical consequences of not controlling their population, because killing the excess offspring defies empathy. But blindly following empathy can mean that all die of starvation and disease, when killing the excess population could mean maintaining a small and healthy population. And help feed people, of course.
    This problem of excess reproduction is shared by humans, but of course killing excess people is a far more difficult problem than killing excess animals. Killing sperm and eggs is generally emotionally and physically easier than killing children or adults, but even so, people have to agree to this; many don’t, and it has been easier to agree to mysticism and superstition. And to agree to war, with groups formed around agreement on different forms of mystical or superstitious belief. Nothing gets solved that way. We are often shown peace treaties being negotiated, but if people don’t have population and per capita resource use on the table, the whole thing is meaningless.

    But irrational beliefs and behavior become the logical control in the end: species that overpopulate and overuse resources too much have a die-off. They may fight over what is left on the way down, but with too much damage done, no large group can win, as there is no chunk of resources left large enough to take and live on. In the case of humans, those who count on superstition and mysticism about population and other problems, and refuse to behave logically in general, can kill themselves off with these irrational beliefs and behaviors. Rational people do not need to do anything but get out of the way of this insanity. Their primary defense is flight, plus the intellectual weapons of observation and logic. Irrational people react to logical observation that contradicts their beliefs with angry denial, fanaticism, and de facto suicide, fighting with ever greater fanaticism among themselves over which form of mystical or superstitious belief is superior. Refusing to try to force anyone to behave rationally is a defense against being attacked; not perfect, but it should often work. If we see that we need efficiency, then generally refusing to force behavior follows naturally: forced behavior is inherently less efficient than voluntary behavior.

    The problem of “who is going to volunteer to die”, or “whom do we kill”, is solved by all this. The irrational “volunteer” to kill themselves; they are already doing it. And who can argue that telling people they are behaving self-destructively, and specifically how they are doing it, and giving them an alternative, a chance to be rational, is a bad thing to do?

    I’ve written more about all this in the notes on my Facebook page. There are several essays there; the main one is “Principles for Society”.
