
People and charitable causes are importantly different things

Like Prot – the lovable character played by Kevin Spacey in the underrated movie K-PAX – you’re an intelligent benevolent extraterrestrial who has just been beamed to Earth.  Sadly, unlike Prot, you have no return ticket.  The good news for you is that just moments after hopping off of your beam of light, you found a briefcase stuffed with $3 million.  Being benevolent, and having concern for the inhabitants of Earth, you decide to give nearly all of this money to charity.  Being completely new to the planet, however, you do not yet have any special concern for anyone here – no friends, no loved ones.  Having this equal concern for everyone, you want simply to do the most good possible, and so you decide to give this money to the most cost-effective charities you can find.

Exit science fiction scenario.

One important difference between each of us and this Prot-agonist is that we do have friends and loved ones; we have rich shared histories with them, we care deeply about them, and, crucially, the level of concern we have for them is not on a par with the general concern we have for strangers.  If your fiancé were drowning in a lake to your north, and ten strangers were drowning in a lake to your south, and you could either rescue the one to your north or instead the ten to your south (but not all eleven!), you’d probably head north.  Whether this constitutes morally good behavior on your part is a matter of controversy among contemporary ethical theorists.  But let’s assume the commonsense view that it’s not wrong of you to save your fiancé over the ten others.  This degree of special partial concern is, we’ll suppose, justified.

Tragedy strikes:  after several long months of battling, your fiancé dies of cancer.  You are absolutely devastated.  But, quite admirably, you remain the kindhearted person you always were, and – inspired by some recent social media spotlights on philanthropy and fundraising – you decide to give a substantial portion of your income to charity.  You could focus your donations on a wide variety of causes:  heart disease, AIDS, cancer, tuberculosis, malaria, schistosomiasis, and so on.  Having recently lost someone to cancer, and owing to our tendency to “take action and give for deeply personal reasons,” you browse the internet for the best cancer charities you can find, and eventually settle on one.  Before pressing “enter” on its payment page, you have another thought:  Hold on, what exactly is the connection between my fiancé and this particular cause?

It occurs to you that your fiancé is not among the beneficiaries of the charity (and not only because they’re no longer living).  Indeed, those you’d be helping, whether your money went towards fighting AIDS, cancer, or malaria, would all be complete strangers to you.  We granted that the special concern you had for your fiancé would justify you in placing significantly more importance on their life and well-being than on those of strangers, but would this same special concern for your (now deceased) fiancé justify you in placing significantly more importance on the lives and well-being of strangers afflicted with cancer than on those of strangers afflicted with heart disease, AIDS, tuberculosis, malaria, or schistosomiasis?  Suppose you could prevent one stranger from dying of cancer, or instead prevent ten – or one hundred – from dying of malaria.  Shouldn’t you help the greater number in this case?  The bond between you and the stranger dying of cancer doesn’t seem sufficiently strong to outweigh something as morally heavy as ninety-nine lives.  And this is no outlandish scenario:  charities combating malaria are likely to be hundreds of times more cost-effective than most other charities.  (Though focusing on particular cancer treatments in the developing world can also be highly cost-effective.)

You continue to ruminate:  fighting cancer is a meaningful way of honoring my late fiancé, and this is what justifies me in especially favoring cancer charities.  I am not sure this will cut the moral mustard.  This latest thought questionably presupposes that the relevant group to which your fiancé belongs is “people with cancer” rather than “people who have suffered from a crippling illness” or simply “people who have suffered.”  Without some such questionable presupposition, it would be at least as plausible to claim that reducing unnecessary death and suffering appropriately honors, and sincerely expresses your love for, your late fiancé (and you should then in turn be open to the possibility that you could do much more toward realizing this italicized aim by giving to a malaria charity than a cancer one).  On reflection, it often seems bizarre to invest much special emotional, personal, or moral significance in the very particular way in which someone is harmed.  Holding constant the amount of harm done, does it really matter whether the cause is cancer, malaria, or oncoming traffic?

If you do give to a malaria charity as a symbolic expression of concern for your loved one, simply as someone who has suffered, it may not be immediately obvious to those around you that this is part of your reason for giving – whereas it might be if you gave to a cancer charity (given that someone close to you was a victim of cancer).  But this shouldn’t dictate where you donate, and anyway you can always explain yourself!

Special concern for your fiancé may justify your prioritizing them over others, but it does not entail a special reason or justification for you to focus your donations on cancer charities.  At least, if there is a connection between these two types of special partiality, it is not obvious what it is.  Other authors have also cautioned against projecting moral properties of persons onto causes:  the plausible claim that all people are equally morally important is not to be confused with the very implausible claim that all causes are equally morally important.  Moreover, it is not the case that being fair to all people translates into giving equally to all charities, or determining where to give on the basis of a coin toss (or several).  The general lesson:  people and charitable causes are importantly different things.  If we can remember this at the right times, we’ll be better able to give using our heads, in addition to our hearts.


19 Comments on this post

  1. Interesting article. I give to a cancer charity because cancer has killed people I know and love. Likewise heart disease. (These are what I think of when I hear the flippant phrase “first world problems”. And strokes, of course, and degenerative diseases.)

    But I guess I’d reject the bit which you characterise as “the plausible claim that all people are equally morally important”. It’s a view from nowhere. It maps to no one’s experience. Importance – issues of value more generally – don’t just hang in the air independent of people. Things can only be important *to someone* (or to some group). And as soon as you lock the act of valuing something to an agent, then you import that agent’s perspective, which – though these perspectives vary a lot – never includes the view you cite as being “plausible”. The idea that all people are morally equally important strikes me as being a lot like some of the abstractions of economics, which are currently being refined, revised and sometimes over-turned by advances in behavioural science: just as the dictator or ultimatum games show that people fail to maximise their expected utility, I expect there are some obvious experiments that could show that no one actually lives (or intends to live, or admires living) according to “all people are equally morally important.” It smells like the same sort of stuff, to me.

    (The view that within some group everyone is equally morally important may be a useful abstraction for that group. It may be a useful way for a Treasury or justice system to think – to first order* – about the citizens they serve. But maybe this is because institutions are capable of holding funny beliefs that no actual people would hold.)

    *Note that they usually give special consideration/moral weight to subgroups within the citizenry: the poor in the case of Treasuries; afflicted minorities in the case of justice systems. So even in these cases the weights aren’t uniform.

  2. Thanks for your comments, Dave. I am very sorry to hear cancer has killed people you love. Without warning it can destroy those very near to us, and it hurts so much when this happens (unfortunately I have first-hand experience with this, too). I should take this opportunity to emphasize that the main point I’m making in the article does not in fact depend on the claim that all people are equally morally important (whether that’s interpreted as “important, *period*” or “important *to* particular agents”). I am granting that some degree of special partial concern and treatment for loved ones is justified. My main point here is that this partiality for particular *people* wouldn’t clearly translate into a reason or justification for favoring particular *charitable causes* (e.g., those fighting the illnesses that those near and dear to the agent have suffered from). A more tangential remark: even if no one acted as if all people are equally morally important, that wouldn’t show that it’s false that all people are equally morally important. I’d distinguish between “is” (what we actually do) and “ought” (what, ideally, we should do), and locate the view that all people are equally morally important on the “ought” side. This isn’t to argue for this view, just to point out that I don’t think it could be refuted by an experiment showing that people aren’t living up to it.

    1. Thanks – I very much agree with the point that the partiality one may feel for people doesn’t necessarily map to a cause.

      On your tangential remark – yes, I agree that “even if no one acted as if all people are equally morally important, that wouldn’t show that it’s false that all people are equally morally important.” You can interpret the outcome of the ultimatum/dictator games as showing that people are irrational. Or you could interpret the result as telling us that our theories of rationality are inadequate (too parsimonious, perhaps, and empirically inaccurate). (I favour the latter.) Either interpretation is consistent with the data. Likewise, you might say that the outcome of my hypothetical game shows that people fall short of what they ought to do. My interpretation would be that any moral theory which implies that all people are equally morally important is inadequate (too parsimonious, empirically inadequate).

      [There’s ways in which the sort of Singer-ite thinking I encountered among the “giving what we can” folks in Oxford reminded me, more than anything else, of ultra-dry Friedmanite guys I used to work with at the NZ Treasury. In both cases, when theory and observation run into each other on various blind bends, they seem seriously to believe that it is because actually observed people are driving on the wrong side of the road!]

  3. Thanks Dave. In claiming these theories (of rationality and morality) are “too parsimonious, empirically inadequate” I take it your worry is that they don’t correctly describe agents’ actual behavior. (Or is that not your worry?) I was thinking that those putting forth these theories would say they’re not trying to describe agents’ actual behavior; instead, they’re offering something normative, or prescriptive — they’re saying what agents rationally or morally should do. One might argue that what we should do is constrained in some way by what we *can* do. Whether or not we’re able to live up to the impartial view that everyone’s well-being matters equally, I do think we can live up to the standards outlined in the Giving What We Can pledge (their 630 members are proof that we can do it!). The group isn’t Giving What We *Can’t*!

    1. Thanks Theron – I think my objections are multiple. One (1) is that the theories are quite poor at describing the observations, so you could call that an objection about empirical adequacy. Another (2) is that they are too simple to contain all the bits that (I think) a theory of rationality or morality might (in my world that would be an objection about model structure). And a third is that because they’re quite clunky (see (1) and (2)) I would not expect them to be very convincing in any prescriptive or normative role. And (unsurprisingly) I find them of restricted value in those roles.

      You call the “all morally equal” (can I call it AME?) “the impartial view”. I disagree. It’s an abstract view, but that is not the same thing. It might be called “uniform”, perhaps, but not impartial. (Actually, I don’t see how any assignment of moral weights can be considered impartial.) Here’s a simple model. There are two communities, X and Y, which each have n people. Each member has a quantity Q of moral consideration to give. A is a member of X, not Y. X and Y are pretty closed systems, so there isn’t much investment between them. So the average member of X invests Q/n moral consideration. Happily, because it’s closed(ish) they get Q/n back, so the moral consideration budget balances. But A – who was raised in X but doesn’t think this matters – invests Q/2n in X, and Q/2n in Y. From the perspectives of other Xers, A is short-changing them. From the perspective of Yers, A’s investment is a windfall gain. I don’t see why either would regard A’s behaviour as being more “impartial”, as though it were somehow more obvious or neutral than playing by the rules of moral consideration that everyone else adheres to.

      [Actually, I think the perception that A is short-changing other Xers in this way is one of the factors we see behind the growth of populist political resentment towards cosmopolitan elites in Europe. Might be a fairly long bow that blames Toby Ord for Nigel Farage… but possibly a good starter for ten after a couple of pints at the Royal Oak.]

      1. Sorry – should be invests Q/2n in each member of X, Y. And for simplicity I’m assuming equal per capita moral consideration within the community, even though some sort of n-shaped distribution centred on A would be more realistic. Whatever.
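Dave’s toy “moral consideration budget” model above (with his correction folded in) is easy to check numerically. A minimal sketch, where the values of Q and n are arbitrary placeholders rather than anything drawn from the discussion:

```python
# Numerical sketch of the toy "moral consideration budget" model from the
# comment above. Q and n are the commenter's placeholders; the concrete
# values below are arbitrary illustrations.

Q = 1.0   # moral consideration each person has to give
n = 100   # members in each of the two communities, X and Y

# A typical member of X spreads Q over the n members of X:
typical_share = Q / n          # what each fellow Xer receives from them

# A, who ignores community boundaries, spreads Q over all 2n people:
a_share = Q / (2 * n)          # what each person (in X or Y) receives from A

# From the other Xers' perspective, A "short-changes" them by half:
shortfall_per_xer = typical_share - a_share

# From the Yers' perspective, A's contribution is a pure windfall:
windfall_per_yer = a_share

# The shortfall to each Xer exactly equals the windfall to each Yer,
# which is why neither community has an obvious claim to call A's
# allocation the "impartial" one.
print(typical_share, a_share, shortfall_per_xer)
```

The point the numbers make is just Dave’s: A’s uniform allocation redistributes consideration rather than occupying some privileged neutral standpoint.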

  4. Hi Dave, thanks for this. There are important objections to the principle that all people are equally morally important (note the link in the second paragraph of the article), but again I am skeptical of the “empirical adequacy” sort of objection, insofar as it amounts to claiming the principle fails to describe actual behavior (isn’t there an is-ought fallacy lurking here?). I fear that the example involving giving/investing moral consideration, though interesting, doesn’t really make contact with the principle I have in mind. It’s not about what consideration anyone in fact *assigns* to anyone, but about the moral importance or value that people *have* independently. At any rate, since this particular principle isn’t very central to my article (the latest comments all stem from my “tangential” remark!), would you perhaps want to take further discussion of it to email?

    1. I don’t think I have an is-ought problem on this one – I reject the usefulness of what I called AME as a norm (the ought side), and I think it’s clearly a million miles from being a useful description of how people behave, too (on the is side). But while accepting the usual Humean point on the divide between the two, I do think it’s useful if there’s at least some meaningful precedent for norms. If a norm has no precedent in any observed setting/society, then I’m pretty sceptical about its ability to be a useful norm. AME has no meaningful precedent, and it seems to me more a quick and dirty algebraic simplification than a serious suggestion for guiding human action. But as you say, this stuff wasn’t central to your post, so I’m happy to take it to email, too.

      1. Thanks Dave. While you are *distinguishing* between “is” and “ought” it still appears your second criticism (“not a good description”) is an argument of the following form: a particular “is” statement is true, therefore a particular “ought” statement (AME) is false. That’s the is-ought fallacy. About your first criticism (“not useful, not action-guiding”), I’d claim that the usefulness of a principle is importantly different from its truth.

        1. I accept that the fact that something is observed is no guide to whether it is morally right. But determining moral *truth* is above my pay grade (it’s something for undergraduates, if moral certainty is any guide to moral truth).

          I think in terms of usefulness, for which precedent matters, since “ought” implies “can”, and claims about whether or not people “can” live according to some rule might be more readily believed when you have some observations to show at least a couple of viable social precedents among the 100 billion people who have lived over the past 50k years (say). Communism failed because “from each according to his ability to each according to his need” turned out not to be viable (obvious incentive problems). AME will fail because it is a form of free-riding.

          I’m sure both “from each according to his ability to each according to his need” and AME, Giving What We Can-style, may be morally true (by cohering with other beliefs you might have) subject to holding a bunch of other beliefs, too. Like I say, I’m happy to accept that the truth of these beliefs is not determined by facts about the world. But usefulness is.

  5. Thanks, Theron. Great post and I’m persuaded. But there is a Sidgwickian question about publicity here. Most of those who donate to charities do so because of some non-reason-giving causal connection they have with the charity in question. There’s a danger that persuading these people they have no reason to give to these charities might lead them to give to no charity at all. As Williams said of Singer: ‘As moral persuasion, this kind of tactic is likely to be counterproductive and to lead to a defensive and resentful contraction of concern.’ (This is in a fn. on p. 212 of *Ethics and the Limits* and he cites research by Fishkin in support. Perhaps there’s more and better evidence available now.)

  6. Hi Roger, thanks for the kind remarks. This is an important question, and I agree that there is some risk here. My sense is that there are people in the Effective Altruism community who might be particularly aware of the relevant empirical considerations, and so I hope to explore your publicity question with them. A few quick points, from my armchair. There *might* be a disanalogy between the Singer message “you are morally obligated to give most of your money to effective charities” and my message “your personal connection doesn’t give you a special reason to give to (e.g.) cancer charities.” In the latter case, presumably there’s already the strong motivation to give on personal grounds, and it seems less likely that hearing my message would dislodge that (rather than stop giving to cancer charities, people are more likely to tell me to buzz off and continue with business as usual). Also, one thought to consider is that even if my message causes 90% of readers to switch to not giving at all, and 10% to switch from cancer charities to malaria ones, that might be a decent tradeoff insofar as malaria charities are hundreds of times more cost-effective. (Just to publicly clarify, I do think people have *some* reason to give to cancer charities, but I think there’s *more* reason to give to malaria ones, and that special personal connections don’t *amplify* the reasons for giving to particular charitable causes such as cancer.)
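The back-of-the-envelope tradeoff in the comment above can be made explicit. A minimal sketch, assuming a purely illustrative 100× cost-effectiveness multiplier (the article only says “hundreds of times”, so 100× is a conservative stand-in); the donor fractions are the hypothetical ones from the comment:

```python
# Hypothetical model of the tradeoff described above: what happens to total
# impact if a message causes some donors to stop giving and others to switch
# to a far more cost-effective charity? All numbers are illustrative.

def total_impact(frac_stop, frac_switch, multiplier=100):
    """Impact of a donor pool after hearing the message, measured relative
    to one unit of giving to the less effective (cancer) charity.

    frac_stop:   fraction who stop giving entirely
    frac_switch: fraction who switch to the more effective (malaria) charity
    The remainder keep giving to the cancer charity as before.
    """
    frac_keep = 1 - frac_stop - frac_switch
    return frac_keep * 1 + frac_switch * multiplier

baseline = total_impact(0, 0)      # everyone keeps giving to cancer charities
scenario = total_impact(0.9, 0.1)  # 90% stop giving, 10% switch to malaria

# Even in this pessimistic scenario, roughly 10x the baseline impact remains,
# because the 10% who switch each do ~100x as much good per dollar.
print(baseline, scenario)
```

The arithmetic is the whole point: with a large enough effectiveness gap, even a message that demotivates most donors can leave total impact higher, though of course the real fractions and multiplier are empirical questions.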

  7. My potentially ultra-faulty intuition is that the most dangerous thing re: the ethics of charity is that if people see ANY discussion/debate about giving, ESPECIALLY efficient charitable giving, it reinforces the likely pre-existing prejudice of “don’t give because it doesn’t help and/or we don’t know if the money is being used wisely/with propriety.” Regardless of whether a careful reading of the discussion would reveal this to be false, some people might just interpret such discussions as “noise” and chalk it up to that sentiment.

    I think this because that sentiment is what most people WANT to believe, as so:

    1 – Most people do not give, or at least do not give “enough” to satisfy some ruthlessly utilitarian evaluator of giving.

    2 – These same most people do not want to be morally deficient.

    3 – People in general prefer to alter their beliefs/biases to fit their behavior rather than vice versa.

    4 – The said people are likely to look for any evidence, no matter how frail, to justify their insufficient giving.

    Thus, anything from the philosophical community other than “the world’s leading ethicists all agree that you should give this way” is likely to be taken, at least by some (most?) people, as armor to protect them from the moral demand to give. Or at least that’s my snap intuition.

  8. Should pain and suffering be taken into account? Is it better to help more people that suffer less, or less people that suffer more?

  9. Caley, thanks for this interesting hunch about how people react to hearing disagreement about the ethics and effectiveness of giving. With you, I’m really not sure what proportion of people do react this way, but perhaps there are better and worse ways of presenting discussions about giving so as to avoid these bad effects. One thought is that if the “giving experts” (e.g., GiveWell, Giving What We Can, etc.) all agree that some class of charities is quite good, then disagreements about relative goodness within that class might not reinforce the futility prejudice you mentioned. Fortunately, we might not have to rely on our hunches for too much longer, as the good folks at Academics Stand Against Poverty are doing serious empirical work on these issues.

    Jj, thanks for your question. It does seem that the amount of suffering relieved per person is relevant. But since the number of people we help also seems morally relevant, the answer to your question “Is it better to help more people that suffer less, or less people that suffer more?” is going to depend on both the amount of suffering per person, and the number of people in question.

    1. @Dr. Pummer – I think that’s well-stated. I think my point was primarily that I don’t think we need to worry about people who are INCLINED to give withdrawing their giving due to critiques about how they do it. I suspect that you are right and that such people are more likely to just keep giving ‘inefficiently,’ and that in any case the potential benefit of convincing even a small number of people to give efficiently outweighs this risk. My primary worry, when it comes to charity, is getting to the people who are NOT inclined to give, and the above post was just the concern relevant to those people that seemed most similar to Prof. Crisp’s point.

  10. Thanks for a wonderful post, Theron. You certainly convinced me. However, I think your point is only valid if the person who is about to donate money wants her donation to be as effective as possible, to alleviate as much suffering as possible (in the name of her loved one, or as a way of honoring someone). But it seems to me that that is not what people who donate to causes related to their loved one’s suffering have in mind. It can be argued that effectiveness should be their priority, of course, but as a matter of fact it may not be. Many times people wish they could have done more for their loved one, that they could have saved him/her. Given that “failure”, helping someone else who suffers from the same ailment is something like redeeming oneself, righting a wrong. Intuitively, it seems psychologically important that the cause be a similar one. The motivation that a parent who has lost a son can have to help a boy who suffers from the same malady will likely be stronger than the motivation he or she might have to help, say, diamond miners somewhere else in the world. It’s something like transferring the partiality you had for your loved one to a new person who is similar in relevant ways. It is a way of filling the void left by the loved one, a way of feeling accompanied by the person who died through remembering him/her. If this is right, then donating to a cause related to a loved one’s suffering may not be primarily an act of beneficence, but a therapeutic act.

    Another element to think about is the desires/beliefs of the person who died. Suppose a family member, F, dies from disease x. F was passionately involved in an organization that is joining efforts and resources in order to find a cure for x, but he never showed any interest in contributing to finding a cure for y. If I want to donate money to honor him, it would be strange to donate to the organization that focuses on y (even if it is a more effective organization). It seems reasonable to think that if my primary objective is to honor F, then I should give to the organization he favored.

    Final point: many times, people who have closely accompanied the suffering of someone become experts in that particular disease or situation, and can do much to help a cause other than donating money. I know some people who have become involved in different NGOs, and have come up with ideas as to how to help people based on their experience with their loved one. It is valuable experience, and putting it to good use may make them feel like, even though their loved one died, his/her suffering was not in vain. And making such a strong commitment to a cause may lead people to donate to that particular cause in order to maintain some kind of consistency.

    1. Hi Carissa, many thanks for these thoughtful and helpful reactions to my post; your comments have moved me to say more (maybe to say too much…!).

      First, while many people start out donating as a largely therapeutic act, occasionally this practice can transition into something that’s guided by considerations of doing good in more general terms. It’s an empirical question just how common these transitions are, but I do know of a few people working on answering it. For example, I’ve come across this sort of survey of effective altruists: A lot of them start out as (above average) “altruistic” before they incorporate the “effective” component into their giving. Some of them become (above average) “altruistic” because of various personal circumstances, including things that have happened to those near and dear to them. I hope arguments like mine will be practically relevant to at least some people who don’t already have in mind the goal of doing as much good with their donations as possible; it would be wonderful if such arguments not only helped steer decisions, but, perhaps with some useful psychological tricks, enabled people to derive the personal therapeutic benefits you cited from donating to the most effective charities.

      Let me take this opportunity to elaborate on something I should have made clearer in the post itself: for all I said in my post, there are agent-centered permissions not to maximize goodness (within deontological constraints). For instance, perhaps you’re required to use 10% of your lifetime earnings toward promoting good in the world, but you’re permitted to spend the remaining 90% in whatever way you want (again, within deontological constraints). We can call these categories your “beneficence spending” and your “egoistic spending,” respectively. For all I said, one could permissibly use their egoistic spending on charities for personal, therapeutic reasons. My negative point that partiality to people doesn’t imply partiality to particular causes is, I think, compatible with this sort of position.

      On your second point, good point. If a family member F dies of disease X, *and has a strong desire that X be eliminated*, then there is a reasonable case that can be made for some special favoring of charities that fight X. Since my blog post simply assumed that one’s special partial concern for F is justified, let’s work from there. I had assumed that this justified partial concern would permit me to place greater weight on the interests of F, over the like interests of strangers. It’s not crazy to think it also gives me some special reason to perform acts that *honor* F. One claim I made earlier is that the mere fact that F suffered from X doesn’t mean that I’d be honoring F if I gave to a charity fighting X. However, you’re suggesting that it might instead be *the fact that F desired that X be eliminated* that would imply I’d be honoring F if I gave to a charity fighting X. I have to agree, that sounds like a significantly more plausible suggestion. I think that this is indeed one way that partiality to people might plausibly entail partiality to particular causes. However, two things. First, I think my main targets in the post are views that make the connection between partiality to people and causes more immediate – that is, without the intermediate step appealing to the fact that the people to whom one is partial desire that you give to some cause.
      Second, assuming it’s true that F’s desires give me some special reason to give to charities fighting X, I am skeptical they will give me *enough* reason to make it permissible for me to give to charities fighting X rather than charities that are orders of magnitude more effective in terms of promoting good generally – at least, if we’re discussing how to allocate one’s donation money within the realm of what I earlier called “beneficence spending.” While it is a defensible, not crazy, view that special partiality to one’s partner would permit one to *save their partner’s life* over the lives of five strangers, I am doubtful that it would permit one to *honor their partner’s wish that those dear to them contribute to charities fighting disease X* at the expense of five strangers dying. (Also note that the difference of five lives is much less than the difference in effectiveness between average charities and the most effective ones.)

      I found your third point especially thought-provoking. Often people become experts on particular diseases because of what these diseases do to those near and dear to them. In virtue of being an expert on some disease X, one might acquire some degree of special power and responsibility to do good by fighting X. This special responsibility, especially when taken together with the agent-centered therapeutic value of fighting X, may make it permissible for one to focus one’s time on X rather than other diseases or causes. More generally, one may be permitted to become *committed to fighting X*. Then a further question, which one of your comments invites, is how much of one’s money one is permitted to donate to fighting X in order to maintain this permissible commitment (your point about consistency). If donating to fighting X were, from the point of view of doing good generally, optimally cost-effective, then it seems permissible for one to donate all of one’s money allotted to charity to the best X-fighting charity. But, to make things more interesting, let’s assume that this is not true of fighting X; it’s not best from the standpoint of general good. Given all this, how much am I permitted to give to fighting X? That is a hard question, but one reasonable thing to offer is this: it’s *impermissible* to give more money towards fighting X than would be necessary to maintain one’s permissible commitment to fighting X, when it’s possible to use one’s remaining money on more effective causes or charities. (And maybe the amount that’s necessary for commitment maintenance is not very large, again relative to one’s total money allotted to charity.)

  11. Hi Theron, thank you for your response. I find all your replies to be quite satisfying. I do think that this argument can encourage people to give more effectively. It can give people a meaningful criterion with which to replace the idea of donating to a related cause. Maybe sometimes people donate to a related cause simply because it’s very hard to choose among causes, and that is a clear way to make up one’s mind.

    I completely agree with the 10-90 proposal (although we can discuss which is the right proportion, of course). Donating to a cause that is not the most effective for sentimental reasons can be thought of as something like a personal expense.

    I am less convinced about what one ought to do if one wants to honor F. We can imagine even more problematic situations. What if the money we are spending is money that we inherited from F? What if F put a request in his will that we donate m amount of money to fighting disease X? I feel ambivalent about such requests, promises made to the dead, and using inherited money. Effectiveness carries a lot of weight, and you may be right about it being more important than F’s preferences, but it is still not entirely clear to me.

    Thanks for the good discussion!
