How Important Is Population Ethics?

We face very important decisions about climate change policy, healthcare prioritization, energy consumption, and global catastrophic risks.  To what extent can the field of population ethics contribute to real-world decisions on issues like these?  This is one of the central questions being pursued by researchers in the Population Ethics: Theory and Practice project at the Future of Humanity Institute at Oxford University.  The project, overseen by Dr Hilary Greaves, officially began earlier this month, and will continue (at least in its present form) for three years.  The research team aims to make progress in theoretical population ethics, and to assess its relevance to pressing practical issues that affect future generations.

  • What is theoretical population ethics?

It’s a rigorous investigation into the plausibility of competing theories about the value or moral desirability of different populations of people, where these populations may vary in terms of:

personal identity (the populations compared may contain different people)
number (the populations may be of different sizes), and
quality of life (the people in these populations may be at different levels of quality of life, or well-being).

Limited resources make for tough decisions.  Should we spend our $X on deworming pills, or on combating climate change?  One thing we want to know, in approaching this sort of question, is how good the outcome would be if we intervened in one way rather than another.  One thing that’s relevant to the goodness of an outcome is how people in that outcome are faring – are they happy or miserable?  But which people are we to count?  One of the fundamental problems in theoretical population ethics is whether we should take into account “only” the billions of people who exist currently, or whether we should also consider the astronomically greater number of merely possible future persons.

A merely possible person is a person whose existence in the future is dependent on what we do now.  Briefly consider the story of one of them:  Linda.  If Lucia’s ovum and Leo’s sperm were to join together, a zygote would form, and at some point down the line a person, Linda, would exist and would live a happy life.  As a matter of fact, Lucia and Leo never meet, and thus Linda never exists.  Would Linda’s existence have made the world better?  This is intensely controversial within population ethics.

Defenders of person-affecting views (including Melinda Roberts and Jan Narveson) claim that we can make the world better by making people happy, but not by making happy people.  Others (including Derek Parfit and John Broome) have argued that these views imply insurmountable paradoxes.  Whether or not they do, it does seem plausible that we should at least sometimes give some weight to the well-being of merely possible persons (even if it’s less than the weight we give to the well-being of actual persons).  Moreover, many of the philosophers who believe that adding happy Linda to the world wouldn’t itself make the world a better place would say that adding a life that is utterly miserable and not worth living would make it a worse place – their view thus espouses an asymmetry between creating lives worth living and creating lives not worth living.

Another fundamental problem in theoretical population ethics is how to balance quality of life against population size.  Many economists assume an average view, which ranks outcomes according to their average quality of life per person.  This view faces a devastating objection.  Suppose we find out that most people living in the past were much more miserable than we thought, with lives very much not worth living, making the world’s average quality of life per person very negative.  It would then follow from the average view that we’d be making the world a better place if we added many people with lives not worth living, as long as they were just above the (very negative) average.  That is absurd.  Many philosophers have instead adopted a total view, which ranks outcomes according to the total quality of life summed across all persons.  This view avoids the absurd implication noted above.  Unfortunately, it too implies claims that are hard to believe.  Most infamously, it implies the Repugnant Conclusion, which states that,

for any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.  (Parfit 1984, p. 388)

Perhaps what’s especially surprising about theoretical population ethics is just how difficult it is to avoid the Repugnant Conclusion.  Gustaf Arrhenius, for example, has a mathematical proof showing that denying the Repugnant Conclusion forces one to give up at least one of a number of other claims, each of which appears very hard to deny.
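
To make the contrast between the two views concrete, here is a minimal sketch in Python.  The numbers are toy figures of my own (only Parfit’s ten billion comes from the text); populations are summarized as (size, quality-of-life) pairs.

```python
# A population is summarized as (number_of_people, quality_of_life_per_person).
# All numbers below are illustrative assumptions, not figures from the post.

def average_value(*groups):
    """Average view: average quality of life per person across all groups."""
    people = sum(size for size, _ in groups)
    return sum(size * level for size, level in groups) / people

def total_value(*groups):
    """Total view: quality of life summed across all persons."""
    return sum(size * level for size, level in groups)

# The objection to the average view: given a very negative world average,
# adding lives not worth living (but above that average) raises the average.
miserable_past = (1_000_000, -100)
added_lives = (1_000_000, -50)        # still very much not worth living
assert average_value(miserable_past, added_lives) > average_value(miserable_past)

# The total view avoids that, but implies the Repugnant Conclusion: enough
# lives barely worth living out-total ten billion lives of very high quality.
ten_billion_excellent = (10_000_000_000, 100)
vast_barely_worth_living = (10**15, 0.01)
assert total_value(vast_barely_worth_living) > total_value(ten_billion_excellent)
```

The first assertion is the absurdity noted above; the second is the Repugnant Conclusion in miniature.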

  • The practical importance of population ethics

One of the aims of the Population Ethics: Theory and Practice project is to figure out how best to solve the theoretical puzzles generated by Parfit, Arrhenius, and others.  Another is to figure out whether and to what extent various answers to these theoretical questions are important for real-world decisions that affect future generations, in addition to the present one.  Making these decisions requires both empirical and evaluative inputs:  on the empirical side, we need to know what will (likely) happen to people, e.g., if we do not conserve natural resources; and on the evaluative side we need to know how to assess different possible distributions of well-being across (actual and merely possible) persons.  Discovering the evaluative truth about theoretical population ethics would provide only the second piece of this puzzle.  Just how important is it to uncover this piece, practically speaking?  Writing in the context of climate change, John Broome – who has recently published a book called Climate Matters – suggests that it’s very important indeed; William MacAskill puts population ethics at the top of his list of high-impact philosophical research areas.  (One might class this type of research under the more general category of “prioritization research”, which is itself a high-priority cause.)

To illustrate how theoretical population ethics could matter practically, it may help to focus on some examples involving the choice between spending $X on a more “short-term” health intervention versus spending this same amount on averting a global catastrophic risk.  To simplify matters, I’ll assume that all lives are at the same high level of quality (unless specified otherwise), and I’ll assume risk-neutrality (e.g., a 100% chance of saving 1 life is as good as a 1% chance of saving 100 lives).

 Case 1 – Malaria or Nukes:  we can with virtual certainty save 100 million people from dying of malaria, or instead avert a 10% chance of a nuclear war that would kill everyone alive now, thus also preventing the existence of all future generations.

 In this case, there needn’t be any practically relevant disagreement between person-affecting, total, and average views.  It’s clear that the total view implies we should avert the risk of war: just consider the astronomical amount of well-being future generations would be deprived of if the war actually occurred.  Person-affecting views agree with this recommendation, since the chance of nuclear war is worse in terms of presently existing people (100 million certain deaths versus an expected 700 million, i.e., 10% of 7 billion).  Finally, average views will recognize that a 10% chance of 7 billion dramatically shortened – and thus very low quality – lives drags down the average quality of life per person more than 100 million such lives do.  These three kinds of views will have different implications about just how bad the 100 million deaths are compared with the 10% chance of nuclear annihilation, but they will all agree that the latter is worse than the former – they’ll agree about what to do here.  So in this sort of case, population ethics might matter less, from a practical standpoint.  Now consider:

 Case 2 – Malaria or Bioterrorism:  we can with virtual certainty save 100 million people from dying of malaria, or instead avert a 1% chance of a bioterrorist attack which would kill everyone now, and thus prevent the existence of all future generations.

 Here we will find a disagreement, with person-affecting and average views recommending that we prevent the malaria deaths, and the total view recommending that we avert the 1% chance of extinction.  (I’ll leave it to the reader to work out how these views offer these conflicting recommendations.)  So Case 2 is a case in which input from population ethics is very practically important.  Consider one more example, involving quality of life (this sort of example was alluded to earlier).
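
For readers who would like the arithmetic spelled out, here is a minimal sketch under the post’s risk-neutrality assumption.  The 7 billion figure is from the text; the 100-billion future-population figure is an assumption borrowed from Case 3 (the total view’s verdict only strengthens as that number grows).

```python
# Cases 1 and 2 as risk-neutral expected deaths averted. The future
# population size is an assumed figure; only its vastness matters.

PRESENT = 7_000_000_000          # presently existing people (from the post)
FUTURE = 100_000_000_000         # merely possible future people (assumed)

def expected_deaths_averted(prob, people_at_risk):
    """Risk-neutral value of averting a `prob` chance that these people die."""
    return prob * people_at_risk

malaria = expected_deaths_averted(1.0, 100_000_000)    # certain: 100 million

# Case 1 (nuclear war, 10% chance): the views agree on averting the risk.
assert expected_deaths_averted(0.10, PRESENT) > malaria           # person-affecting
assert expected_deaths_averted(0.10, PRESENT + FUTURE) > malaria  # total view

# Case 2 (bioterrorism, 1% chance): the views come apart.
assert expected_deaths_averted(0.01, PRESENT) < malaria           # person-affecting: prevent malaria
assert expected_deaths_averted(0.01, PRESENT + FUTURE) > malaria  # total view: avert the risk
```

An expected 70 million deaths (1% of 7 billion) falls short of 100 million certain deaths, which is why the person-affecting verdict flips between the two cases while the total view’s does not.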

 Case 3 – Worms or Climate Change:  we can with virtual certainty increase the quality of life of 100 million presently existing people by 10% (by giving them deworming pills, ridding them of their schistosomiasis), or instead, also with virtual certainty, make it the case that the quality of life enjoyed by future generations will be 10% higher than it would otherwise be, where the population size of these future generations is 100 billion (by preventing an unhealthy decrease in climate quality).

 It’s plausible that deworming has further downstream effects – since it improves education in addition to health – but for simplicity let’s ignore them.  Our very decision to adopt a policy of mitigating climate change would itself affect who exists (e.g., whether Lucia and Leo ever meet, or whether they meet at the exact time necessary to bring together the particular ovum-sperm combination that would eventually result in Linda).  It’s extremely likely that the future people affected by our policy would not have existed if we hadn’t adopted this policy; the future persons who would otherwise have existed are instead different persons (see discussions of the non-identity problem).  Thus, those who would enjoy the healthier climate are merely possible persons, and would be ignored by strict person-affecting views.  In Case 3, therefore, such person-affecting views would recommend deworming, whereas both total and average views would recommend mitigating climate change.  (Again I will omit the details.)
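
The omitted details for Case 3 can be sketched the same way.  The baseline quality of 100 units is an arbitrary assumption of mine; only the ratios matter.

```python
# Case 3: a 10% quality-of-life gain for 100 million present people vs.
# the same gain for 100 billion merely possible future people.
# The baseline of 100 quality units is an arbitrary, assumed scale.

BASELINE = 100
GAIN_PER_PERSON = 0.10 * BASELINE

deworming_benefit = 100_000_000 * GAIN_PER_PERSON        # present people
climate_benefit = 100_000_000_000 * GAIN_PER_PERSON      # future people

# Strict person-affecting views count merely possible persons for nothing:
climate_benefit_person_affecting = 0
assert deworming_benefit > climate_benefit_person_affecting   # verdict: deworm

# Total (and, with everyone at the same baseline, average) views count them:
assert climate_benefit > deworming_benefit    # verdict: mitigate climate change
```

Since the future beneficiaries outnumber the present ones a thousand to one, any view that gives merely possible persons non-trivial weight will favor the climate intervention here.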

Sorting out the complex connections between theory and practice is one of the aims of the Population Ethics project, and we accordingly hope to soon have a clearer sense of just how important it is to make progress on the theoretical, evaluative side, and in what particular practical contexts.  (I would warmly welcome any comments that spell out bridges between theory and real-world practice, concerning population size, quality, etc.)

If it turns out – as seems to me likely – that solving the theoretical problems in population ethics is both very practically important and extremely difficult, then we will have to work out how to proceed in light of moral uncertainty.  This topic also falls within the scope of the project.

13 comments on this post

  1. Thank you for the interesting post. Out of curiosity, what are the paradoxes that the view you call “person-affecting” supposedly faces? And isn’t the distinction between “making people happy” and “making happy people” merely verbal once it is acknowledged by both parties that the notion of happiness that matters for ethics is not hedonist in the sense that it is not reducible to the occurrence of pleasure states, but rather depends on whether people succeed, through their voluntary actions, at doing good things?

    1. Thanks Andrews. Sure, I’ll quickly spell out one of the purported paradoxes. Many defenders of the person-affecting view have the intuition that adding people to the world is “neutral” in the sense that it doesn’t make the world any better or any worse (even if these added people would have lives well worth living). John Broome calls this the “intuition of neutrality.” A natural interpretation is that it’s an intuition of *equal goodness*. That is, if you take world A (containing just Juan at well-being level 7), and then “add” Sherry to arrive at world B (containing Juan at 7 and Sherry at 5), these two worlds are equally valuable. That is what this simple version of the person-affecting view would say. A paradox for this view pops up when you consider world C (containing Juan at 7 and Sherry at 4). Now, it seems very clear that B is better than C, since C is exactly the same as B except one person (Sherry) is worse off. The simple version of the person-affecting view, with its natural interpretation of the intuition of neutrality, implies that A and B are equally good, and that A and C are equally good. These claims together imply that B and C are equally good. But we just said it’s very clear that this is not so. We could escape from this particular paradox by offering a different interpretation of the intuition of neutrality, e.g., we could say that A is neither better nor worse than B but *not say* that they are equally good. Perhaps instead they are *incommensurable* in value. Broome argues that versions of the person-affecting view that take this escape route face further paradoxes. He argues this in *Weighing Lives* (chapters 10-12), as well as in a short paper called “Should we value population?” in *Journal of Political Philosophy* (2005).

      I don’t think that rejecting a hedonistic conception of well-being or happiness makes the distinction between “making happy people” and “making people happy” merely verbal – though I think I see what you’re getting at. If it’s true that my well-being is tied to my doing and succeeding, then you can’t simply “make” me well off without my active cooperation. But you can make it the case that I very likely will succeed, e.g., if you provide me with the materials I need to do so and you have every expectation that if only I had those materials I would succeed. Person-affecting theorists could recast their distinction in the non-hedonistic language “making well off people” versus “making people well off” (this language is neutral about what does make someone well off or have a life worth living, etc.).

  2. Hello Theron, thank you for your detailed reply and for the references, which I’ve read. Now I don’t think that “equal goodness” really captures the intuition of the friends of “person-affecting” views. More generally, those views seem to imply, and are probably aimed at explaining, the Asymmetry you’ve mentioned in your blog post. Since the Asymmetry in turn captures deep moral intuitions, it seems that the “person-affecting” views have the edge over any dialectical opponent that cannot explain the Asymmetry, or the appearance of an asymmetry, don’t you think?

    Concerning “making Xs happy” and “making happy Xs”, or any version of the distinction substituting some other evaluative term for “happy”, I am still unsure. The distinction seems to have been introduced as a pun aimed at those who reject the “person-affecting views”, not as a technical, explanatorily important distinction. For all parties in the debate can agree that the well-being (or any evaluative surrogate) of people has moral significance, and yet disagree over whether the moral significance of the well-being of a merely possible individual should be treated as equal to that of a not-merely-possible individual (I take it that intuitions about the Asymmetry seem to support a negative answer). If so, “making Xs happy” versus “making happy Xs” just looks like a convoluted way of distinguishing between different types of moral significance under some moral choice.

    1. Thanks Andrews, for thinking about my reply and checking out those references. A few quick responses.

      First, I would agree that some deep intuitions together constitute at least an appearance of an asymmetry. If they are evidence of *the Asymmetry*, and if some person-affecting view turns out to be the best way of capturing the Asymmetry, then this person-affecting view enjoys a theoretical advantage that rival views lack.

      But it might be that person-affecting views suffer theoretical disadvantages that rival views lack. The paradox I cited above is such a disadvantage, for one particular type of person-affecting view. However, you’re right that friends of person-affecting views can reject the claim that worlds A and B are equally good (where A is just Juan at 7, and B is Juan at 7 and Sherry at 5). Nonetheless, presumably they have to say *something* about A and B. They could say that A and B are incommensurable, but then Broome argues in the materials I cited that that interpretation of the intuition of neutrality is also doomed. The task is to avoid the problems Broome spells out while also capturing this intuition satisfactorily; that may be hard to do. Then we’d have to balance the disadvantages of person-affecting views against their advantages (one possible advantage being, capturing the Asymmetry).

      Just to throw in another jab at the intuition of neutrality: In addition to worlds A and B, now consider world D (Juan at 7 and Jim at 1). Suppose we’ve got A, and are now forced to move from A to B or from A to D. Doesn’t the intuition of neutrality imply that moving from A to B is (in some sense to be made more precise) “neutral” and that moving from A to D is also (in that same sense) “neutral”? And wouldn’t this in turn imply that there’s nothing to be said in favor of B over D? But doesn’t it seem that we should pick B?! (I’m not saying this is decisive, just trying to help illustrate how finding a plausible formulation of the person-affecting view may not be straightforward, and may be pretty hard.)

      For classic puzzles about person-affecting views and the Asymmetry, I’d recommend part four of Derek Parfit’s *Reasons and Persons* (especially chapters 16 and 18, on the Non-Identity Problem and the Absurd Conclusion, respectively). You might also check out chapter 4 of Nick Beckstead’s PhD dissertation, which is available online; I think he offers a very clear and efficient discussion of these issues.

      (A more minor point: It’s true that person-affecting views can be appealed to in an attempt to explain the Asymmetry, but I wouldn’t say that all person-affecting views imply the Asymmetry. One could view creating miserable and happy lives symmetrically, while attaching no or less moral weight to the well-being of merely possible persons.)

      On the “making X happy” versus “making happy X” issue, I personally think this phrase offers a snappy way of hinting at an important underlying intuition. You’re right that one could hold the view that the moral significance of the well-being of a merely possible person is *less* than that of an actual person, and claim that we have reason both to make people happy *and* to make happy people. But then defenders of this view would say that the reasons to do the latter are weaker (though if we could we should do both).

  3. Thanks for your considerate reply, Theron! I will definitely read your extra bibliographical entries. Meanwhile here is just a quick reply/question/objection.

    If I’ve understood correctly, if I am a friend of person-affecting views, it is not enough for me to stick to the claim that individuals which at some point exist at the actual world have a moral significance such that it necessarily trumps the moral significance of individuals which at no point exist at the actual world (this is just one example; I take this view to be Roberts’ version of a person-affecting view). In addition I have to characterize my view by laying down principles for ranking worlds as a function of the distribution of some morally relevant property at these worlds?

    I am not sure whether this is a fair request. A fairer request, it seems, is for principles that rank worlds not for their own sake, but for the purpose of assessing actions whose consequences those worlds model — irrespective of the complete distribution of cardinalities of the morally relevant evaluative property. But then, it seems to me, the view I’ve mentioned as an example can fill the bill provided it is accompanied by some plausible account of how well-being (or any evaluative placeholder) connects with (moral) rightness.

    One such account might go roughly like this: for any individual i and some evaluative scale s such that s(i) is the value of i’s life, it’s morally wrong for any agent to make it the case that s(i)=t. (“@” is supposed to restrict the evaluation of worlds to the moral significance of actually existing individuals so as to be consistent with the person-affecting intuition).

    So considering your puzzles specifically, what a proponent of the pair of views I am assuming here would like to know, it seems to me, is how you fill in the details about the moves @A, @B and @C with respect to the threshold condition. For instance, if every world satisfies the threshold condition, viz. each move is such that s(i)>=t at @A…@B…@C, we have no ranking. But for any two worlds that differ with respect to the “threshold” condition, the world that satisfies the condition is preferable to the other. So there is no complete ranking with respect to the complete cardinalities of values that make up the moral significance of individuals at some worlds, but there is a partial ranking that is consistent with our moral intuitions and that is not hostage to a claim of incommensurability (there is no incommensurability if there is no need to measure everything against everything) or to a claim of equal goodness (though equality of goodness of worlds above the threshold trivially follows).

    1. Hi Andrews, thanks for these additional thoughts. I suppose one *could* require that person-affecting views offer complete rankings, assigning a “moral value” number to each possible world. But I’d agree with you that that might not be a fair request. However, I’m not sure that jabs at person-affecting views like the one I made previously – involving worlds A, B, and D – should be interpreted as imposing this sort of strict requirement. Whether or not we can assign numbers to all possible worlds, it does seem plausible that we should choose B over D. A worry for some person-affecting views is that they don’t capture this intuition. (If you don’t share this particular intuition about B and D, I can turn up the heat: imagine starting with a world containing no one; now, you can either add just one person with a life barely worth living, or 10 billion other people, with extremely high quality lives. It seems to me our population ethics should say the second option is better, or the one we should choose.)

      1. Thank you for turning up the heat! I agree with you that we should prefer B to D, or, for that matter, any world with n lives all worth living to any world with m lives not all worth living. The point of introducing a threshold condition was to be faithful to this very intuition without running against the issues you’ve raised in your post and subsequent replies.

        An implication of course is that this condition will not distinguish between, say, two worlds that differ as much as is conceivable except for the fact that no individual at any of them is under some threshold of well-being. For instance, actualizing a world with 1 life with a well-being j equal or superior to the threshold will not be counted as worse (or better) than actualizing a world with 10^10 lives with individual well-beings k>=j, and thus actualizing any of them will be permissible.

        But I think this is an acceptable result. After all, worlds are just props for modelling the consequences of our actions at the actual world. So if it is true at the actual world that an action is right only if it does not, where possible, bring people under some threshold of well-being, we’d expect this claim to be mirrored in our transworld comparisons when discussing population ethics. In this case, it’s hardly surprising that we could ground pairwise comparisons between worlds on this claim.

        Of course, this claim about the existence of a threshold for assessing actions whose consequences affect the well-being of individuals might be false. But in that case we’d expect there to be arguments independent of population ethics to disprove it.

    2. Oops, an ugly typo slipped through; I wanted to write “it’s morally wrong for any agent to make it the case that s(i)<t”.

      1. Thanks Andrews. No worries, I think I knew what you wanted to write there! Right, you can set the threshold high enough to avoid the problems I had mentioned. But the second paragraph of the 2:09am entry raises a significant problem, and I didn’t follow why you think the result mentioned is acceptable. Let’s say the actual world is like this: everyone is going to die at 11:59pm tonight, and at 12:00am tomorrow morning one of two possible populations will be actualized: (a) one person with well-being just barely above your threshold (whatever it is), or (b) 100 trillion people, each with arbitrarily higher momentary well-being than this one person, who will each live forever. We can choose now to pick (a) or (b). You might think it odd if your view says that picking (a) is permissible.

  4. Under the pair of views I was considering (I am now equipped with the proper terminology thanks to Nick Beckstead: strict, asymmetric person-affecting view + threshold condition on individual well-beings), it seems to me we cannot make transworld comparisons without specifying a particular action whose possible consequences those worlds are stipulated to model. For if we *actualize* any such world, this means we have to *do something for it to be the case* that, under some action, this or that world is actualized. In particular, under the two hypothesized views, we need to know who is a merely possible individual and who is an actual individual.

    So what action do you have in mind when you contemplate moves @a and @b? Who’s actual and who’s not?

    1. Thanks – you’re a fast reader! What I had in mind is that you can bring about either (a) or (b) (not both), and that the one person in (a) is identical to no one in (b). (And that no one in either population is identical to anyone alive now.) This means that everyone in (a), and everyone in (b), is a merely possible person.

      1. Then I guess the two views I was assuming imply that neither is preferable/obligatory/better. By strict present-affecting-ism with asymmetry, merely possible persons do not count unless we would make them unhappy were we to actualize them. By the threshold condition, it is impermissible to make unhappy people if their unhappiness entails a well-being below the threshold. So here the threshold condition is vacuously satisfied by both (a) and (b). So both @a and @b are permissible.

        Do I have a positive argument for the two assumed views? Nope. So even if there is no issue here, I still need to work out in more detail the theoretical advantages and drawbacks of the two views, individually and collectively. I’ll come back when I have something to say about this.

        1. Sounds good, Andrews. I myself find the conclusion that both @a and @b are permissible to be counterintuitive, but maybe others don’t – at any rate I don’t think this is the strongest objection to the sort of person-affecting view you outlined (again I’d recommend Parfit, Broome, and Beckstead, among others, for more/better objections).
