
Adding Happy People

Almost every week there’s a headline about our planet’s population explosion.  For instance, Indian officials recently confirmed that India is projected to overtake China in just over a decade, becoming the most populous country on Earth.  Many worry that the planet is becoming increasingly overpopulated.  Whether it is overpopulated, underpopulated, or appropriately populated is a challenging ethical question.

Let’s suppose a ‘happy life’ is one that would be on balance very well worth living from the point of view of the person living it.  Is it good to add people with happy lives to the world?  This question divides into two more specific ones:  First, is it good to add happy people, in virtue of the good effects of doing so for us already existing people?  Second, is it good to add happy people, independently of any effects on the already existing?  The latter is by far the more intriguing.

The Canadian philosopher Jan Narveson famously answered this question in the negative.  He says, “We are in favor of making people happy, but neutral about making happy people.”  Whether this stance is correct has a wide range of practical implications for procreation, resource conservation, climate change, and existential risks.  Some of the implications are absolutely profound:  since there are very many happy future people who could exist, if morality were in favor of making happy people we’d have an overwhelmingly strong reason to promote the colonization of other planets by our descendants; we’d have very little if any reason to do this if Narveson were right.

But Narveson was blind to an important part of morality.  In addition to being about making people happy, morality is about making happy people.  By adding happy people we in one way make the world a better place, and we have significant reason to do so.  This significant reason would entail that we should add happy people, if there were absolutely no downside to doing so.  Of course, it may be that adding happy people to the current population of Earth would have serious environmental and social downsides, and be a bad thing all-things-considered.

Instead suppose I could push a button that would create billions of happy people living on several large and lush Eden-like planets.  These people would in turn produce further generations of happy people, who would do likewise, and so on… for the foreseeable future.  Pushing the button would cost me nothing, and do no harm or wrong.  Would it be wrong of me not to push the button, in this case?  Yes, I believe it would.

There is a range of arguments that philosophers have offered in favor of adding happy people.  I’ll sketch just two sorts of argument.

The most fascinating argument is based on a kind of skepticism about the moral significance of the boundaries between persons, according to which persons are at most mere containers of what really matters, happiness.  On this view, it doesn’t matter in and of itself where a fixed amount of happiness is placed.  Whether we put it in this or that container, or build a new container to put it in, is in itself irrelevant.  Thus, on this view, making people happy and making happy people are equally morally important, other things being equal.

There are different routes to such skepticism about the moral significance of the boundaries between persons.  One is purely metaphysical:  there simply are no separate persons; there are only sets of experiences.  There’s a set of experiences here where this blog is being written, and another set over there, and there, where it’s being read.  But there are no entities above and beyond these experiences, who have them.  This sort of view is advocated by Buddhists.  Another route is only ‘metaphysically-inspired’, and is consistent with the belief that there really are separate persons.  The idea here is that when we study certain challenging cases within the literature on the metaphysics of persons and personal identity, it appears very difficult to maintain the moral significance of these notions, either in general or within particular parts of morality.  For examples of what I have in mind, see the work of Derek Parfit, Jake Ross, Tim Campbell, Rachael Briggs and Daniel Nolan, and Caspar Hare.  A paper in which I take a ‘metaphysically-inspired’ approach to adding happy people was presented at recent conferences in San Diego and Oslo.

The most fascinating sort of argument in favor of adding happy people isn’t, in my estimation, the most compelling one.  Suppose we grant that it matters whether some amount of happiness is located in the life of an already existing person rather than that of a merely possible person.  Still, a simple and forceful thought is that adding happiness is good to some extent, wherever it’s placed.  It is very hard to deny the analogous claim about suffering:  that adding suffering is bad to some extent, wherever it’s placed.  Surely it would be bad to bring into existence a life of relentless and insufferable pain.  Several philosophers have attempted to defend the following asymmetry:  while it’s bad to add suffering by adding miserable people, it’s not good to add happiness by adding happy people.  None of these attempts looks promising.  But don’t take my word for it; have a think for yourself, and read some of the excellent work in this area, such as a recent paper by Melinda Roberts.  My sense is that it remains as baffling as ever to think that it could be bad to add miserable people but not good to add happy people.

Largely because I think the asymmetry can’t be defended, I think the world would in one way be made better, by the addition of happy people to it.  I believe this gives us very strong reason to colonize a variety of planets throughout the galaxy, bringing about trillions of happy lives.  How much reason?  I suppose about as much reason as we’d have to prevent trillions of miserable lives from coming into existence.


18 Comments on this post

  1. One possible objection to this (not necessarily one I subscribe to). Suppose:

    1 – We gain the technological means to create vast populations of what we understand to be happy people.

    2 – Our understanding of happiness is incomplete, probably because of a poor understanding of the brain (or whatever). We THOUGHT that people wanted to live on lush Edens, when really there’s this awesome form of happiness associated with being put in a metal box hooked up to electrodes and being electrically stimulated (or whatever). The vast resources associated with our initial colonization were misused and resulted in subpar happiness.

    This is an argument that, before we embark upon any relatively far-reaching, high-stakes happiness venture, we should understand more thoroughly what happiness is, exactly.

  2. I have grown to dislike the word happiness because it evokes narrow associations of cheerfulness, smiling a lot, and perhaps a dull kind of amicability. I like “subjective well-being” better because it avoids these associations and is better suited to include other, perhaps darker joys. But this is just semantics.

    I have two major concerns about making happy people: the first is negative externalities. Making Eden worlds could cause more suffering as a side-effect, perhaps in sentient entities that are not persons.

    The second concern is fake utility, i.e. “happiness” that is actually suffering in disguise. It is very important to remember that these means-end justifications can be perverted very easily, and the practical processes by which the decisions are made can be very untrustworthy. Humans have not evolved to get this right!

    1. Thanks Hedonic Treader! Interesting points about happiness; my main claim is that it is good (to some extent, but not necessarily all-things-considered) to create lives with overall positive well-being levels (leaving it an open question what well-being consists in). I am sympathetic to your cautionary notes, just as I was to Caley’s. I don’t view these as concerns about my main claim, but about its practical implementation.

  3. I don’t understand the need to have a population ethics. Why is this of practical importance? Colonizing planets is an event so far off we can’t hope to effect it. Also, why have any confidence that one definitive population ethics exists?

    1. I want to add a clarification because the obvious answer to my first question is to consider climate change or nuclear war. But as a layperson I do not recall anyone discussing climate change or nuclear war ever mentioning population ethics. I just googled a paper by Anthony Millner that does, but without reading this blog I never would have found it, so I am still under the impression that policy decisions will be determined almost entirely by less systematized intellectual frameworks than these elaborate population ethical theories. For example, the Catholic Church says you should not use contraception and that probably influences population. This is the sort of morality I expect to matter.

      1. Thanks Alex! Yes, point very well taken on the practical relevance of population ethics. I wrote a blog post on this earlier in the academic year: https://blog.practicalethics.ox.ac.uk/2014/10/how-important-is-population-ethics/
        And I also organized a workshop on theory meeting practice in population ethics: http://www.populationethics.org/november-2014-workshop/
        There are a few other projects I know of that are attempting to bridge theoretical population ethics to the real world, e.g.: http://www.iffs.se/en/project/valuing-future-lives/
        You might be interested in John Broome’s book *Climate Matters* (which gets into population ethics). I agree colonizing planets is far off, but there are things we can do now to affect possibilities like these, for example by reducing existential risks.
        I’d be interested in any ideas you have on this front.
        Also, I am not sure that one definitive population ethics exists, but I have some confidence that it does for general metaethical reasons (very roughly, these questions feel like they have answers). I do not have a high credence in any particular theory just yet, though.

  4. Thanks for the links. Regarding affecting colonizing space, I don’t think we have any reason to rate the probability that we could colonize space at any particular value. It could be 50/50, 0.01 or literally zero. The sticking point is not that colonizing space is physically impossible; it’s clearly physically possible, but we know nothing about whether human societies will ever be able to do it. As far as I know, we don’t even know if we could colonize Antarctica. So since we don’t know anything, we should not plan for what are really just fantasies.

    I am more optimistic about trying to plan for the far-distant future given that we assume science stays the same (with some gamma-like discounting so we don’t go nuts). I would be interested to know if existential risk stays relevant as a concept under those assumptions.

    1. I suspect you’re massively underestimating our chances of colonizing other planets, certainly within the solar system. Mars, at least, seems very likely over the next century or two. Incidentally, Wikipedia tells me there are several thousand people living on Antarctica. Does that not count as ‘colonized’?

      1. With Antarctica my question would be if we will reach a point where folks are staying permanently. From what I can see the main reason to be there now is science. Sending folks to Mars like we sent them to the Moon is possible, but Antarctica is much closer and more hospitable and we apparently see no genuine colonization.

        I would also point out, by the way, that even if you could say something about probabilities, if humans evolve first into blob creatures before it happens, I don’t care!

  5. Just to build on what Alex says: like him/her, I think that it’s pretty unlikely that we’ll be colonising other planets in the near future (why would we? what can we get from those planets? we’d only do it if they could be self-sustaining, and they won’t be).

    More specifically, I don’t think we’ll start colonising other planets before the Earth population starts shrinking. In all developed countries (USA possible exception?), fertility is below replacement rate. It seems that as a matter of empirical fact, rich people don’t want to have two or more children. So that biological, Darwinian imperative to spread and make more people will disappear. If we are to go to the stars, then it will be purely out of interest, not because of some need for the resources of other planets.

    All this is not a comment on whether we should make more people, of course; just to say that one of the historical drivers of making more people is likely to stop soon, whether we think that’s a good thing or not.

  6. Not much to say, but I think you may have missed an important reference. The asymmetry you refer to at the end of the post has been thoroughly explored by David Benatar (see *Better Never to Have Been*, for example). As he believes (and gives compelling arguments for this to be the case) that this asymmetry exists, he concludes quite convincingly that it is not moral to make happy people.

    1. Thanks Pierre! Yes, there is indeed a huge literature on the asymmetry. I merely asserted that no defense of the asymmetry is convincing; due to space constraints I couldn’t show that in the post. Benatar is no doubt a very good philosopher, but most people working in population ethics find his view to be one of the least plausible options. There are very good responses to him by Elizabeth Harman in a critical study of his book in a 2009 issue of the journal *Nous* as well as by Campbell Brown in “Better never to have been believed” (*Economics and Philosophy* 2011).

  7. Glad to read your post, Theron. The problems I thought about when reading it were: firstly, isn’t making happiness the measure of whether a life is worth living very teleological and reductionist? E.g. there are people who have had miserable lives but have done interesting and admirable things; why should the many experiences in life always be tipped towards the happiness scale and judged by its standards? Basically, I mean that emotions/experiences such as discomfort, rage, hatred, depression, etc. are generally evaluated too negatively.
    Secondly, if you take the Buddhist view as you cited, then the number of happy persons doesn’t matter, because happiness is an impersonal experience and the individual self is empty/an illusion anyway. If you don’t take the Buddhist view and think happiness is grounded in individual subjectivity, then the number of happy people in the world doesn’t matter to each individual person either, because happiness would be a personal thing, different in each individual’s circumstances, and not sharable with a mass of happy people. So why would the total number of happy people matter at all?

    1. Thanks Amy! These are good comments. On the first: I am using ‘happy people’ to refer to people whose lives are on balance worth living, and I don’t intend to commit to a narrow hedonistic conception of what well-being (that which makes life worth living) consists in. For my purposes, a happy person could be one who scores low on hedonic dimensions of well-being but sufficiently high on non-hedonic dimensions (e.g. intellectual, aesthetic, social, or athletic accomplishments). On your second comment: you’re right that on the Buddhist view it would be the total amount of happiness/suffering that matters rather than the total number of happy/miserable people, but plenty of people who reject the Buddhist view and accept that there are separate persons also believe that even if the total number of happy people doesn’t matter *to* each person, it may well nonetheless *matter, period* (or matter, “from the point of view of the universe” as Sidgwick said). Suppose I can save persons A, B, C, D, and E, or instead just person F. And suppose all six would have roughly the same quality of life and amount of life ahead of them. Many people believe, I think plausibly, that I should here save the five over the one, if other things are equal. It seems that the total number of good lives saved matters here, even if it doesn’t matter *to* each individual. There’s an interesting exchange between Taurek and Parfit about saving the greater number that is in part about “mattering to” versus “mattering, period”. See Parfit’s paper called “Innumerate Ethics”. Certain contractualist views, like Scanlon’s, famously struggle to say sensible things about saving the greater number. But Matthew Liao has a nice paper on how one can be a nonconsequentialist (and respect the “separateness of persons”) and yet still count the numbers, called “Who Is Afraid of Numbers?”

  8. If making happy people makes people happy, then why not? (Excluding, as I think you suggested, overpopulation, resource shortages, etc. And if we were not excluding them, we would probably end up making unhappy people?)
