How should we compare a decrease in average quality of life with a gain in population size? Population ethics is a rigorous investigation of the value of populations, where the populations in question contain different (numbers of) individuals at different levels of quality of life. This abstract and theoretical area of philosophy is relevant to a host of important practical decisions that affect future generations, including decisions about climate change policy, healthcare prioritization, energy consumption, and global catastrophic risks.
One of the central questions in population ethics is whether there is a satisfactory way of avoiding the Repugnant Conclusion, according to which:
For any possible population [called A] of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population [called Z] whose existence, if other things are equal, would be better even though its members have lives that are barely worth living (Parfit 1984).
Most people find the Repugnant Conclusion, i.e. the claim that Z is better than A, to be highly counterintuitive. Thus many take the fact that Total Utilitarianism implies that Z is better than A to count against it, and attempt to find an alternative view in population ethics that avoids this implication. However, as Derek Parfit (Reasons and Persons, 1984) and Gustaf Arrhenius (Population Ethics, forthcoming) have shown, it is difficult to avoid implying the Repugnant Conclusion without taking on board one or more other claims which are, in turn, highly counterintuitive. Many of the puzzles in this area begin by setting up a smooth spectrum of populations, ranging from one in which everyone is at a very high quality of life (as in A) all the way to one in which everyone is at a very low but positive quality of life (as in Z), but where adjacent populations differ only slightly in terms of quality of life.
Here is the simplest such puzzle. Start with population A. Next consider B, whose members are at a quality of life 99.9% that of the people in A, but which is 100 times larger than A. According to many, B is better than A. The slight loss in quality is, according to them, more than compensated for by the enormous gain in quantity. Next consider C, whose members are at a quality of life 99.9% that of the people in B, but which is 100 times larger than B. For similar reasons, according to many, C is better than B. And so on with D, E, F, etc. all the way down to Z. Thus we have a series of premises:
- B is better than A,
- C is better than B, …and so on, all the way down to…
- Z is better than Y.
If the relation “better than” is transitive, then these premises together imply that Z is better than A, i.e. they imply the Repugnant Conclusion.
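For concreteness, here is a minimal sketch of how a total utilitarian would score this spectrum. It is only an illustration: the starting values and the use of 25 letter-labelled steps are mine, and with such gentle per-step drops in quality the final population's quality remains far above "barely worth living" – a full spectrum argument would use many more steps (or larger drops).

```python
# Total welfare along the A-to-Z spectrum on a simple total-utilitarian model.
# The 100x size increase and 99.9% quality retention per step come from the
# puzzle above; the starting values are illustrative.

size, quality = 10_000_000_000, 100.0  # population A: ten billion people

for name in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
    print(f"{name}: size = {size:.2e}, quality = {quality:6.2f}, "
          f"total = {size * quality:.2e}")
    size *= 100        # each successive population is 100 times larger...
    quality *= 0.999   # ...at 99.9% of the previous quality

# Each step multiplies total welfare by 100 * 0.999 = 99.9, so on this model
# every premise (B is better than A, C is better than B, ...) comes out true,
# and by transitivity Z comes out better than A.
```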
Is there a plausible way to avoid this conclusion? Some defenders of person-affecting views will get off the bus early, denying the very first premise that B is better than A. According to them, what matters is “making people happy, not making happy people” – increasing population size does not as such count as an improvement. There are powerful objections to such person-affecting views, as well as ingenious attempts to get them to imply the Repugnant Conclusion. I won’t get into these issues here. I will instead take it as plausible that more is better; that is, one way to make an outcome better, at least if other things are equal, is by bringing into existence more people with lives worth living. Moreover, this is nontrivially better, such that just a slight decrease in quality is plausibly outweighed by a sufficiently large gain in quantity of lives lived.
There are three ways out of inconsistency: we can claim (1) that one or more of the premises is not true, or (2) that transitivity of “better than” is not true, or (3) that the Repugnant Conclusion is true. Solutions (1), (2), and (3) each seem implausible. However, some people at Oxford working in population ethics have recently offered interesting ideas about how to minimize the implausibility of going with solution (1). Derek Parfit appeals to the notion of imprecision in a paper in progress called “Can We Avoid the Repugnant Conclusion?” (given at the Oxford Moral Philosophy Seminar, podcast available here), and Teru Thomas appeals to indeterminacy in a paper in progress called “Vague Spectra” (given at the Oxford Population Ethics Project Work In Progress Seminar). By appealing to these notions, we can, arguably, ease the pain of going with solution (1). For now just consider indeterminacy (as it is more familiar than imprecision).
One might argue that indeterminacy arises in the puzzle case at hand: within certain ranges, it is indeterminate how quality trades off against quantity. A plausible response is that the premises in question all involve tradeoffs outside the range of indeterminacy – insofar as it’s plausible that B is better than A, C is better than B, and so on, it’s implausible that it’s indeterminate whether B is better than A, C is better than B, and so on. But suppose we do claim that some of these comparisons are indeterminate: is that less implausible than simply saying that (some of) the premises are false? Even if so, it is unclear whether it is less implausible to a sufficient degree to make (1) the overall least implausible solution to the puzzle.
Next suppose that indeterminacy comes in degrees (or, if you like, we could carry out the discussion in terms of degrees of truth). Right now I’m not bald, but if you were to continually pluck hairs from my scalp, one by one, I would eventually be bald. Imagine a spectrum of scalps, ranging from my actual scalp to a completely bald one, where adjacent scalps differ by only one hair. We’d start out with 100% determinately not bald scalps and end up with 100% determinately bald scalps. There are many scalps in the middle that are indeterminately bald. But it seems plausible that they do not all enjoy the same degree of indeterminacy. Presumably there are some pretty hairy scalps that are 99% determinately not bald. It’s strictly indeterminate whether such a scalp is not bald, but it’s more determinate than whether a scalp with substantially fewer hairs is not bald (e.g. 60% determinately not bald).
If indeterminacy does come in degrees (or, again, if truth does), this opens the door to offering solution (1) by claiming that at least some premises are not 100% determinately true, but only (say) 99% determinately true. It’s dubious there’d be much intuitive advantage gained here if epistemicism were true, as then there’d be no real indeterminacy, only uncertainty (according to this view there is a precise point at which I’d go from not bald to bald; it’s just that we can’t know what this point is). We could claim that not all of the premises are 100% certain, but then in offering solution (1) we would also be saying that there is a premise that is plain old false.
If there is real indeterminacy, and it comes in degrees, we could avoid the Repugnant Conclusion by claiming that there’s exactly one premise that’s only 99% determinately true. This is because the transitivity of “better than” applies only to “better than” claims that are 100% determinately true. However, one might offer a new transitivity principle to deal with degrees of indeterminacy; perhaps a plausible such principle would imply that if all but one of the premises were 100% determinately true – with the remaining one being 99% determinately true – then the Repugnant Conclusion is (at least) 99% determinately true. I take it that this would not be much of an improvement, from the standpoint of someone interested in solution (1) as a way of avoiding the Repugnant Conclusion. However, it does seem that if there were a large number of premises, and enough of them were only 99% determinately true, then there is no plausible transitivity principle that would imply the Repugnant Conclusion is determinately true to any significant degree. (Similarly, it may be that “as bald as” is 99% determinately true when applied to adjacent pairs of scalps in the scalps spectrum, though 100% determinately false when applied to the first and last scalp).
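To see how the numbers behave on one simple model – a quasi-probabilistic treatment of degrees of determinacy, which is just one way such degrees might be modeled – consider the following sketch:

```python
# Quasi-probabilistic toy model of degrees of determinacy: treat "d%
# determinately true" like a probability, and bound the determinacy of a
# conclusion inferred from several premises from below by 1 minus the sum
# of the individual indeterminacies (as with conjunctions of probabilities).

def conclusion_determinacy_bound(premise_degrees):
    """Lower bound on the determinacy of a conclusion that follows
    (determinately) from all of the given premises."""
    return max(0.0, 1.0 - sum(1.0 - d for d in premise_degrees))

# Exactly one premise at 99%, the other 24 fully determinate:
print(conclusion_determinacy_bound([0.99] + [1.0] * 24))  # 0.99

# All 25 premises at 99%: the guaranteed determinacy drops to 0.75...
print(conclusion_determinacy_bound([0.99] * 25))          # 0.75

# ...and with 100 or more such premises nothing is guaranteed at all,
# matching the thought that a long enough spectrum of slightly
# indeterminate premises need not make the Repugnant Conclusion
# determinately true to any significant degree.
print(conclusion_determinacy_bound([0.99] * 100))         # 0.0
```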
I’ll end with some questions. Does invoking indeterminacy (or some related notion) in this way help solve our puzzle? Is saying that each premise is only 99% determinately true substantially less implausible than simply saying that there is a false premise? Does this yield the least implausible solution to this particular spectrum puzzle about the Repugnant Conclusion? And should we be optimistic that this kind of solution will help resolve other puzzles discussed in population ethics?
(Many thanks to Tim Campbell, Andreas Mogensen, Hilary Greaves, Toby Ord, Jeff McMahan, John Cusbert, Caleb Ontiveros, and especially to Teru Thomas, for helpful discussions.)
In my experience, while the Repugnant Conclusion is academically interesting, it is not much of a problem for real-world choices.
For example, laypeople discussing it often confuse “lives barely worth living” with “lives in absolute poverty” and then conclude, nonsensically, that we should build a “Giant Slum World” basically filled with third-world people. But even if lives in a Giant Slum World were worth living (which is doubtful), there is no guarantee that small reductions in population size could not provide large gains in average quality. That is, the progression of trade-offs between worlds A to Z does not have to map onto our actual options in reality. Perhaps, in reality, an intermediary world G is optimal, where average lives are not as good as they could be, but well above barely worth living.
Imagine a world that is filled with “artificial utility monsters”, that is, beings deliberately designed to have the best possible resource-to-pleasure ratio. In such a world, average utility would be well above barely worth living. It may still be possible to create a world with more beings in it, but since these future beings are already designed to have the best resource-to-pleasure ratio, total utility would decrease. This would be true even if the resulting larger population had lives worth living. One explanation could be that optimality is a certain amount above subsistence, or that adding more beings increases system failure risks, such as pandemics, disproportionately above a certain threshold.
In laypeople’s discussions of the RC, you see absurd misrepresentations. I’ve seen people claim that total utilitarianism “concludes” that we should “just make trillions of really unhappy people”, which is of course absolutely not what it “concludes”. Academia is interesting, but when it seeps into politics, never underestimate the resulting stupidity.
Thanks Hedonic Treader. Even if we do not face the choice of bringing about the Z population, the Repugnant Conclusion is indirectly relevant to real world choices in that it provides a test of the truth or acceptability of theories governing tradeoffs of quality and number that do directly apply to real world choices. You are correct that many people misunderstand “barely worth living”, but my sense is that most people, even when they are imagining the lives in the Z population to be worth living (i.e. not miserable or unhappy), would judge the Repugnant Conclusion to be implausible.
Thanks for your response. I’m still quite skeptical of the applicability of the Repugnant Conclusion to real-world choices; the contexts where I see it mentioned do not map closely onto the theoretical argument.
But aside from this, if people correctly understand and still reject it, a decent default hypothesis is that they are simply being scope insensitive and following an affect heuristic. After all, we know from psychological research that scope insensitivity is the default for most people, and it takes active effort to counter it.
Thanks for this continued discussion. There seem to be several real world contexts where we need to know how to weigh quality of lives lived against number of lives lived (or do you disagree with *that*?). The Repugnant Conclusion could then be *indirectly* applicable, in that it would help us test certain theories about how to weigh up these things (such as total utilitarianism). On your second point, do you know Michael Huemer’s paper “In Defence of Repugnance”? There he makes a similar claim about scope insensitivity. I believe that even if we do suffer from scope insensitivity, it remains an open question whether this fully accounts for our finding the Repugnant Conclusion to be implausible. In a short paper called “Intuitions about large number cases” I offer some reasons in favor of the prediction that, even if we weren’t scope insensitive (and could imagine very large quantities and differences between them), we’d still find claims like the Repugnant Conclusion to be implausible.
Hello Theron, you are right that there are real-world decisions where trade-offs between quantity of lives (or total experience time) and quality are relevant. Even if there weren’t any now, at some point there would be.
I’m not sure we currently have a good quantitative method to do this – how do you weigh agony against other experience, how do you deal with adjustment effects and neurodiversity? And I’m additionally sceptical that the Repugnant Conclusion meme is more helpful than confusing, perhaps not in highly informed academic discussions, but certainly at the intersection with more popular or political discussions.
It’s probably true that many would reject the Repugnant Conclusion even after correcting for scope insensitivity and all the misconceptions – after all, there is a broad diversity of values humans accept or reject. But I think consistency is harder to achieve then, and the repugnance is no longer obvious (to me, anyway).
Thanks for mentioning the papers; I will read them.
Thanks – I find myself sympathetic to most of what you say here.
Thanks for your interesting post, Theron. My small, increasingly slow brain had to read it several times before coming close to understanding it, so I came up with a sort of analogy:
“For any possible population [called A] of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population [called Z] whose existence, if other things are equal, would be better even though its members have lives that are barely worth living”, writes Parfit.
“For any possible string quartet composition [called A] of at least 1000 bars, all of very high quality, there must be some much larger imaginable composition [called Z] whose existence, if other things are equal, would be better even though its bars are barely worth listening to.”
So a short quartet by Schubert is inferior in quality to a very, very long one written by myself …
My intuition tells me that this sort of discussion is somewhat ridiculous, because aesthetics is not governed by quantity.
Is my analogy completely misplaced or unfair, and would I be wrong to conclude that mathematical approaches to ethics so beloved by utilitarians are equally beside the point in the search for the good life?
Note that listening to a quartet has both real costs and opportunity costs. And the degree to which individual tunes are worth listening to is not independent of the perception of length, etc., for example because people tend to get bored.
If we imagined a listening experience where this somehow doesn’t apply, you could bite the bullet: If you feel amazing during a short listening experience, but you could feel half as amazing for three times as long during an alternative listening experience, all else equal, the latter can be seen as better.
Of course all else is not equal in the real world.
Thanks Anthony. I am not sure what you would count as a “mathematical approach” to ethics. Suppose one said, “if other things were equal, it would be worse if *more* people suffered.” Is that person taking a mathematical approach? Or are you thinking it’s only when there are tradeoffs that it becomes mathematical? E.g. the claim that, if other things were equal, 1000 people each suffering a slight bit less is worse than 10 people each suffering a slight bit more. At any rate, I am not sure how one can plausibly do ethics without addressing these (and other) sorts of quantitative issues. (Maybe the prospects for an innumerate aesthetics are better, though.)
Thanks for your reply, Theron.
I was of course exaggerating a little: I have no problem with using quantities in ethical propositions such as “killing a million people is worse than killing ten” (I even wonder if I need to add “other things being equal” – therein, I think, lies the difference between our views). I guess I must be amongst those who hold person-affecting views (though I’m not completely clear as to what counts as holding one of those).
Further, my intuition is that using quantitative methods gives the illusion that there are clear answers to ethical questions, just there waiting for a limpid and elegant proof to be discovered. IMHO, this is not the case.
I’m not against numeracy, by the way, just against a confusion of genres. I’m happy that we seem to agree that aesthetics can do without it!
> Does invoking indeterminacy (or some related notion) in this way help solve our puzzle?
No, I don’t think it does. Suppose you have 100 steps in this sequence of increasing populations with decreasing quality of life, and each one is preferable to the one before it with 99% determinacy. On average, one of these steps fails. If you want the first step to be preferable to the last step with high determinacy, then you need to have it possible for one step to be so bad that it overwhelms all 99 good steps. The indeterminacy at each step has to be something like 99% slightly good and 1% extremely bad, so that the expected value is negative, in order for that to work. But this seems like a very strange indeterminacy condition. I don’t think trying to evade transitivity is a promising class of solutions.
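To put rough numbers on that (treating the indeterminacy at each step as a probability, which is only one way to model it – the gains, losses, and probabilities below are mine):

```python
import random

random.seed(0)

# Each of 100 steps is "good" (+1) with probability 0.99 and "bad"
# (a loss of size bad_loss) with probability 0.01.

def expected_chain(bad_loss, steps=100, p_good=0.99, gain=1.0):
    return steps * (p_good * gain - (1 - p_good) * bad_loss)

def simulated_chain(bad_loss, steps=100, trials=10_000):
    total = sum(
        sum(1.0 if random.random() < 0.99 else -bad_loss for _ in range(steps))
        for _ in range(trials)
    )
    return total / trials

for bad_loss in (1.0, 50.0, 200.0):
    print(f"bad step = -{bad_loss:>5}: "
          f"expected = {expected_chain(bad_loss):+8.1f}, "
          f"simulated = {simulated_chain(bad_loss):+8.1f}")

# The expected value of the chain only goes negative once a single bad step
# costs more than 99 good steps are worth -- i.e. one failure has to be able
# to overwhelm all the good steps, which is the strange condition above.
```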
Thanks very much for the comment (and to Theron for writing this post). I think I might be misunderstanding your last sentence, but the point of the approach is certainly not to deny transitivity. The view I have in mind can endorse transitivity, in the sense that it is determinately true that if A is better than B and B is better than C, then A is better than C. (I know Theron writes of “a new transitivity principle”, but I wouldn’t put it like that. The kind of principle that’s needed is one about how indeterminacy behaves under logical inference, the kind of principle that we need anyway.)
As for the earlier part of your comment, I’d certainly concede that one would like to see some particular model within a particular theoretical context, to see how plausible it looks. I can make some sense of what you say by using a quasi-probabilistic theory of indeterminacy, but I’d be interested to know why you think the condition is very strange. Could you please say more?
(I’d also note that the context here is one in which many people feel that they have to accept something very strange, or indeed repugnant. The argument is mainly addressed at such people.)
Isn’t the repugnant conclusion partly a function of too simple a model? If welfare, W, and population, N, are related as W = f(N) where f is monotonically increasing, then sure, increasing N increases W. But (1) f need not be monotonically increasing; and (2) most of the ethics conversations I hear about welfare use either individuals or distributions as the entities to be compared, not aggregates.
On (1), is there any reason to suppose that dW/dN>0 everywhere? I can’t think of any general reason why this would be true. Imagine a lifeboat with capacity for 99 people. It is full. Add another person. Boat sinks. W(99)>0; W(100)=0, so dW/dN<0 at that carrying capacity limit. So it seems to me that the plausibility of the repugnant conclusion depends on the ecosystem model you're using. In some worlds (presumably those without resource constraints) it might be fine. But in constrained worlds of various sorts, it doesn't seem to make as much sense. (Even where the functional form is fixed, if the parameters vary you might encounter changes from repugnance being plausible to it being implausible (I think – I have in mind May's 1974 logistic map paper).)
On (2), higher-order conditions could account for either distributional or individual intuitions that conflict with the intuitions that lead some people towards the repugnant conclusion. You could pick an individual (e.g. the middle one, or the back-marker), or a preferred distributional property (minimised variance, or something else), and then calculate the average, or maximin, or statistical variance (or higher-order moments), and discuss these as part of the (ethical) functional form you are maximising, minimising, or optimising. Otherwise it seems to me that this conversation about overall welfare in population ethics cannot be squared with all the other conversations in welfare ethics/policy, few of which turn on overall welfare. Squaring all the welfare constraints embedded in those revealed-preference real-world policies with the abstract maximising aggregate-welfare goal that underpins the repugnant conclusion might be hard.
Thanks Dave. On (1), I agree that it’s not true that dW/dN>0 everywhere. But the Repugnant Conclusion doesn’t depend on the contrary being true. If the much larger population contained lives at zero quality, then that simply wouldn’t be population Z. The Repugnant Conclusion is making an evaluative comparison of population A and population Z. Not sure I follow your point on (2), but would it help to observe that we could set things up such that all the populations in the spectrum argument leading up to the Repugnant Conclusion (as well as the two compared in the Repugnant Conclusion) are “tied” with respect to distributional properties? (They could contain perfect equality of well-being, or quality of life, across separate persons.)
Thanks Theron. Sorry if I’m getting confused about just what is being claimed.
The argument seems to me to be that N(B) = e1*N(A) and mean(W(B)) = mean(W(A))/e2, where N is population, W is welfare, and e1 & e2 are constants which increase population and decrease average welfare respectively; the argument goes through when e1 >> e2. The relationship between e1 & e2 is doing the work here, isn’t it? This seems to be where you’re going when you say “Moreover, this is nontrivially better, such that just a slight decrease in quality is plausibly outweighed by a sufficiently large gain in quantity of lives lived.” I.e. repugnanters find the loss in average welfare palatable because it’s small compared with the increase in net welfare.
But in real populations there will be a ceiling where e1 >> e2 fails to hold, i.e. there’s an upper bound to the argument. Assume (reasonably enough) that real populations follow logistic curves. That is, they are sigmoid, and tend to a carrying capacity (K) because resource constraints become binding. At N = K, for a new life to be viable, an old life has to end. For baby to live, granny has to die. That’s a zillion miles from e1 >> e2. And if you are at N = K, and you make the next step N(t+1) = N(t)*e1, then N(t+1) > K, so N(t+2) < N(t+1), i.e. more people are starving than being added in the second period after the increase; so dW/dN < 0 for N > K, which I think violates the condition that welfare is monotonically increasing in N. I don’t think you can assume away this objection by saying “I’m not talking about that world; I’m talking about one of the same N in which dW/dN > 0”, because there are no such worlds for that N, K combination. [You could theoretically outrun this objection for a time if dK/dt > dN/dt, but at some point it will return as long as constraints apply somewhere, since if you turn all the earth’s biomass into people they won’t have anything to eat.]
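Here’s a toy version in code (the linear quality function is purely illustrative – any per-capita quality that falls to zero at K gives the same shape):

```python
# Toy resource-constrained model: per-capita quality q(N) falls linearly to
# zero as the population approaches carrying capacity K, so total welfare
# W(N) = N * q(N) rises, peaks, and then falls.

K = 100.0      # carrying capacity
Q_MAX = 10.0   # per-capita quality in an (almost) empty world

def quality(n):
    return max(0.0, Q_MAX * (1 - n / K))

def total_welfare(n):
    return n * quality(n)

for n in (10, 25, 50, 75, 99, 100):
    print(f"N = {n:>3}: q = {quality(n):5.2f}, W = {total_welfare(n):6.1f}")

# W peaks at N = K/2 and dW/dN < 0 beyond it: long before the lifeboat
# actually sinks, adding people lowers total welfare, i.e. e1 >> e2 fails.
```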
Basically I’m saying I don’t see how the condition that e1>>e2 can be scalable in N in real-world, ie resource-constrained settings. But then it’s possible I’m completely misunderstanding it.
(Sorry, my comment below was meant to be a reply to this.)
Maybe one thing to bear in mind is that we don’t have to keep the total resources or carrying capacity fixed as we move from one scenario to another. We may as well be talking about populations in a sequence of different universes, so that the Z-universe has a vastly larger carrying capacity than the A-universe.
Besides that, the carrying capacity of even our own universe seems rather large, so that something like the Z-population could exist in our universe (not necessarily on one planet or even one galaxy!).
Teru wrote: “the carrying capacity of even our own universe seems rather large, so that something like the Z-population could exist in our universe (not necessarily on one planet or even one galaxy!)”
Sure – Earth is for losers. I agree that there’s no reason to assume fixed K (this is one of a number of problems with “limits to growth” thinking). But there’s no general reason to assume that growth in K necessarily outstrips growth in N. It has since WWII, but it may not always be the case. (Solving problems for civilisations that face no resource constraint seems less urgent to me than solving problems for those civilisations that do face resource constraints.)
But I think my point was about the relationship between e1 & e2. Assuming it is fixed for all N (which seems to be the way the problem is always set up??) begs questions that seem important (to me, anyway).
Thanks Dave. I’m sorry if Teru and I are misunderstanding you, but I’ll just note that we’re *not* claiming that whenever you have a population the size of the Z population that all the lives in it will be worth living; clearly that is false. We’re only claiming that there’s a *logically possible* “Z-situation” in which there is nothing but zillions upon zillions of barely worth living lives (I take it you’d not deny that this is logically possible, right?). The Repugnant Conclusion says that that possible situation would be better than the (also logically possible) “A-situation”. And that’s counterintuitive. But since this conclusion is implied by plausible premises, we have a puzzle.
Thanks Theron – that’s a helpful clarification. I’m not sure whether those worlds are logically possible or not (I think those worlds are under-determined – if the normal laws of biology and resource constraint are assumed away, what else is going on in them?).
It’s probably a gap in my education, but I don’t see why those worlds are interesting – in real ecosystems welfare is inversely related to population towards some limit (carrying capacity), whereas the mere addition thing supposes that the two are independent and that the relationship between them is unvarying. I’m struggling to see what’s interesting about the repugnant conclusion, given that it seems to rely on such a baseless assumption (by real-world standards) in the way the problem is set up.
Even if there were one possible Z-situation that were preferable to all the possible A-situations, this wouldn’t strike me as very compelling or interesting – imagine there are a million possible Z-situations, one of which is preferable to the A-situation. Imagine we are crap at strategic planning* and hence have only a random chance of finding it. Then risk aversion would suggest that we prefer A to Z (unless the Z situation were more than a million times better than the A-situation… and I guess since you’re philosophers and all you can just keep repeating the mere addition thing until that is the case… which means this weird fantasy population thing can sustain itself for at least a few hundred more irrelevant publications.)
*In contrast to most of the assumptions on this thread (including mine), this is a splendid one.
Thanks Dave. The Z-situation, like very many biologically unrealistic scenarios, can be spelled out and understood in a contradiction-free way. The bar for logical possibility is very low.
On your question of “interestingness”: logically possible situations we’ll never face can provide important tests for principles that bear on real-world situations we do and will face. For example, even if in every situation I will actually encounter egoism (the view that an agent ought to maximize her own well-being) implies I shouldn’t kill an innocent person to get her purse (because I’d get caught), we can see that the view is implausible by considering (as Plato did) the possibility of wearing a ring of invisibility. In that hypothetical situation, I wouldn’t get caught, and so egoism implies I should kill for money since that’s what’s best for me. But I shouldn’t do that, so egoism is implausible. So now we have learned from a purely hypothetical scenario that a view that bears on real-world situations is implausible.
Similarly, as I said earlier in response to Hedonic Treader, even if we do not face the choice of bringing about the Z population, the Repugnant Conclusion is indirectly relevant to real world choices in that it provides a test of the truth or acceptability of theories governing tradeoffs of quality and number that do directly apply to real world choices. Hopefully this helps partially explain why so many philosophers have taken the Repugnant Conclusion (and other outlandish cases) to be interesting.