
More Demoralizing

Readers of this blog may remember a contribution by me on ‘Demoralizing Ethics’ earlier this year. It set out some arguments (from a paper on religious pluralism) for, at least initially, avoiding moral concepts and language in ethics. These arguments were based on parsimony and on avoiding emotional distortion, and outlined a demoralized ethical approach based on well-being or welfare. In a paper presented in June at the Uehiro-Carnegie-Oxford conference on the ethics of cell and gene therapy, I built on those earlier arguments to sketch the following case for demoralizing welfarism, using some thought experiments involving different worlds with different numbers of individuals and different psychological histories.

Imagine a causally isolated world, inhabited by a single, intelligent, mentally and physically healthy, rational human being. The world contains resources for survival. If this individual does not act, they will die. So what should they do? In the short term, of course, they must provide themselves with sustenance and shelter. But why should they do that? To advance their own well-being. So one relevant consideration is what well-being consists in. If hedonism is correct, then the rational way for this individual to act will be to maximize the balance of pleasure over pain, across the rest of their life as a whole. But if well-being is constituted by more than pleasure, and, say, accomplishment or achievement also matters, then they may, given that they have the talent to do so, be better off producing some impressive work of art, even if that is overall hedonically costly.

Well-being, however, is not the only relevant consideration here. From the prudential point of view (and the example is of course intended to rest on some conception of that point of view), personal identity is also relevant. Now consider two variations on the example. In the first (World 1), the experiences of the individual over time will be continuous and very tightly connected over the whole of their biological existence. They remember everything, their beliefs and desires do not change, their actions all emerge from previous intentions, and so on. On anything like a standard Humean, psychological, or reductionist position on personal identity, the rational strategy for this individual at any time is to act so as to maximize their well-being across the rest of their mental and biological life. Animalists will agree, as will those who accept anything like a ‘Cartesian’ view, according to which personal identity depends on the continuing existence of the soul. Now consider a very different case (World 2). At the end of each 24 hours, the maximal number of the individual’s individuating beliefs, desires, intentions, and so on, that can rationally change do so. From one day to the next, they continue to believe that the sky is blue, for example, can remember how to think, and naturally feel thirst and hunger. But they remember nothing of the previous day, have quite different dispositions (are, say, cheerful rather than phlegmatic), form entirely new projects, and so on. If we also assume some lack of continuity of experience during sleep between one day and another, many reductionists will claim that the person on the previous day has been replaced by a new person, or at least that the radical lack of connectedness has implications for the degree of concern the first-day-person should have for the second-day-person (especially if we assume that the first-day-person is aware of the relevant metaphysical facts in their world). The animalist and the Cartesian, however, may be more inclined to think that the first- and second-day person are identical.

In World 1, where nothing changes, all that matters is which theory of well-being is correct, on the plausible assumption that the individual has a reason – indeed overall reason – to promote their own well-being. In the second world, more may be required: the truth about identity over time, and the truth about whether rational egoism is correct or whether there are reasons to promote the well-being of other persons that may on occasion override egoist reasons. If the individual in World 2 has knowledge of the metaphysics, they might have an opportunity towards the end of a day to take steps to make the existence of the next day’s individual more comfortable, steps which would be overall costly to today’s individual to take.

I have described one world, World 1, in which most would agree that what matters is only, or primarily, the correct theory of well-being. That world is far from our own, in which individuals are in a constant state of flux. In World 2, two more things matter: the correct theory of personal identity, and whether there are non-egoistic reasons of sufficient strength on occasion to override egoistic reasons. It may be tempting to think that, on, for example, a reductionist view of personal identity, World 2 requires us to ask ‘moral’ questions. Given the mental switches, for example, would it be wrong of the earlier individual not to take steps to help the later? Is it required by morality that they do so? Would it be generous, or kind? Do they have a duty, and, if so, are they blameworthy if they do not help?

I suggest there are strong reasons to resist this temptation. The practical questions raised for the agents in these worlds can be fully answered without using moral terminology, and I believe that they should be so answered. On the face of it, it is hard to see what more these agents need. Their questions about what to do are fully answered by a position which provides complete accounts of well-being, personal identity, and reasons for action. The view that these are always sufficient to answer any practical question might be described as welfarism: all that matters is well-being, who gets it, and how much. Practical or ethical decisions, then, are best seen as ultimately distributive, where the only distribuendum is well-being.


2 Comments on this post

  1. Richard Yetter Chappell

    One question that might not be answerable without moral terminology is how the agent should feel about the (in)actions of the previous day’s agent. Supposing the previous day’s agent could have significantly aided the present person, but instead totally disregarded their present interests, do they warrant resentment (or other negative reactive attitudes)? What if the previous person was outright malicious, and laid traps to deliberately harm them? To answer these questions, it seems like we need to think about whether those past actions were wrong. (Or perhaps it is the other way around, and in determining that blame or resentment is warranted, we have thereby determined that the acts were wrong.)

  2. Thanks, Richard. Good point. The reactive attitudes might have to be at least ‘mentioned’ if not ‘used’ if one’s faced with the question how one has overall reason to act in light of facts such as those you mention. But one might state that question without reference to wrongness: e.g. ‘how should I act in light of the fact that anger towards this person who’s disregarded my interests would be appropriate’? My own view is that (a) there are no norms of appropriateness for the emotions, and (b) even if there are, in the case of the negative emotions there are welfarist practical reasons to repress them as far as possible in one’s own case. Tyler Paytas has a great paper on this in ETMP: https://link.springer.com/article/10.1007/s10677-022-10317-5.
