Written by Roger Crisp
Imagine two worlds quite different from our own. In Non-intervention, if a person becomes ill with some life-threatening condition, their pain may be alleviated, but no attempt is made to save their life. In Maximal-intervention, everything possible is done to save the lives of those with life-threatening conditions.
Our world lies in between these two. In many countries, over the last year, a great deal has been done to try to save the lives of those very ill with Covid-19. But more could have been done. In the UK, for example, we could have doubled the percentage of GDP spent on the NHS by decreasing expenditure in other areas, such as welfare, pensions, education, and transport, or of course by increasing taxation.
How should we decide where to draw the line and stop saving lives? Those in Non-intervention might justify their hard-hearted approach by noting the huge costs of intervention to the well-being of the less ill members of the population – to put it in terms of current political terminology, they may be said to be prioritizing the economy over health. And of course those in Maximal-intervention will urge on us the duties of beneficence we owe to others, the rights of those in direst need, and so on.
These justifications are what might be called ‘person-affecting’, in that the interests of one group of existing people are allowed to trump the interests of another group. But the Non-interventionists can also appeal to impersonal considerations, arguing that the vast resources spent on keeping very ill individuals alive, especially given the effects of their illness on their quality of life, could be used to promote overall well-being more effectively through programmes to encourage the having of more children who would not otherwise exist.
This impersonal argument is rarely heard in current public debate. It is also quite contentious among philosophers. Some deny impersonal ethical principles entirely, while others accept a ‘hybrid’ view, according to which we must not bring into being individuals with an overall negative quality of life, but have no reason to bring into being individuals with a positive quality of life. But my guess is that there would be at least some sympathy for the argument among the general public, many of whom are prepared to pay certain costs (such as refraining from flying) in the hope of improving the prospects of future people in general, even if the very identity of those people depends on the actions we currently take. At the very least, if some much more dangerous pandemic than Covid-19 were to develop in future, it is an argument which probably will, and certainly should, be discussed more widely than at present.
“But the Non-interventionists can also appeal to impersonal considerations, arguing that the vast resources spent on keeping very ill individuals alive, especially given the effects of their illness on their quality of life, could be used to promote overall well-being more effectively through programmes to encourage the having of more children who would not otherwise exist.”
If we had a workable quark-for-quark replication technology, would the argument equally support the claim that we should replicate sick people and let the original die if replicating them was a way of promoting well-being more effectively than trying to cure them?
And a side note: the entire notion of medical ethics is based on person-affecting premises, so the debate you’re calling for here could hardly take place within such a context.
Thanks, Géraud, for both of your comments. On the first one: Yes, if the view is a utilitarian one; not necessarily if it’s not. For example, someone might think that there is a reason to bring about people with lives of positive value, but that it would be wrong to kill in order to do so. On the second one: If you mean philosophical medical ethics, then there are people who deny the relevance of person-affectingness (e.g. most utilitarians). If you mean the ethical thinking of medical professionals, then you’re largely right, but my suggestion is that they might want to reconsider.
This makes me think back to reading Parfit’s ‘Reasons and Persons’, where there is quite a section on aggregate population happiness problems. And something seemed deeply problematic there: you can no more add up happiness than add up everyone having a mental image of a chair. And this also seems to be what passes by a little too easily in your phrase ‘overall well-being’.
The only way to a solution, that is, to find a meaning for aggregate population happiness (or the like), seemed to be to see that aggregate as a single coherent object with a moral status of its own. Because unless you can unify the basis of both ends of this (part and whole, individual and aggregate), you cannot solve it as a moral problem; you would need some *other* basis of evaluation as well. So you need a more abstract, generalised (panethicist) idea of moral status, something more like inclinations or tendencies, which you can imagine very non-human things (including aggregates) having too.
When you construe the aggregate in that more mechanical way as having inclinations/tendencies (rather than ‘happiness’), it can be understood as made of components, which can be meaningfully assessed for how they contribute to the whole. And they need not simply add up like numbers or an amorphous lump (as ‘happiness’ was supposed to), but more like parts of a building, or lines of software forming an algorithm.
But then what is the ‘inclination’ or ‘goal’ of that aggregate, of humanity overall? That becomes the problem now. It seems difficult to say. And yet it seems not impossible …
Thanks, Harrison. Aggregation of happiness isn’t straightforward, of course, but it is something we seem able to do in our everyday life. Imagine that the local library has to repaint its reading room, and finds that 75% of people have a very strong preference for green, and 25% a weak preference for blue. Other things equal, the green paint will produce more happiness. This group may have ‘moral status’, but only in so far as its members have moral status. There are some problems with the ethics of collectives, but I’m inclined to think they arise primarily at the level of action rather than that of value. (You have probably read Parfit’s chapter on ‘mistakes in moral mathematics’.)