
Giving priority to good people

It’s an axiom of healthcare prioritisation that all persons should be treated equally. Different theories of prioritisation interpret this idea differently, but the basic thought is the same across all plausible theories: all persons’ lives are of equal value, so if it’s one life against one thousand, one should save the thousand. But if it’s one life against another, one shouldn’t have a preference between the two.


However, though this sentiment is correct, its phrasing can give rise to a confusion: though the intrinsic value of each person is the same, the instrumental value of each person can be very different.


Suppose that I discover two drowning people. I am able to save the life of one of them, but not both, so I have to choose whom to save. They are both the same age, have the same history, the same life expectancy, and the same expectation of disability, and both are strangers to me. Is that enough for me to know that I shouldn’t have a preference to save one over the other? Standard accounts of healthcare prioritisation would say so, as the benefit to each person is the same.


But now suppose that we know that one person is a medical researcher, on the cusp of making a radical breakthrough that will result in tens of thousands of lives being saved.  The other works on a factory line, and will produce comparatively little in the way of social impact.  Does this change the situation?


Given our starting axiom, it has to. Our life-or-death situation is no longer a choice between saving one life or another. It’s a choice between saving one life – the life of the factory worker – or saving thousands of lives – the life of the medical researcher plus all the lives that the researcher would go on to save. The lives that the medical researcher would save might be causally more distant and therefore less salient – but that does not make them less important. So I should save the medical researcher.


That is, when making life-or-death decisions – as happens when we make decisions about healthcare prioritisation – we can’t just look at the intrinsic value of each life. We have to look at the person’s instrumental value as well – the good that that person will go on to achieve.


Currently, this is not properly taken into account by healthcare services, or by the health economists who rely on dollars per quality-adjusted life year (QALY) as the metric for deciding between different health interventions. But this idea could be taken into account, and it should be. Health economists are already able to assess the social impact of cures for different diseases. We could make a similar assessment of the social impact of different forms of employment, and greater healthcare resources could be devoted to those whose employment has a greater impact. Similar reasoning would motivate prioritising the young over the retired, as the young will, in general, generate a greater social impact.

The same thought could also apply to the distribution of educational resources. All other things being equal, the more talented and hardworking children will produce greater social value in their lives than their peers. So, all other things being equal, greater educational resources should be invested in them, because society as a whole has more to gain from doing so.
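
To make the proposal concrete, here is a minimal sketch of how a standard cost-per-QALY comparison might be extended with an instrumental-value weighting. The figures and the instrumental multiplier are entirely hypothetical illustrations, not any real health-economic model:

```python
# Hypothetical sketch: extending a cost-per-QALY comparison with an
# instrumental-value weighting. All figures are invented for illustration.

def cost_per_weighted_qaly(cost, qalys_gained, instrumental_multiplier=1.0):
    """Cost per QALY, with QALYs scaled by a (hypothetical) multiplier
    reflecting the recipient's expected social impact."""
    return cost / (qalys_gained * instrumental_multiplier)

# Two interventions with identical cost and identical direct benefit...
standard = cost_per_weighted_qaly(cost=50_000, qalys_gained=5)
# ...but one recipient is judged to have three times the social impact.
high_impact = cost_per_weighted_qaly(cost=50_000, qalys_gained=5,
                                     instrumental_multiplier=3.0)

print(standard)     # 10000.0 dollars per QALY
print(high_impact)  # ~3333.3 dollars per weighted QALY – ranked better value
```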


Valuing not merely the person under consideration, but also the lives of those whom that person will go on to affect, is a simple idea. But there would be huge ramifications for resource prioritisation if we took it seriously.


13 Comments on this post

  1. Anthony Drinkwater

    Thank you for this enlightened post, Will.
    You are of course absolutely right in your analysis, but unfortunately do not carry it to its logical conclusion.
    In the case of health it is clear that, as resources are limited, we should arrange for hospitals to have spare beds, experienced surgeons and operating tables always available for on-the-cusp scientists and high-impact philosophers: if these resources were already being hogged by factory workers, single mothers or backward children, we would have a very negative social impact.
    Factory workers/single mothers/backward children should only be admitted if the number of spare beds/surgeons/theatre time exceeds a certain calculated percentage of total beds (space unfortunately forbids me to give the exact formula here).

    1. Anthony Drinkwater

      PS I’m still in the hit-man business, by the way. But as my targets are exclusively « bad people » with a negative social impact, I hope you’ll belatedly accept that I’m at least as virtuous as a merchant banker. (I target very few factory workers, incidentally: they’re usually dealt slower deaths – exhaustion, redundancy, depression, disease…)

      1. I’m not as squeamish as Anthony. I often cheer the good guy (e.g. Clint Eastwood, circa 1975, portraying a member of the San Francisco Police Department) when he wastes the (zero-uncertainty*) bad guy in action films. I’m a bit reluctant to cheer if the “good guy” wastes people on the expectation of future personal badness. I’m completely disinclined to regard as good someone who wastes people on the grounds that other people with similar characteristics might statistically be expected to do some future badness (e.g. Los Angeles Police Department members, circa 1990, portraying Clint Eastwood).

        *I think uncertainty dominates the decision-making here – pretty much any society that’s actually been asked (i.e. democracies) rapidly arrives at the idea that false positives (wasting innocents) are to be avoided more than false negatives (letting baddies go).

        Note that I don’t think Will is actually advocating anything as sinister as the Los Angeles Police Department, circa 1990. My point is that statistically compelling interpretations of past group goodness/badness, when used to condition expectations regarding future behaviour, amount to systematic injustices towards individuals. This does seem to be the logic he argues for in the medical researcher case.

        1. Anthony Drinkwater

          Thanks, but I don’t think I’m too squeamish, Dave – in my line of work it’s considered gross misconduct.

          As for Self’s logic, its strength lies in its broad sweep and its deceptive simplicity, reminiscent of the best of de Selby or Botul.
          1. Persons should be treated according to their worth.
          2. We all have the same intrinsic worth, so what distinguishes one from another is their instrumental worth, their capacity for instrumental good: a good person maximises, or will in the future maximise, the well-being of others.
          These are clearly self-evident truths, and Self is quite right to treat them as axiomatic.
          As it could be objected that we cannot with certainty know the future capacity that a person has, Self suggests:
          3. The notion of probable capacity to maximise well-being suffices to judge a person’s instrumental worth.
          Economists and others use the notion of probability, so this is also clearly a Good Philosophical Idea.

          My only complaint is that Self limits his discussion to health provision, although he does mention education fleetingly.
          I propose that we have a moral duty to prioritise not only health and education but all other scarce resources on the same basis: examples would be housing, use of the motorway fast lane, water, Glyndebourne tickets, clean air, supermarket queues…

          In order to put this into practice it will be necessary to implant microchips at birth, pre-programmed according to criteria defined by a subcommittee of Oxford ethicists (aided by an ad-hoc group of Nobel Prize-winning economists).
          This happy marriage of technology and high-impact philosophy will ensure that only the good get priority and we will all live happily and ethically ever after.

          1. Anthony Drinkwater

            Brainstorms happen: for “Self” above, please read “Crouch”.
            Humble apologies to both Wills.

  2. I don’t understand this logic. You will save the medical researcher, who produces more social progress, which engenders more people (many of whom may work in factories). Yet you neglect the value of the factory worker, who, for all anyone knows, could write like Pushkin in his spare time and play beautiful violin. Are you going to just coldly and numerically assign value to people because they can save more (mostly average) lives, while at the same time being arrogant towards what may be an ordinary and average life? I would just save the person who most immediately appeals to me.

    anxiousdelusions.blogspot.com

  3. Lots of people share the intuition that not everyone’s claims to scarce resources like medical interventions are equal. Few people would approve of a liver transplant going to a rapist or murderer ahead of someone with an otherwise identical claim who has led a blameless life. But I think the common intuition is that people forfeit, through bad actions, consideration to which others remain entitled. Fewer of us have the intuition that things such as liver transplants ought to be decided on the basis of potential for the good, and I think this has at least four components:
    (1) talent/potential for doing good is in large part exogenous (ie unchosen) and hence not the sort of thing we usually like rewarding;
    (2) it would reward those already living privileged lives (like Oxford students/post-docs) at the expense of people living less privileged lives (like their separated-at-birth twin living in a sink estate in Swindon) in a world with unequal access to the benefits of education;
    (3) conceptions of the good are contested;
    (4) calculating potential for the good is grossly uncertain.

  4. This sounds appallingly dangerous and unfair. If only as far as consequences are concerned, you have absolutely no way to guarantee that the medical researcher will use his potential to do good. He might turn – he might already be, but you just don’t know – into an evil person and choose to use his findings to kill thousands of people. He might be hired by a despot to design biological weapons. He might be a terrible husband, father, and friend, making all the people around him frustrated, unhappy, or depressed. By contrast, the factory worker might be a wonderful person, cheering up and helping kin, neighbors, and strangers. He might even raise children who will go to Oxford and change the world, while the medical researcher raises obnoxious Wall Street yuppies squandering their money on sports cars and fancy hotels.

    Unless you can factor all that in, current policies seem much fairer. Note by the way that in countries where only the well-off can afford extensive healthcare, factory workers are already less likely to have access to equal opportunities. Furthermore, as noted in comments, the factory worker is already at a (likely undeserved) disadvantage, which you might want to compensate for.

  5. Hi there, thanks for comments!

    Dave – great post! You’re right that people would have stronger views about liver transplants for rapists. There seem to be two factors at work: i) whether the person produces notably good outcomes, or notably bad outcomes; ii) whether those outcomes have happened already, or are in the future. I think both are relevant to the intuitions.

    I’ll go through a few of the considerations raised.

    First, there’s one response that is often raised and is definitely mistaken: namely that we can’t ever know whether the medical researcher or the factory worker will do more good, so we should treat them the same. The reason it’s mistaken is that we don’t need to know. We can just assign probabilities to the different possible outcomes, values to those outcomes, and then compare the expected benefit of saving each of those lives. We might not ever be certain that the medical researcher will save more lives than the factory worker – but we can have good evidence that makes it extremely likely that they will. (An analogy: we’ve heard that there will be a terrorist attack, and we see someone with a bomb strapped to them entering a crowded area. We don’t know that they’ll do harm: but we have good reason for believing that they will; and that’s sufficient grounds for restraining them.)
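
    (To make that concrete, here is a toy expected-value calculation of the kind I have in mind – a sketch with made-up probabilities and life counts, purely for illustration.)

    ```python
    # Toy expected-value comparison; all numbers are invented.

    def expected_lives_saved(outcomes):
        """Sum of probability * lives over mutually exclusive outcomes."""
        return sum(p * lives for p, lives in outcomes)

    # Saving the researcher: her own life, plus a 10% chance that her
    # breakthrough succeeds and saves 10,000 others.
    researcher = expected_lives_saved([(0.10, 1 + 10_000), (0.90, 1)])

    # Saving the factory worker: his own life, with certainty.
    worker = expected_lives_saved([(1.00, 1)])

    print(researcher)  # ≈ 1001 expected lives saved
    print(worker)      # 1.0 expected life saved
    ```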

    Second, there’s a response that’s good as far as it goes, but still ultimately mistaken. This is that the factory worker is typically underprivileged compared to the medical researcher. This is a good point because it certainly gives a reason, as far as it goes, to benefit the factory worker rather than the medical researcher. However, the argument in favour of saving the medical researcher wasn’t to reward the medical researcher, or because we were claiming that her life is more valuable than anyone else’s. Rather, it’s because the medical researcher will save many more lives in the future – including the lives of the destitute. What looks like a life vs life comparison really isn’t – it’s a one life vs many lives comparison. Similarly if we are benefiting ‘Oxford postdocs’ rather than their ‘sink estate twin’ – it’s not for the postdoc’s sake that we benefit her. It’s because (as stipulated for the case to work) saving the Oxford postdoc will do much more to improve the world (such as by improving the lives of those living in poverty) than saving the person living in the council estate.

    The real meat is here:

    (3) conceptions of the good are contested;
    (4) calculating potential for the good is grossly uncertain.

    However, there are some elements of the good that any plausible moral view agrees on – like that making people better off (by their own lights) is a good thing, and that the more you make people better off, the better. And, though calculating potential for the good is highly difficult, it’s not impossible. Economists do it all the time when recommending one policy over another (through cost-effectiveness analysis or cost-benefit analysis). There’s no reason why they couldn’t do the same for people – at least at the level of granularity of different professions.

    1. Hi Will,

      thanks – I like the fact you’re challenging the sort of easy/complacent egalitarianism that’s often assumed in the social sciences (though harder to find in real societies). I think you’re working with some fascinating issues. But I think there are still some points I’d make in response:

      “First, there’s one response that is often raised and is definitely mistaken: namely that we can’t ever know whether the medical researcher or the factory worker will do more good, so we should treat them the same. The reason it’s mistaken is that we don’t need to know. We can just assign probabilities to the different possible outcomes, values to those outcomes, and then compare the expected benefit of saving each of those lives.”

      (1) “Doing good” is not limited to saving lives. Saving lives is a small subset of doing good. For your justification to hold you really ought to consider the full spectrum of moral behaviour. (2) I think (?) you’re assuming that the premium is not already there. We pay taxes/give shedloads in charitable donations to subsidise the careers of medical people (incl researchers). Is there an argument that these subsidies are currently too low? Effectively, that’s what you’re arguing (that we underprice the medical researcher vs the factory worker (who makes her way in the private sector/surplus-producing world rather than in the surplus-gobbling world of academia…)). (3) By treating these folks probabilistically you’re saying their *actual* moral footprint is irrelevant; it’s their membership of a set that matters. I think that’s completely wrong-headed, for the same reason I oppose special group-preferential treatment in other settings (blunt forms of affirmative action, etc). I don’t see how a probabilistic justification based on group comparisons is fair to individuals. [Pet hate of mine – I’ve spent many years being lectured about my responsibilities to the world’s poor on the basis of my citizenship by people who have led immensely more privileged lives than I, but whose citizenship is conveniently shrugged off.]

      “What looks like a life vs life comparison really isn’t – it’s a one life vs many lives comparison. Similarly if we are benefitting ‘Oxford postdocs’ rather than their ‘sink estate twin’ – it’s not for the postdoc’s sake that we benefit her.”

      This depends on how closely tied you think lives are, economically. The factory worker’s work helps create a surplus which the medical researcher spends on booze, cigarettes and research.* The medical researcher could not save the lives you want to save without the private sector (factory worker) being prepared not to pocket all her market earnings, but rather to give a portion to people far richer than her who do “valuable”** things. As I said above, I think you’re predicating your argument on the current state of affairs not recognising the sort of value you want to reward. At the least I think you need to defend that assumption.

      *based on empirical observations of med students.
      **Valuable here is a little contestable, since not all the govt’s expenditure is valuable in any recognisable sense. Academic sociology, for instance.

      “However, there are some elements of the good that any plausible moral view agrees on – like that making people better off (by their own lights) is a good thing, and that the more you make people better off, the better. And, though calculating potential for the good is highly difficult, it’s not impossible. Economists do it all the time when recommending one policy over another (through cost-effectiveness analysis or cost-benefit analysis). There’s no reason why they couldn’t do the same for people – at least at the level of granularity of different professions.”

      As I said above – at that level of granularity we already send signals via fiscal priorities and charitable donations. But life is experienced at the individual level – for people like me that (rather than at the group level) is where the meaningful bits of life are experienced. [Quaint, I know.] As for all “plausible moral views agree that making people better off is a good thing”: yes. But making people better off encompasses a beautiful, kaleidoscopic array of skills, talents and capabilities. A librarian and a butterfly collector*** have made my life richer even than the surgeon who fixed my easy-to-dislocate shoulder (Samoan students, rugby) and smashed up finger (Bodleian Library, cricket). I’m pleased to note that governments subsidize libraries and biology, as well as the arts and medicine. Whether Nabokov should be able to jump the queue for medical treatment on account of his staggering ability to bring something of the sublime to a small-ish number of readers is both fit material for the ruminations of Charles Kinbote and something I’m happy to leave to the emergent wisdom of the democratic political process.

      ***Borges & Nabokov.

    2. Thanks Will.
      Of course, I’m perfectly willing to concede that ranking outcomes according to their expected utility is a standard decision-procedure that could be used here. There’s nothing unfair in this. Rather, my point was that, even assuming this, we lack a precise way to assign meaningful probabilities to the outcomes, at least under the descriptions you gave. Such outcomes are not finely-individuated enough to do so, and, as Dave notes, their good-making features are likely to be much more variegated and irreducible to lives-to-be-saved than you assume.

      Furthermore, assuming you’re right that the event “saving the medical researcher’s life” has significantly greater expected utility than the event “saving the factory worker’s life” with the same resources, you’re doing so mostly on the basis of statistical evidence. And allocating vital resources solely on the basis of statistical evidence strikes me as genuinely unfair. This is how a utilitarian justification for criminal policies can work (surveil, control, segregate, jail… those who are more likely to commit crimes), and that strikes me as unfair too.

      This is all the more unfair in the case of healthcare, as it is nothing like the way you expect to be treated when you go into a hospital. For sure, triage has been routinely performed in cases of war, and this was fine insofar as the choice was based on survival expectancy, not on what you’re instrumental to. Even so, the very principle of care allocation is at odds with your life randomly depending on who else is awaiting care in the hospital.

      Now, I grant that, if your ranking of outcomes is correct, then a world in which you save the medical researcher has, in some sense, more good in it than a world in which you save the factory worker. Still, I can’t help finding this ugly – but I suspect this might not be as rational as expected from an ideal agent.

  6. Anthony Drinkwater

    Thanks for your forthright arguments, Will. As I commented above, it’s a real pity that you don’t extend them, as you should, to other spheres of life. For example, take education:

    First, there’s one response that is often raised and is definitely mistaken: namely that we can’t ever know whether the child of the medical researcher or the factory worker will be more successful, so we should treat them the same. The reason it’s mistaken is that we don’t need to know. We can just assign probabilities to the different possible outcomes, values to those outcomes, and then compare the expected benefit of educating each of those children. We might not ever be certain that the medical researcher’s child will be more successful than the factory worker’s offspring – but we can have good evidence that makes it extremely likely that they will.

    Second, there’s a response that’s good as far as it goes, but still ultimately mistaken. This is that the factory worker’s progeny is typically underprivileged compared to the medical researcher’s child. This is a good point because it certainly gives a reason, as far as it goes, to benefit the former rather than the latter. However, the argument in favour of privileging the latter isn’t to reward the medical researcher, or because we were claiming that her child’s education is more valuable than anyone else’s. Rather, it’s because the medical researcher’s child will achieve more in the future….

    Similarly if we are benefiting the children of ‘Oxford postdocs’ rather than those of their ‘sink estate twin’ – it’s not for the postdoc’s sake that we benefit them. It’s that investing scarce educational resources in the Oxford postdoc’s child will do much more to improve the world (such as by improving the lives of those living in poverty) than wasting them on the children living in the council estate.

  7. I am genuinely surprised that I find myself disagreeing with this post, since I am a utilitarian.

    Given the premises – equal intrinsic value (assuming intrinsic trumps instrumental; otherwise, what’s the point?) and doctors’ responsibility to save as many lives as possible – Will Crouch’s argument holds water only in a world with a God’s-eye perspective. Of the two possible arguments Will gives against his own post, I would like to elaborate on the uncertain-probability argument.

    Under the assumption that the doctor has to decide, in limited time, between saving two people, the doctor’s own bias prevents him from calculating the exceedingly difficult probabilities. The American doctor would always save Mark Zuckerberg over Susanne Klatten, even though Klatten heads multiple corporations that benefit people’s lifestyles and their safety, because the American doctor would never have heard of the German billionaire. In the post’s example, I similarly claim that the term ‘medical researcher’ carries much more bias and information than ‘factory worker.’ Even if we can calculate the probability of the two’s instrumental value using statistics, we can still have different confidence intervals (how sure we are of the probability) due to biases and unequal information. We (a doctor, with a greater bias) know much more about the life the medical researcher has lived than the life the factory worker has, simply because the term ‘medical researcher’ is more specific than the term ‘factory worker.’ To prevent such biases, we would be better off if a machine dictated who should live and who should die, though we would still have to tackle the problem of choosing what information to compare (in order to ensure equal confidence intervals).
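
    (A toy sketch of that last point, with invented figures: two estimates of instrumental value can share a point estimate while their 95% confidence intervals differ sharply with the amount of information behind each label.)

    ```python
    # Toy sketch: equal point estimates, unequal confidence. All figures invented.
    import math

    def interval_95(mean, stdev, n):
        """Normal-approximation 95% confidence interval for a mean
        estimated from n observations."""
        half_width = 1.96 * stdev / math.sqrt(n)
        return (mean - half_width, mean + half_width)

    # Same estimated instrumental value, very different evidence bases.
    print(interval_95(mean=100, stdev=50, n=100))  # narrow: specific label
    print(interval_95(mean=100, stdev=50, n=4))    # wide: vague label
    ```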

    Even if the doctor had enough time to cancel out his biases and research enough information on the two to ensure similar confidence intervals, I claim we still have a problem, specifically because we have a better scenario: let the patients persuade each other. The patients (if they have autonomy) should have a greater say than the doctor in whom the operation goes to, assuming they agree. Only if the two patients, given plenty of time, can’t come to an agreement should the doctor’s assessment of their instrumental value come into play. But how often does that happen?
