
Enhanced Consequentialism: Up, Up… and Away?

Last week, Saturday Morning Breakfast Cereal featured a fun, and genuinely thought-provoking, cartoon. Click below to see the cartoon at FULL SIZE, then come back to hear my take on it:


Poor Superman, trapped in a spiral of consequentialist logic! If one really is as powerful as Superman, then it’s no use pleading for a bit of “me time” on the grounds that one’s individual decisions don’t make that much of a difference. For Superman, it really is true that “every second of quibbling is another dead baby.” Even if we let Superman assign a little more value to his own interests and projects (such as fighting criminals) than to those of everyone else, his preferences still completely disappear in the consequentialist calculus. He might find a life of turbine-operation incredibly miserable, but the loss of good to others if he stops is just astronomically large.

Fine, you’ll say: consequentialism makes outrageous demands of comic book characters. So what? Well, I’m about to argue, the rest of us may soon become much more like Superman in this regard – and if you’re a consequentialist, you don’t get a (moral) choice in the matter.

Start from an objection sometimes raised against consequentialist philosophers (ethicists who say that the right thing to do is whatever produces the best consequences): Hey, if you’re so dedicated to doing whatever makes the world best, how about you quit doing moral philosophy, go start a hedge fund, and give the profits to reputable charities? Surely, says the objection, the good done by the money you’d earn in such a venture far outweighs whatever good you might be doing propounding consequentialist moral theory.

This isn’t a very good objection, because it makes a really questionable empirical assumption. As brilliant as many moral philosophers are, that sort of intellectual power isn’t necessarily the same sort one needs to be a very successful hedge fund manager. A consequentialist moral philosopher might quite reasonably say: I’ve no reason to expect I’d actually produce much good in finance, but I have a reasonable expectation of producing at least some good in my present work, so the best choice is to continue.

Right. But now suppose we offer the consequentialist moral philosopher a fantastic new medical breakthrough. It’s a pill – called a Cognitive Enhancement pill – which drastically heightens one’s facility with numbers and intuitive sense of probabilities. In short (let’s suppose) this pill would make one a much better candidate to be a really successful hedge fund manager. The more plausible this empirical assumption about the effects of the pill, the more plausible the conclusion that the consequentialist philosopher is morally required to take the pill – and then leave philosophy for the City.

This conclusion generalizes wickedly. Suppose we develop another form of enhancement that makes the user much stronger and provides far greater endurance. If you’ve had this treatment and are now much stronger and hardier than ordinary folk, don’t you have a particular obligation to, say, spend your life in disaster areas, efficiently unloading aid pallets from cargo ships? And if you’re a consequentialist and you know that receiving such a treatment would leave you so much better equipped to do great good in the world, how could you justify not undertaking it? And so on – for each new potential enhancement, the consequentialist will always be confronted with the fact that, if she only took this pill (wore this brain-stimulating device, underwent this gene therapy…), the world could become a much better place.

As with Superman, the better you are at producing good in the world, the less permissible it is for you to spend your time doing things other than producing good in the world. The enhanced consequentialist must devote more and more of her life to such projects. She may have to cut ties with her family and move to where good most needs doing. She may be forced to accept that the forms of doing-good she finds rewarding simply have no place when other things are demonstrably more consequential. She may have to abandon other things she cares about (her unenhanced painting abilities, for instance) because the hours she wastes on these could have been spent producing a lot of good in the world. At the extreme, she may need to do nothing but whatever drudgery will produce massive good.

Some people want to be superheroes, whatever the cost. With great power comes great responsibility, etc. Fair enough. But the consequentialist doesn’t get a choice. If enhancement technology makes a personally-unrewarding but general-good-exploding existence possible, the consequentialist may not morally opt out. Consequentialism, then, is a moral theory that may rather soon require its adherents to almost entirely sacrifice their individual preferences and aims in the pursuit of enhanced utility-maximization. Is this an argument against consequentialism? It’s hard for me to say, since I was never a consequentialist to begin with. (Or couldn’t you tell?)

In the end, the problem may cure itself: if enhancement spreads far enough, people who are presently very poor might be brought to a much more humane standard of living, and demands on the best-off will accordingly lessen. But it will be some time (if ever) before enhancement is so widely available. Meanwhile, only those who are fairly well-off will be able to afford it, and so the moral burden will fall squarely, and heavily, on their super shoulders. Will that distant enhanced utopia look back upon them with gratitude for the great sacrifices they had to make? Or will they, like Superman, end expended and unsung – a transitional utility source?


61 Comments on this post

  1. You don't actually mean 'consequentialism', do you?

    You mean utilitarianism.

    Utilitarianism is consequentialist, but not all consequentialist theories are utilitarian. I could make decisions based on what I thought would have the best outcomes for myself. That would be consequentialist, but not utilitarian.

    As for the actual argument, I think it can be countered by simply arguing that having too high expectations will put many people off doing anything at all. Therefore, from a utilitarian point of view, having lower expectations/demands of its adherents and allowing them some freedom and personal pleasure will ultimately encourage more people to do *something*; the cumulative effects being much more than any one individual can achieve on their own. Sure, perhaps in future it will be possible to enhance some people so they are twice as strong, or twice as intelligent, as most 'normal' people. But they will never have the cumulative strength or cumulative intelligence/knowledge of hundreds of people.

    1. Hello Matt,

      first, sorry to you (and everyone else) for my slow response – it was a very busy week.

      I believe my argument will apply to any form of maximizing consequentialism. There's even an analogous argument for egoistic consequentialism (depending on how we cash out self-interest), but I take it that this isn't a particularly interesting form of consequentialism.

      I'm not sure I understand your counter-argument. Why should we assume that non-super people would be deterred from doing good by the knowledge that super people are obligated to do massively demanding things? Without that assumption, I'm not sure how that is supposed to work: there's no claim intrinsic to this argument that non-super people are obligated to do massively demanding things, so no worry about deterring them. All the weight falls on enhanced folk.

      1. Why does all the weight fall on the enhanced folk? If your use of 'enhanced' means something conceivable, i.e. that 'cognitively enhanced' people are twice as intelligent as normal people, then two or three normal people working together could still be as useful as the 'cognitively enhanced' person. Even though you don't expect one normal person to individually do 'massively demanding things', if some of them can work together to achieve the same goal, then presumably they have an obligation to indeed work together. If not, why not?

        A real example: just because Bill Gates is 'financially enhanced' and has decided to give away billions, does not mean other people, who are perhaps 'only' giving away thousands should not bother. The cumulative effects of millions of normal people giving away hundreds or thousands will be more than what Bill Gates can provide on his own.

        From an economic perspective, if we were to say that 'everyone *must* give away everything they earn over £20 000', then this would be equivalent to a 100% tax above this figure, and consequently would discourage anyone from bothering to earn over that amount in the first place. Whereas if you let people keep more, people will want to earn more, and we could thus end up with more wealth being generated overall.

        1. Hi Matt,

          it sounds as if you're advocating a form of "esoteric morality", à la Sidgwick: dampening down the publicly-expressed demands of consequentialism in order to make it more likely that people will actually do at least some of what they should.

          A distinctive feature of such a view is that it creates two levels of morality: there is what we tell the common folk they ought to do, and there is what they *really* ought to do. The latter is recognized only by theorists.

          I take it that someone who finds the considerations raised in my post troubling will remain troubled, as the second, purely theoretical, level of morality implies that what super-people *really* ought to do is just what is suggested in the post (even if we can't say so publicly). The objection, if there is one, is to the idea that morality really can demand such things, even esoterically.

          1. No, you aren't getting it. From a utilitarian perspective you don't 'ought to' – there is no duty – and creating a two-leveled distinction like the one you do is a false dichotomy. There is no 'what you should do' and then 'what you should *really* do'; there are varying degrees of worse, bad, good, and better, within a spectrum, a gradient. From a utilitarian perspective you have to find a balance between the good done to others and the good done to you. Choosing exclusively either of those two is a road to perdition. You can't satisfy others if you're not reasonably satisfied in life yourself; total altruism works against utilitarianism.

  2. Anthony Drinkwater

    Thanks for your interesting post, Regina. Of course, as Matt points out, you meant utilitarianism .
    But I doubt that Matt is right in his "simple" rebuttal of your argument. It seems to go a
    Step 1 Utilitarianism doesn't really say that maximum happiness is the goal, because it is clear that this would be too demanding and frankly not much fun for intelligent and powerful persons (who after all, Superman-like), control much of our destiny)
    Step 2 We need to find a way to

  3. Anthony Drinkwater

    (Sorry about the above post – my keyboard seems to be suffering from a major case of publicatio precox)

    Thanks for your interesting post, Regina. Of course, as Matt points out, you meant utilitarianism.
    But I doubt that Matt is right in his "simple" rebuttal of your argument. It seems to go as follows:

    Step 1: Utilitarianism doesn't really say that maximum happiness is the goal, because it is clear that this would be too demanding and frankly not much fun for intelligent and powerful persons (who, after all, Superman-like, control much of our destiny)

    Step 2: What utilitarianism really says is that we should take into account the consequences of preaching such an idealistic and unrealistic philosophy (take into account, that is, that no-one really believes in it)

    Step 3: Infer any behaviour you like based on 1 and 2 and we'll call it utilitarianism

    Step 4: Utilitarianism is therefore true

    QED

    1. Hi Anthony,

      thanks for your comment.

      Although I of course agree that Matt's argument doesn't quite work, I'm not sure he's committed to your "Step 3". Can't the consequentialist argue that a fairly particular set of preferred behaviors follows from Steps 1-2 (plus some other assumptions)? For instance, empirical psychology may tell us which sorts of demands people are most willing to follow. Perhaps some particular preferred actions then follow.

      1. Anthony Drinkwater

        Hi Regina,
        The point is that once you accept Step 2, you are in a closed, self-confirming spiral, rather like psychoanalysts who will treat any refutation of their claims as "denial" and therefore as further "proof" of their absurd pseudo-scientific assertions.
        Utilitarianism claims that maximising net welfare is the basis of (normative? descriptive? it's never quite clear) ethics. When critics demonstrate that this is frankly not the case, they accept this and go one step back to "rule-utilitarianism". When this is in turn criticised, they turn to pure consequentialism – "surely", they say, "you can't really imply that actions can be judged independently of their consequences?".
        To which there are several possible replies:
        1. yes, that's exactly what I mean, and/or
        2. ethics is not about judging individual actions, but about living good lives (or at least trying, however imperfectly, to do so), and/or
        3. we should leave room in ethics for propositions that have nothing to do with consequences, such as "there is a certain decadence in contemporary journalism" (there are plenty of other such propositions, which could of course be tortuously incorporated by utilitarians, in the same way as the Vienna trick-cyclists incorporate their critics into their universal world-view…)

        1. Hi Anthony,

          I'm not disposed to rally to the consequentialist flag, but it does seem to me that you've rendered the dialectic rather starker than it actually is. I take it that Matt aims to defend consequentialism at the level of publicly-promulgated rules, and will resist arguments at that level directly, rather than falling back to what you've called pure consequentialism.

          1. Anthony Drinkwater

            Hi Regina,
            As 87.4% of the miseries of humanity are the consequences of people defending ideological flags of one sort or another, you're, in my view, quite right not to rally to the consequentialist (or any) flag.
            And yes, my reply does render the dialectic a little "stark", but you can't expect the nuanced subtleties of contributions to Mind or the PAS.
            Will try to do better……

        2. Hi Anthony,
          Maybe we should hold the champagne. Can you at least replace "utilitarianism claims" with "some utilitarians claim"? I thought I DID make it crystal clear that I subscribe to utilitarianism – and not entirely without reservation, as per my previous reply to Regina – as a normative ethical system. Indeed a lot of people confuse normative and descriptive (a more tempting thing to do if you regard normative statements as truth-apt), but I generally try not to be one of them, except when I'm REALLY upset with someone! Judging from your previous comments here it's unlike you to descend to the cheap rhetorical device of lumping your opponents all together and then caricaturing their position.

          By the way I don't really see a conflict between the idea that we should focus on living good lives rather than judging individual actions and the idea that ultimately we should be trying to maximise net welfare. Hence my suggestion that virtue ethics could be seen as an application of rule utilitarianism. To your point 1., as a utilitarian moral subjectivist I would say, "OK, whatever floats your boat, I just don't find that very responsible", and to your point 3., and in particular the statement "there is a certain decadence in contemporary journalism", I would say, "So what?".

          1. Anthony Drinkwater

            Hi Peter,
            Sorry to have used cheap rhetorical devices. I think I'll desist from comments for a while. But I'll order the champagne nonetheless.

  4. It reminds me of Peter Railton's "Alienation, Consequentialism, and the Demands of Morality" (1984).
    Perhaps from a superhero's point of view, truly committed in his own personal life to making the world a better place, accepting the duties associated with his potential may not be experienced as an alienating process.

    I agree with Matt's argument that too highly demanding a norm might turn out to be counterproductive (Singer addresses such a concern when setting the threshold of a decent donation).

    Isn't it also likely that, if a Cognitive Enhancement Pill were designed, a very large number of people might have to take it – at least all of those who could afford it (though if taking it were mandatory, one may assume it would be free) and whose condition made them appropriate subjects of such obligations (i.e. those whose capacities the pill would significantly affect)? In order to be effective, the pill must fit the greatest number of people (without losing intrinsic efficiency). If so, then it's also likely that the individual demands would be lessened. And we'd just be dealing with another taxation system: not so demanding, quite effective, and (partly and hopefully) directed toward benefiting those in need.

    In short, it seems your argument does not offer a version of maximizing consequentialism (say, utilitarianism) that such a consequentialist would endorse. Hence, the objection does not obtain. It rather seems to pertain to those versions which warrant sacrificing one person for the sake of others. But even in those cases, the sacrifice is not phrased as an obligation incurred by the person to be sacrificed. Suppose you think there's a right to self-defense, or that each has a legitimate interest in preserving one's own life against external threats; then one may not demand that someone fulfill the obligation in a proper sense (see Thomson's "A Defense of Abortion").

    1. Hi Nicolas,

      I take it that Superman, in the comic, is indeed truly committed to making the world a better place, which is why he goes along with the demands placed on him. (He's Superman – they certainly can't force him to do anything!) But he nevertheless doesn't seem to find his task particularly rewarding.

      You're right, of course, that if large numbers of people employed enhancement technology, then the individual burden would correspondingly diminish. My assumption is that this wide availability would take time, and during the interval between initial availability of the technology and its widespread employment, the concerns raised here would apply.

  5. Patrick Brinich-Langlois

    It's not obvious to me that cognitive enhancement would make utilitarianism any more demanding. In your example, taking a pill would turn an academic into a brilliant hedge-fund manager. But all the other hedge-fund managers could take the same pill, and they would turn into *extremely* brilliant hedge-fund managers. No one in her right mind would keep money with a brilliant hedge-fund manager if extremely brilliant hedge-fund managers were available.

    I understand that utilitarianism's demands increase to some degree with one's ability to do good. The moral obligations of the very poor are less demanding than those of the rich. But most people in developed countries already have superpowers of a sort. A person living on an average income in the US or UK could save the lives of at least a couple of dozen people a year. And if you give equal weight to future generations, an ordinary person donating to an existential-risk charity could conceivably save millions of lives (in terms of expected value). Whether you can save a couple of people or a couple of quadrillion people a month makes little difference to what you should do in your free time. Utilitarianism says you should go all out in either case.
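
    (To make the expected-value arithmetic explicit, with numbers invented purely for illustration: if a donation buys even a $10^{-9}$ reduction in the probability of a catastrophe that would cut off $10^{16}$ future lives, the expected number of lives saved is $10^{-9} \times 10^{16} = 10^{7}$.)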

    I find the comic amusing, but it seems to me (a biased utilitarian) to be just as much an indictment of undemanding systems of ethics. How could you value one person's free time more than the lives of millions of others? I do hope that people would be more appreciative of Superman's efforts.

    1. Hi Patrick,

      Your response might apply in competitive instances, such as for hedge fund managers. (Although it assumes that enhancement's effectiveness depends on prior abilities, and that it wouldn't simply push everyone, regardless of prior ability, toward a single ceiling.) But it wouldn't apply to other instances, such as the super-strong person newly obligated to spend her time unloading aid pallets.

      You're right, of course, to say that "most people in developed countries already have superpowers of a sort". I suppose the difference is supposed to be that right now, people can plausibly say that the difference they might each make is actually very small, in the grand scheme of things. Perhaps you disagree that this is plausible. But notice that it becomes rapidly less and less plausible as people's ability to do good is enhanced. In other words, the "demandingness" worries broached by my post were certainly already waiting there for consequentialism – they just get enhanced.

  6. Michelle Hutchinson

    Maybe Superman should be a scalar consequentialist – he could agree that the best thing for him to do would be to crank the wheel, but not that that makes every other possible action wrong. Similarly, the person faced with the possibility of taking the enhancement shouldn't think in binary terms of the enhancement being the right thing to do and all others wrong, but rather understand that there's a spectrum of better and worse actions. So the consequentialist doesn't get a choice as to what the best thing to do is, but that doesn't mean they aren't doing some good even if they fail to do the most good.

    1. Hi Michelle,

      that definitely seems like a helpful suggestion. But it prompts my usual worry about such scalar sorts of consequentialism. If you accept that Action A is a good action, but that Action B is a much, much, much, much, much better action, what justification could you have for not doing Action B? Or, to put the point more bluntly: what justification could you have that doesn't deviate from consequentialism?

  7. I'm with Michelle on this. It's one thing to determine what "the best thing to do" is, and another thing to be willing to do it. I still don't think non-consequentialist moral philosophies really make any sense (they just seem irresponsible to me), but that doesn't mean we should always expect everyone to do "the right thing" in all circumstances.

    I think I lie somewhere between Matt and Anthony with regard to Matt's rebuttal of Regina's argument. I see Anthony's point that it seems to be potentially a justification for just about anything, but I think it has potential to be developed more rigorously. There do seem to be good (rule) utilitarian reasons to adopt a rule that nobody should be expected, or feel obliged, to behave like the Superman of the cartoon (which reminded me a lot of the film Les Triplettes de Belleville). I genuinely believe that there is a moral hazard in trying to be too "good" in a naively utilitarian sense: apart from anything else you set a guilt-inducing and therefore ultimately unhelpful example for others. But the keyword in that last sentence is "naively". The problem is not with utilitarianism as such, but with the naive conclusions one might be tempted to draw from it.

    1. Hi Peter,

      of course we don't want to be naive about our consequentialism! But I'm not sure how your response here is supposed to run. We're talking about a certain segment of the population: those who could utilize enhancement to become vastly better at doing good. First, why should we suppose that their doing so much more good would set a discouraging example? After all, ordinary folk will recognize that they (ordinary folk) simply aren't capable of doing so much good – it's not something for them to feel guilty about, because it's not something they could do anyway. (We don't feel guilty about not flying around like Superman, because we know we just can't do it.) Second, even if the actions of enhanced folk DID have somewhat discouraging consequences for others, it will still be a matter of weighing costs and benefits. As long as the much greater good done by the enhanced overwhelms the reduction in good done by the discouraged ordinary folk, the consequentialist calculus still endorses great demands on the enhanced. I take it that someone who finds the concerns raised by this post troubling (perhaps you aren't such a person) won't be reassured by the suggestion that, if the math just happens to run one way, the problem disappears.
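
      A toy version of that weighing, with numbers invented purely for illustration: if an enhanced person produces $G$ units of good while her example discourages $n$ ordinary folk by $d$ units each, the calculus asks only whether $G - nd > 0$. With, say, $G = 10{,}000$, $n = 1{,}000$ and $d = 1$, the balance still comes out at $9{,}000$ units in favour of great demands on the enhanced.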

      1. See my reply to Anthony and Dave below. I'm actually ambivalent as to how far moral philosophers should go in trying to define how much "good" people should be trying to do. I've been making the case that perhaps we shouldn't, and I think your reply above illustrates why. If we DO attempt to pronounce on this and make proposals that are supposed to be universally applicable, then we indeed need to ask ourselves to what extent we should be bothered by the possibility that a utilitarian (or other consequentialist) calculus might impose unreasonable demands on some individuals. I just don't think there is a right answer to this question, and this is why I'm very much a moral subjectivist first and only a utilitarian second. Certainly if I was the enhanced superman in question (but with roughly my current preferences and sensibilities) then I would not be willing to put myself through that unless forced to do so by something more coercive than utilitarian calculus. But I might be prepared to admit that it would be "better", in a utilitarian sense, if I did.

        1. Hi Peter,

          in that case, it sounds to me like you were already on board with some of the concerns animating this post. If you treat consequentialism as a heuristic that "gives out" beyond a certain point, then you're already sceptical of consequentialism as such. Since my goal wasn't to refute consequentialism (there certainly isn't an argument strong enough to do that here!) but only to stir up some worries about what consequentialism apparently entails, it looks like you're already convinced!

          1. Hi Regina,
            I think you're basically right: I was already on board with perhaps all of the concerns animating your post. Perhaps just not the title!! But perhaps I was also suspecting, and still suspect, that these concerns are also for you a motivation for your non-acceptance of consequentialism (and if it isn't, then what is?). While you're suggesting that Matt is proposing a form of esoteric morality (the superheroes ought to behave like that but we can't say so), you seem to be suggesting that out of empathy for the superheroes we shouldn't even THINK they ought to. Actually I would tend to agree, but more on moral subjectivist grounds than anti-consequentialist grounds. Better not to use the word "ought" in this context, it's too judgemental. Let's rather say that we would find such behaviour on the part of the supermen to be most admirable, irrespective of whether they later get the credit they deserve.

            But as noted in my first comment I think even this position can be criticised from within consequentialism on the grounds that such behaviour on the part of the supermen could be seen as setting a guilt-inducing and therefore ultimately unhelpful example for others ("does that mean we all have to be that good?"). The math really might run that way, and while I agree that this in itself is not sufficient to fully address your concerns (for that we'd have to be pretty damn sure the math did run that way, and how can one ever be that sure about such things?), I also suspect that some people reject consequentialism because it hasn't occurred to them that it can and in principle does incorporate such considerations.

            By the way, is part of your concern that "consequentialist logic" might lead people to *coerce* the supermen into becoming transitional utility sources? This would then go to Anthony's "joke" post, which personally I think actually *deserves* to be taken seriously. If empirical evidence was found to suggest that commitment to consequentialism (or more narrowly utilitarianism) reduces net welfare then consequentialists/utilitarians really would have a problem. And in some ways I think this might well be true. From a consequentialist perspective it is almost certainly a good thing that utilitarianism has come under attack and alternative frameworks such as virtue ethics have arisen. But maybe then, as I've suggested, such other frameworks can be incorporated into a new, revised, better version of consequentialism, which actually increases welfare. Which would be kind of cool…

  8. I like Regina's post on this. The problem I've always had with a lot of normative ethical theories is that they're good at mapping from principles to required actions, but they're not so good at bounding those requirements in terms that are meaningful for real people. I think that to lead a full life worth living you want both to experience personal joy or happiness (or whatever your favourite term may be) and to do (some) good. I have a problem with the open-endedness of many normative ethical theories just because their demands don't seem finite: doing more good is always better, and there is no time off for good behaviour.** I think ethical theories need some sort of oppositional principle to construct a trade-off which would limit the amount of good people were reasonably expected to do.*** I'd be very grateful if anyone can point me towards anything interesting on that.

    **It's like when you set up a standing order to a charity and they see this as a green light to bug you even more. I really hate that.
    ***Personally I think one reason why economics tends to float to the top of the social sciences in policy conversations is because it formalises trade-offs (however imperfectly).

    1. Anthony Drinkwater

      But Dave, I agree.
      If you're looking for an oppositional principle, perhaps Luke 9:49-52? But I doubt that it'll help too much. Perhaps other, more well-read readers will have other suggestions….
      Could it be that normative ethics gives impossible but nonetheless necessary requirements, and that we mortals just do the best we can?

      1. Hi Anthony, I'm not sure how well "impossible but […] necessary requirements" play in, say, Whitehall.** Or at the Aylesbury Rotary Club or other such important venues for the consideration of weighty moral questions.

        I fear that if philosophers say "job done" the moment they've mapped principles to requirements, without some sort of scaling function, then they leave the important bits – like how *much* to do – to others, which usually means economists (since they *do* provide trade-offs). Maybe that's ok with you guys, since you have tutorials to give and exams to set, but I, as a consumer rather than producer of philosophical advice, find it unsatisfactory. It renders philosophy analogous to navigating by a dodgy satnav: it gives me a direction to go in but no idea of what distance to travel. Just because they're so open-ended, I see a lot of ethical theories as being completely unsustainable, since they lack limits. I'd like to see something like a liberty principle that has to be balanced against the moral yoke you're imposing, so I can answer questions like "what is the benefit I get back in return for subsuming my moral choice to a principle or theory of someone else's design?" Or "what do I get out of this flavour of doing good such that it can work sustainably for me?"

        **Though on the two occasions I've worked in the Civil Service I've found myself reading a lot of Kafka; it's not as though Whitehall is exactly deaf to either the aesthetic delights or the spiritual comforts of absurdity, but it tends to internalise these rather than reflect them in policy…

        PS, since I'm on here, does anyone know of a paper comparing the intergenerational implications of the current overconsumption of antibiotics to current overconsumption of fossil carbon fuels?

        1. I certainly subscribe to the notion of achieving personal joy and happiness while at the same time doing some good. That's basically what I want for myself. But I'm not sure we should be looking to (moral) philosophy to tell us how much good to do. What it can do is to help us refine our understanding of what doing good actually means, and what it might involve. Then it's just up to us to decide how far we want to go.

          The issue of why economics tends to float to the top of policy discussions is an interesting one. I'm not sure it's really because it formalises trade-offs, but it's an interesting theory. Economics also has its limitations however, not least precisely because it sometimes leads people to erroneously believe that there is an objective (market-based) way to quantify the value of things (and thus make trade-offs), when in truth the market is just what happens when a lot of people make subjective choices. Nothing objective about it at all, which is why we have booms, busts, bubbles and bizarre fashions.

          1. Peter wrote: "I’m not sure we should be looking to (moral) philosophy to tell us how much good to do."

            If philosophers can't actually provide this sort of guidance then (1) who can? (2) doesn't that seriously undermine your relevance to society?
            As I said above, I see the "how much" issue as being one of two components of a vector for practical moral guidance.

            I used to work at a Treasury, next to a very clever guy who routinely claimed that he as an economist was simply a dry, positive analyst, "a behavioural scientist". But then he'd grab his coat, pop over the road and tell the Minister of Finance that we should do X. While I don't think philosophers would conflate the normative and the positive in this way, I think it's a mistake of similar magnitude for academic philosophers to systematically prefer the satisfying sterility of working out what goodness might be to the messy practicalities of actually suggesting what sorts of balances between the interests of self and other might be reasonable. [I could suggest another analogy here with economics – academic economists are far more impressed by an analytic (therefore general) solution to a highly abstracted problem than they are by a numerical solution to a more accurately described problem (which is of course less general).]

            Don't you think a retreat into abstract generalities and formalisms (however intellectually satisfying because of their "neatness") plays to the sorts of people who argue that the humanities are a bit of an irrelevance, the kind of area in which the public spends money but receives little in return?

        2. Anthony Drinkwater

          Hi Dave,
          I agree entirely with Peter (it had to happen sometime).
          A couple of brief points:
          1. Sometimes I wonder who does, consciously, make ethical decisions based on systems or philosophies. I doubt that Whitehall or even the Aylesbury Rotarians do. It could even be maintained that moral philosophy (fascinating as it is) is always rear-view mirror or speculative stuff – commenting on what might/should have been, or what might possibly be the case, rather than practically helping us lead better lives, or make better decisions. (I sense that this is what is behind Francesca's comment: https://blog.practicalethics.ox.ac.uk/2011/07/when-its-unethical-to-be-a-well-published-academic/#comment-4146 )
          2. On counsels of perfection, if you don't object to a far-fetched analogy, it would be with music. Only a few understand the utter impossibility of really playing in tune, and debating this issue has rendered men and women mad for centuries. What do musicians do? They can't agonise every time over whether they should be playing a third pure, or Pythagorean, or Pythagorean-plus for emotional intensity, or tempered (and if so, which temperament?). They just do the best they can. This is perhaps fairly typical of the human condition……

          1. Given that so many MPs and others involved in politics study PPE at university, I'm positive that the ideas of some philosophers, such as Rawls and Nozick, are having a substantial influence on government policy. As an example, I find it hard to believe that, given the current financial problems, the UK government would have spent anywhere near as much on international development and aid if it wasn't for a sense of duty to help the worst off. I doubt there would have been an intervention in Libya, or any attempt to avert climate change, if Western politicians were solely trying to benefit themselves or their own country.

            As someone who has often struggled over guitar and violin intonations, I love your analogy.

          2. @Anthony..well this is definitely a champagne moment!!!

            With regard to Dave's point about the relevance of moral philosophy, I think there must be some merit in defining what "good" might mean even if one refrains from pronouncing on how "good" people should try to be. I think this is also related to Regina's reply to my earlier comment, however, to which I shall now try to respond.

          3. Now I've replied to Regina I want to say more about this issue of the relevance (or lack of it) of moral philosophy. Personally I think it CAN help us practically to lead better lives, not least by exposing the various fallacies and logical inconsistencies that moral discourse (including much political debate) often involves. For example, I believe that moral realism is a fallacy, an illusion lacking any shred of evidence, and I also have an empirical belief that the more people realize this the happier we will all be. I think the practice of moral philosophy, by which I basically mean the application of dry, objective analysis to moral issues, makes this more likely, precisely because it is when we strip away the emotion that we realize that moral realism has no basis in evidence, and that normative statements express nothing more or less than the values of the person making the statement.

            Where I do agree with Dave, though, is that moral philosophers (particularly those of the moral realist persuasion) may be at risk of conflating analysis with advocacy, as his economist colleague at the Treasury was doing. It's not that philosophers and social scientists (including economists) shouldn't engage in advocacy, but they should be aware that they are no longer doing philosophy or science respectively. A scientist can say, "If you do X then Y will result," and a philosopher can say whether a moral argument is logically coherent, but neither can say entirely as scientists or philosophers whether or not X is the right thing to do. For that you need to know what it is you want to achieve, and that is neither a scientific nor a philosophical question. It's simply a choice.

    2. Hi Dave,

      if you haven't read it already, you might enjoy Susan Wolf's paper “Moral Saints”, which I had in the back of my mind while writing this post. Wolf argues that traditional normative theories (both deontological and consequentialist) demand moral goodness to a degree that crowds out other sources of value in life.

      One normative theory that is supposed to avoid this is virtue ethics. On the traditional Aristotelian conception, doing-good-things-for-others is just one of many virtues we should aim to develop, and a good person develops these virtues in concert with one another. That doesn't give us an immediate or obvious limiting principle on the exercise of any particular virtue, but it does bring to the forefront the need to construe doing-good within some sort of limitations.

      1. The problem I have with this kind of argument is that if we really need to construe doing-good within some sort of limitations, then it is presumably because we think it would be better to do so. But why? Why does it matter if the demand for moral goodness crowds out other sources of value in life? What is it about this prospect that we dislike? Is it because we think people are less likely to be happy as a result? If so, then a sound (rule) utilitarian approach, far from crowding out other sources of value, will encourage them. The maths just HAS to run that way, assuming that happiness is what we are trying to maximize. And if it doesn't make people less happy, then I really don't see why it matters.

  9. Peter wrote: "if we really need to construe doing-good within some sort of limitations, then it is presumably because we think it would be better to do so. But why? Why does it matter if the demand for moral goodness crowds out other sources of value in life?"

    Because moral goodness is only one flavour of finery, and is not usually the flavour that most makes our lives worth living. Other bits of finery (more personally rewarding, perhaps) might include aesthetic bliss, deep and prolonged emotional fulfilment, and so on. In many instances these things are essentially orthogonal to moral goodness in terms of value – I can experience sorts of aesthetic bliss reading various authors without affecting the sum total of suffering in the world – but are in competition with moral goodness in terms of time: I can read Nabokov or work to improve the lot of the bottom billion (say), either by collecting money or by working harder so I can donate more myself. I have to make a choice between pursuing these essentially incommensurate values (aesthetic bliss and moral goodness), and I find it hard to get good guidance on how to think about that choice.

    It isn't obvious to me that utilitarianism is particularly promising in terms of enhancing "other sources of value". Because people work from welfare functions that have something resembling a singularity at zero income, the vital interests of the very poor pretty much always dominate utilitarian calculations. The place where the next dollar buys the most welfare is somewhere among the "bottom billion." Given the uses to which that dollar will be put, it doesn't seem obvious to me that this approach to goodness will do anything for those other sources of value. [I doubt even whether utilitarianism will do anything to reduce overall misery or improve welfare, since one of the main features of post-war extreme poverty has been staggeringly high population growth – it may be that an emergent feature of utilitarian approaches to poverty is to create ever more miserable people, ever more in need, which I suppose brings us back to the desperation of Regina's Superman.]

    1. "I doubt even whether utilitarianism will do anything to reduce overall misery or improve welfare, since one of the main features of post-war extreme poverty has been staggeringly high population growth – it may be that an emergent feature of utilitarian approaches to poverty is to create ever more miserable people, ever more in need"

      Hi Dave. Could you please explain why you think this?

      1. Hi Matt.

        Sure – my thinking is a bit like this. Imagine a social welfare function that is logarithmic in income (and consumption). Then the derivative of utility with respect to income goes as 1/income, so your highest marginal utility associated with the next dollar you spend on doing good is at the lowest incomes. With a finite amount to spend the most good you can do is by improving the lot of the very poorest. Now assume that the representative agent who is receiving this dollar would like to invest it in their own future well-being. Imagine they could do this by putting it in a bank account, pension fund, or other financial product that gives them some buffer against trouble (famine, incapacity). Or they could "invest" by having a large family, since adult children also offer a buffer against these sorts of troubles. It may be the case that the agent prefers the former strategy (call it B for banking) over the latter (C for children), but B is reliant on the integrity of the relevant financial and government institutions. In the poorest countries of the world, the expected return from strategy B is low, since expectations regarding this integrity are very low. Hence even if you think B is preferable to C in a world where both have the same probability of failure, you might well strongly prefer C in a world where the odds of B failing are very high, and the odds of C failing are moderate. The children created by C may live at similar levels of consumption, income and well-being as their parents. The agent/couple making the decision may be better off from C rather than from B, but the number of people in the category of those most urgently in need has actually grown, rather than reduced, through your intervention. And this strikes me as a reasonable description of the way aid/development has worked over my lifetime. [I note that of the 50 countries with the highest total fertility rates, 45 are on Paul Collier's list of the countries that make up the bottom billion.]
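
        If it helps, here is a minimal numeric sketch of that B-versus-C choice (a toy model only: the buffer value and the failure probabilities below are invented to show the mechanism, not calibrated to any real data):

        ```python
        # Toy model of the insurance choice faced by a very poor agent:
        # strategy B (banking) vs strategy C (children), under log utility.
        # All numbers are illustrative assumptions, not empirical estimates.
        import math

        def expected_utility(buffer_value, p_fail, baseline=1.0):
            """Expected log utility of consumption, given a buffer that
            pays off with probability (1 - p_fail)."""
            u_with = math.log(baseline + buffer_value)   # buffer survives
            u_without = math.log(baseline)               # buffer fails
            return (1 - p_fail) * u_with + p_fail * u_without

        buffer = 5.0           # value of the buffer if it pays off (assumed)
        p_fail_banking = 0.8   # weak institutions: savings often lost (assumed)
        p_fail_children = 0.3  # adult children usually there to help (assumed)

        print(f"E[U | banking]  = {expected_utility(buffer, p_fail_banking):.3f}")
        print(f"E[U | children] = {expected_utility(buffer, p_fail_children):.3f}")
        # With these numbers C dominates B, even though C adds new people
        # living at the same near-subsistence consumption level.
        ```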

        Now your rule-utilitarian could of course specify a procreation policy, but this strikes me as thorny. [Imagine you tied your donation to the decision to have a family of fewer than 2 children; you'd then create the incentives for the agent to make decision B, but you'd be failing to help those who make decision C, who are likely to remain the very poorest, hence the most in need etc.] More promising might be a rule that invests in improving the odds of the viability of B, but this, too, might be more difficult than you think given corruption and other governance difficulties. [ie it may be that the money you have to spend to make the expected returns from B dominate those from C may be high enough to make that, too, inefficient.]

        That's kind of how I've been thinking about it, anyway. The intervention *does* improve the lot of the agent, but it creates more agents who are in an identical situation to the agent's initial position.

        1. Thanks for the explanation. You say:

          "And this strikes me as a reasonable description of the way aid/development has worked over my lifetime."

          Are you saying you think real aid/development policies thus far have been primarily utilitarian? If not, then I don't understand why you're referring to the real world. If so, then surely you need to take a closer look at the evidence:
          (a) The population growth rate is declining
          http://en.wikipedia.org/wiki/File:World_population_increase_history.svg

          (b) Global poverty is decreasing, both as a percentage and in real terms
          http://go.worldbank.org/45FS30HBF0

          1. Hi Matt,

            The population growth rate in the poorest countries is nowhere near declining. Look at http://esa.un.org/unpd/wpp/Other-Information/Press_Release_WPP2010.pdf, which is the press release accompanying the recent revision of the UN Population Division figures. Have a look at the map of high fertility countries: it's not quite identical to Collier's list of bottom billion countries. Then have a look at the three curves in figure 1: population in low fertility countries is clearly nearing its maximum; population in medium fertility countries is (just) past the point of inflexion (so growth in those countries is slowing); but that (inflexion) point has not yet been reached in the high fertility countries.

            Overall poverty is decreasing because of economic growth and development in lots of formerly poor countries. But there remains a core of countries where poverty reduction isn't going as planned (the second sentence of the abstract of the paper you cite says "Extreme poverty – as judged by what 'poverty' means in the world's poorest countries – is found to be more pervasive than we thought.") It was to those countries I was referring; global aggregates are somewhat beside the point if poverty is essentially a localised phenomenon.

            Arguments to give aid to folks in these countries come in a variety of flavours, of course, but utilitarian arguments certainly figure prominently in justifications for aid.

          2. Hi Dave

            The map shows high fertility countries, most of which are in Africa. But it does not show fertility trends. Nearly every country in Africa has a declining birth rate, though of course many are still high.

            If you go to http://www.census.gov/population/international/data/idb/informationGateway.php and select 'region search', you can then select to see fertility rates across every country in Africa (or wherever you choose) over whichever time scale you choose (though for Africa there's not much data pre-~1970). Nearly all African countries have a declining fertility rate. They would perhaps be declining faster if (a) the Catholic Church wasn't so strongly opposed to contraception, (b) contraception was more widely promoted, available and accessible, and (c) women's rights and empowerment were promoted through education. I certainly don't think it necessarily comes down to economic calculations about optimum family size.

            Obviously progress in Africa hasn't been as fast as expected, but that's because of the impact of things like civil war, government corruption and instability, and AIDS.

    2. Dave, I think we may be arguing at cross-purposes here. To me, utilitarianism requires us to do whatever (to the best of our knowledge) will maximize net welfare (i.e. welfare minus suffering) over whatever timescale we can reasonably make predictions. The kind of utilitarian approaches that create ever more miserable people, ever more in need, are thus exactly the kind of thing I mean when I refer to "naive utilitarianism". It's misapplied utilitarianism.

      I'm also not sure I buy the "singularity at zero income" argument. What's the empirical evidence for this? I know that happiness tends to be correlated with wealth up to a certain point and then flattens, but there are also many other factors at play, such as relative wealth, quality of one's relationships, coherence between personality, behaviour and values, not to mention our genetic predisposition to be happy or unhappy.

      It seems to me that at some level people who oppose utilitarianism do so because they suspect it has perverse effects and leads people to be more miserable and less happy, basically for the kinds of reasons you cite. But then the problem really isn't utilitarianism as such, it's the way it's being applied. Alternatively, we may oppose utilitarianism because we just don't want to be that good. Let the poor go hang, I want to enjoy myself. And that's your choice (and mine). But that doesn't mean we should make a virtue out of it.

      1. Hi Peter,

        The "singularity" thing is based on the idea (common enough in welfare economics) that Utility~ln(income). [Actually consumption rather than income, but for the poorest folks in the world the two are very similar…] All I mean is that things (utility and marginal utility) get very steep near the origin.

        I'm actually really conflicted about ethical theories. I'm quite situationalist, which I know is not usually regarded to be a good thing… but there are real-world conditions in which I think various flavours of utilitarianism are a good way to go, e.g. fiscal policy in a small, wealthy, well-governed country (Denmark, Finland, etc.). But I think it faces huge problems when you have genuinely enormous inequality, and when you simply scale up in terms of population (i.e. numbers of people governed). At both the small and the large scales I often find virtue ethics more appealing, even if I find utilitarianism a more useful guide in the middle.

        1. Thanks Dave. I don't know enough about welfare economics to know to what extent the logarithmic relationship between utility and income/consumption is empirically based, but I would hazard a guess that it works quite well *except* at the origin. I can well believe that it fits the data well over a certain range, but utility can't be negatively infinite at zero income, we just aren't capable of experiencing infinite pain. (For that we really WOULD need to be enhanced!)

          But otherwise I take your point, and I can equally well believe that in a practical sense your "situationalist" perspective may work well. What I'm defending here is the conceptual merit of (rule) utilitarianism, which could well disaggregate into something closely resembling your framework when applied in practice. I'll try to elucidate this now by replying to Anthony and Regina above.

  10. Anthony Drinkwater

    I used to be a utilitarian, but not any more: virtue ethics is much more effective in maximising net welfare.

    1. Anthony, do you have any empirical evidence for this? Doesn't it depend on how utilitarianism is applied? To be honest I don't know much about virtue ethics, perhaps I should read up on it, but my current impression is that it can best be seen as an application of rule utilitarianism. In other words the "virtues" considered in virtue ethics would be seen as "rules" chosen precisely because of their effectiveness in maximising net welfare, with the latter remaining the overarching goal (which as far as I can see is the only absolute requirement of utilitarianism). What's not to like about that?

    2. I would echo what Peter has said. Do you have any evidence to support your claim?

      My problem with virtue ethics is how do you determine what traits are virtuous without looking at the outcomes involved? In which case, encouraging virtuous traits simply becomes a form of (rule-based) consequentialism, as Peter points out.

  11. Anthony Drinkwater

    Do I really need evidence to make a joke (sorry, I should rather call it a paradox)?
    The serious point underlying it (all jokes should have a serious underlying point, after all) is that utilitarians (and perhaps other consequentialists also) are so seduced by their arbitrary choice of the maximisation of outcomes that every other alternative seems nonsensical. So they don't see the joke.

    Jokes explained are damp squibs, but I'll take the risk:
    If I believe in virtue ethics, I do not use the calculation of the consequences of actions as my rationale, neither directly, nor indirectly (via "rule utilitarianist"-type reasoning, for example).
    Therefore if I believe in virtue ethics, I will never claim that it is more worthy of belief because it leads to an optimal net welfare.

    (I'll leave aside the justifications for virtue ethics: if you're interested, Aristotle didn't do too bad a job, and even fourth-rate novelist but at least second-rate philosopher Iris Murdoch did pretty well in "The Sovereignty of Good".)

    It is, of course, open to utilitarians to try to justify their attachment to the maximisation of utility as a normative or descriptive ethic, but I remain, for the moment, unconvinced.

    1. In your "joke" post, you didn't claim that virtue ethics is more *worthy of belief* because it leads to maximal net welfare. You simply asserted that it *did* lead to maximal net welfare, regardless of whether you find that valuable or not.

    2. In a sense any choice is arbitrary of course: that's why I consider myself a moral subjectivist. By the way I started reading "The sovereignty of good" and it actually gave me a headache, there were so many arbitrary and unfounded claims. At least utilitarians just make the one, and the rest is nicely neat and logical. Perhaps that's what attracts me about it (after all I'm a mathematician by nature and training).

      So no, I will not justify my attachment to maximisation of utility as either a normative or descriptive ethic: I do not regard it as descriptive, and my adoption of it as a normative ethic is a choice I make, not something I believe I can derive from any "higher" or more general principle. What does get me going, though, is unfounded attacks on utilitarianism on the grounds that it leads to worse results, and ultimately that's why I didn't get the joke (oh, and because we're communicating via blog posts, which means I was missing all the non-verbal cues). The fact is that people often do, WITHOUT any sense of irony, attack consequentialism on consequentialist grounds, which is not only logically incoherent ("paradoxical" is a euphemism here) but also misses its target, because such an argument can only ever refute misapplications of utilitarianism (or consequentialism more generally), not the concept itself.

      1. Just to nuance the above after some reflection: a *successful* consequentialist attack on consequentialism would of course have value as a proof by contradiction (reductio ad absurdum), and I certainly am interested in empirical evidence that suggests that a commitment to consequentialism actually has a negative effect on human welfare. But while genuine paradoxes (i.e. those that can't be resolved easily) are interesting, to say that consequentialism/utilitarianism tends to reduce human welfare smacks of defeatism. It's like when people say the only way to be happy is to forget about trying to be happy. As a tactical ploy it might have value, but ultimately you're still trying to be happy, you're just getting cleverer about how you do it. That's how we should see virtue ethics: as a tactical ploy helping us to correct certain distortions/shortcomings in our current approach to utilitarianism, but all ultimately in the service of maximising net welfare. It's not that other approaches are nonsensical, but they seem irresponsible. After all, isn't failure to consider the consequences of one's actions the very definition of irresponsibility?

  12. Anthony Drinkwater

    Hi Matt
    The post was no more complicated than "I used to be a solipsist, but gave up because I could never convince anybody else of the truth of my conviction".
    Contradiction, paradox: something has to give. For you it's virtue ethics, for me it's utilitarianism.
    (But probably I'm just a lousy story-teller…..)
