
Practical Ethics Given Moral Uncertainty

Practical ethics aims to offer advice to decision-makers embedded in the real world.  In order to make the advice practical, it typically takes empirical uncertainty into account.  For example, we don’t currently know exactly to what extent the earth’s temperature will rise if we continue to emit CO2 at the rate we have been emitting so far.  The temperature rise might be small, in which case the consequences would not be dire.  Or the temperature rise might be very great, in which case the consequences could be catastrophic.  To what extent we ought to mitigate our CO2 emissions depends crucially on this factual question.  But it’s of course not true that we are unable to offer any practical advice in the absence of knowledge concerning this factual question. It’s just that our advice will concern what one ought to do in light of uncertainty about the facts.


But if practical ethics should take empirical uncertainty into account, surely it should take moral uncertainty into account as well.  In many situations, we don’t know all the moral facts.  I think it is fair to say, for example, that we don’t currently know exactly how to weigh the interests of future generations against the interests of current generations.  But this issue is just as relevant to the question of how one ought to act in response to climate change as is the issue of expected temperature rise.  If the ethics of climate change offers advice about how best to act given empirical uncertainty concerning global temperature rise, it should also offer advice about how best to act, given uncertainty concerning the value of future generations.


Cases such as the above aren’t rare.  Given the existence of widespread disagreement within ethics, and given the difficulty of the subject-matter, we would be overconfident if we were to claim to be 100% certain in our favoured moral view, especially when it comes to the difficult issues that ethicists often discuss.


So we need to have an account of how one ought to act under moral uncertainty.  The standard account of making decisions under uncertainty is that you ought to maximise expected value: look at all hypotheses in which you have some degree of belief, work out the likelihood of each hypothesis, work out how much value would be at stake if that hypothesis were true, and then trade off the probability of a hypothesis’ being true against how much would be at stake, if it were true.  One implication of maximizing expected value is that sometimes one should refrain from a course of action, not on the basis that it will probably be a bad thing to do, but rather because there is a reasonable chance that it will be a bad thing to do, and that, if it’s a bad thing to do, then it’s really bad.  So, for example, you ought not to speed round blind corners: the reason why isn’t because it’s likely that you will run someone over if you do so.  Rather, the reason is that there’s some chance that you will – and it would be seriously bad if you did.
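
To make the decision rule concrete, here is a minimal sketch of the calculation for the blind-corner case.  The probabilities and values below are purely illustrative assumptions, not figures from the argument above.

```python
# Minimal sketch of the expected-value rule described above.
# All probabilities and (dis)values below are illustrative assumptions.

def expected_value(outcomes):
    """Sum of probability-weighted value over the outcomes you have credence in."""
    return sum(p * v for p, v in outcomes)

# Each option is a list of (probability, value) pairs.
options = {
    "speed round the blind corner": [
        (0.001, -10_000),  # small chance of running someone over: extremely bad
        (0.999, 5),        # otherwise: a minor gain from arriving sooner
    ],
    "drive slowly": [
        (1.0, 0),          # no time saved, but effectively no risk
    ],
}

for name, outcomes in options.items():
    print(f"{name}: expected value = {expected_value(outcomes):+.3f}")

# The low-probability, high-stakes outcome dominates the comparison.
print("Choose:", max(options, key=lambda o: expected_value(options[o])))
```

On these assumed numbers the small chance of a very bad outcome outweighs the likely small gain, which is the structure the examples below rely on.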


With this on board, let’s think about the practical implications of maximising expected value under moral uncertainty.  It seems that the implications are pretty clear in a number of cases. Here are a few.


1.

One might think it more likely than not that it’s not wrong to kill animals for food.  But one shouldn’t be certain that it’s not wrong.  And, if it is wrong, then it’s seriously wrong – in the same ballpark as murder.  So, in killing an animal, one risks performing a major moral wrong, without any correspondingly great potential moral upside.  This would be morally reckless.  So one ought not to kill animals for food.


2.

One might think it more likely than not that it’s not wrong to have an abortion, for reasons of convenience.  But one shouldn’t be certain that it’s not wrong.  And, if it is wrong, then it’s seriously wrong – in the same ballpark as murder.  So, in having an abortion for convenience, one risks performing a major moral wrong, without any correspondingly great potential moral upside.  This would be morally reckless.  So one ought not to have an abortion for reasons of convenience.


3.

One might think it more likely than not that it’s not wrong to spend money on luxuries, rather than giving it to fight extreme poverty.  But one shouldn’t be certain that it’s not wrong.  And, if it is wrong, then it’s seriously wrong – in the same ballpark as walking past a child drowning in a shallow pond.  So, in spending money on luxuries, one risks performing a major moral wrong, without any correspondingly great potential moral upside.  This would be morally reckless.  So one ought not to spend money on luxuries rather than giving that money to fight poverty.



19 Comments on this post

  1. Pascal's wager lives to fight another day… (though this smacks rather more of Calvin than Pascal, to me). You do seem certain about your rule: maximising expected utility. I don't buy that, since I think it leads to moral mistakes (I like hybrids between virtue ethics and consequentialism, with maybe a little bit of deontological reasoning thrown in to season the mix).

    Also – though I appreciate you're just trying to make a point – there's a false dichotomy which comes through in your third example, where you make it sound as though the options are either to max out on luxuries or to max out on fighting extreme poverty. It's not clear that either corner solution maximises human welfare/expected utility, even assuming for a moment that was a sane thing for individuals to do.

    A third thing is that I don't see any reason for me – an historically, geographically and culturally positioned entity – to assign equal moral weights across the world's population**, irrespective of my relationships with them. Very simply put – let's take your dichotomy reasonably literally – I think it would make me a morally reprehensible father: "daddy's too busy caring about some Tanzanian child he's never met (and never going to) to buy you a toy, and don't worry about the fact that all your friends have toys… just bask in the warm glow of *my* moral beneficence and be glad you have a morally superior father…" Fortunately, I don't believe I'm morally required to be that guy.

    **Especially in view of the fact that virtually no one else does this (I think reciprocity is important).

  2. Sorry – just wanted to clarify this: "It’s not clear that either corner solution maximises human welfare/expected utility." Obvious response is to scale your effort so that you *do* maximise utility, but this is actually pretty tricky in the presence of uncertainty about how to (measure and) deliver benefits and – especially – in view of the incentive effects of this sort of approach. [People don't work as hard if they/their families don't get to internalise the benefits. For obvious (and rational, if you are anything other than a very specific sort of consequentialist) reasons…]

  3. Patrick Brinich-Langlois

    Many years ago I read one of the best-selling books by the right-wing radio host Rush Limbaugh. (Having been raised by liberal parents, I was trying to be open-minded.) Two passages made a big impression on me. The first was his claim that there are more trees in the US now than there were before European settlers came (I don't think that's true). The other one was a thought experiment: Imagine that while you are hunting you hear a rustle in the brush. It is most likely a deer, but you're not sure: there's a small chance that it's a human. Should you shoot? "No," I thought. "Killing animals for sport is wrong!"

    His analogy was to abortion: given the widespread disagreement on the issue, we should take the cautious approach and avoid abortion whenever possible. It's hard to say what's wrong with this reasoning, unless there is some systematic bias or major oversight, of which abortion opponents aren't aware, that leads people to believe that abortion is wrong. I think this might be the case with regard to abortion—but I'm not sure!

    1. To reduce abortion, society can increase health education greatly and provide easy access to birth control. Social messaging that reminds sexually active people that they can readily avoid pregnancy by use of any of a number of methods will lead to much less abortion.

  4. Will,
    I think you are right when you talk about risk analysis of empirical possibilities, though I think it would clarify our thinking if we explicitly distinguished the concept of risk from that of consequence. To give a trivial example, the risk of my car's brakes failing might be the same as that of the engine failing, but I will pay more attention to servicing my brakes because the consequences of failure are much more serious.
    In these empirical cases one should note that the risks and the consequences are verifiable empirically.

    Turning to the cases of dealing with "moral uncertainty" that you cite, how, when, and by whom or what process will the risks and consequences of our actions be definitively revealed? By God (which one?) on judgement day? By a committee of international ethicists? By personal intuition or revelation? By a new generation of Cray Super-computers equipped with AI?

    In short, I think Dave Frame is quite right in stating that your argument is simply a reformulation of Pascal's wager.

  5. Hi,

    Thanks for these comments.

    1. One of the most common responses to this argument that I get is that "it's just Pascal's Wager again!" But this is only a good response to the argument if we suppose that my argument fails for the same reason that Pascal's fails. So I'd be interested to know: what do you think is the shared failing of my argument and Pascal's? (I could describe the many ways in which it is not like Pascal's wager, but I don't want to preempt you).

    2. Dave: Another interesting consequence of taking moral uncertainty into account is the way we should treat friends and family vis-a-vis strangers.

    Suppose you have partial belief in two moral views:

    Partialism: It's important to help strangers, but it's much more important (e.g. 10x as important) to help friends and family and co-nationals (etc).

    Impartialism: It's equally as important to help everyone.

    It seems right that we should be uncertain between these two views; whereas we should give vanishingly small credence to the idea that strangers are more important than friends and family.

    If so, then the actions that will maximise expected value are those that treat friends and family as somewhat more important than strangers: if you are acting rationally under moral uncertainty, you will act as if friends and family are of more value than the impartialist theory thinks, but of less value than the partialist theory thinks.

    So acting rationally under moral uncertainty means that sometimes you ought to give your child a smaller benefit over giving a stranger a larger benefit. The precise weighing will depend on exactly how likely you think partialism and impartialism are.
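
    As a rough illustration with assumed numbers: on 50/50 credence between the two views and the 10x weighting above, the credence-weighted weight of a unit of benefit to family is 0.5*10 + 0.5*1 = 5.5, versus 1 for a stranger. Here is a short sketch; the credences and benefit sizes are assumptions, and only the 10x weighting comes from the comment above.

    ```python
    # Sketch of expected moral value under uncertainty between partialism and
    # impartialism. Credences and benefit sizes are illustrative assumptions.

    credences = {"partialism": 0.5, "impartialism": 0.5}  # assumed 50/50 split

    # Moral weight each view gives to one unit of benefit for each beneficiary.
    weights = {
        "partialism":   {"family": 10.0, "stranger": 1.0},
        "impartialism": {"family": 1.0,  "stranger": 1.0},
    }

    def expected_weight(beneficiary):
        """Credence-weighted moral weight of benefiting this person."""
        return sum(credences[view] * weights[view][beneficiary] for view in credences)

    options = {
        "smaller benefit (3) to your child": 3 * expected_weight("family"),     # 16.5
        "larger benefit (10) to a stranger": 10 * expected_weight("stranger"),  # 10.0
    }
    for name, value in options.items():
        print(f"{name}: expected moral value = {value}")
    print("Choose:", max(options, key=options.get))
    ```

    On these assumed numbers, the smaller benefit to your child wins; different credences would shift the trade-off point.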

  6. Hello Will,
    I believe your argument fails for the reasons I gave in my first reply (and to which you have not responded, but that's your right).
    Regarding Pascal's wager, there are many reasons why it fails. To cite but one: Pascal imagines the possibility that God exists, so (in order to play safe on one's deathbed when there's nothing left to lose) the wise call is to believe. However, it is equally rational to imagine that the Devil has overthrown God as master of the universe and the after-life (one might even conclude after looking at 20th-century world history that this is a more plausible hypothesis): in which case one would have a lot to lose in proclaiming allegiance to the now deposed ruler of the heavens. Which way will you bet, Will? Rien ne va plus – no more bets…

  7. Hi Will, thanks for the reply.

    I think (1) turns on the "moral risk", ie the consequences times the probability. Imagine a distribution of possible moral statuses of killing an animal, x(i), where all the (i) different values of x are the different possible moral significances of "killing for food". And imagine a probability distribution such that P(i) is the probability of being in state i. Then the moral risk associated with being in state i is MR = P(i)*x(i). I think what you'd need to hold for "Will's Wager" to hold would be that considerations about the tail dominate the analysis: that is, you need basically unlimited downside risk at some value where i > threshold (say the i of maximum consequence, i = imax). This is how it is with Pascal's wager, since your soul burning in hell forever would clearly suck, forever. This is also how it is in Weitzman's Dismal Theorem**, where there is (allegedly, since this is conceptually contestable) a singularity in welfare if utility goes to zero. In these cases, it's the unlimited downside that drives the risk-averse result; that is, x(imax) is huge, such that P(imax)*x(imax) > some decision-relevant threshold. Now in the case of killing a cow for dinner, I'm not sure that the consequences are so severe as to dominate and force me into that result. Even if x(imax) = some flavour of unwarranted killing, that's some way from the everybody-loses-everything world of Weitzman or the eternal suffering of Pascal. And I assign low probability to that world. So for me, that tail isn't so severe as to dominate the analysis. So in short, I see the common feature of you, Weitzman and Pascal as being the repugnance of the worst outcome driving the risk assessment. I just think it isn't obvious that this is a very portable result, and where it's not, it's not clear that one is being "morally reckless" as you allege in your three examples.

    Re (2) I actually have a pretty good idea how my weighting function varies with distance from me. And I'm not terribly uncertain about it. I *reject* impartialism at the individual scale. That is, having thought about this a lot over the last ten or so years, I assign it zero weight since I don't count it among the views over which I might be uncertain. You might be, but then you might also assign non-zero weight to the existence of the flying spaghetti monster and other entities that I also reject. I don't believe that these beliefs entail any normative uncertainty for me as an individual. [They may, if we are a community, acting as a community or institution. But that is a separate issue.]

    **At least in its original formulation.

    1. Sorry – I meant "where all the (i) correspond to different interpretations of x, the possible moral significances of 'killing for food'". You get the picture – all the "i"s are the different plausible stories over which we are uncertain. They each have a moral significance (x(i)) and a probability (P(i)). It's just a recasting of normative uncertainty into a risk space.

      [Am I the only one who finds it bloody hard to proof-read on this blog?]
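
      To make this recasting concrete, here is a short sketch of that risk space. The probabilities and the disvalue scale are illustrative assumptions; only the MR = P(i)*x(i) framing comes from the comments above.

      ```python
      # Recasting of the notation above: each state i has a moral significance x(i)
      # and a probability P(i); the moral risk of state i is MR(i) = P(i) * x(i).
      # The disvalue scale and probabilities below are illustrative assumptions.

      states = {
          "permissible":        {"P": 0.70, "x": 0.0},
          "minor wrong":        {"P": 0.25, "x": -1.0},
          "ballpark of murder": {"P": 0.05, "x": -1000.0},  # the contested tail state
      }

      def moral_risk(state):
          return state["P"] * state["x"]

      total = sum(moral_risk(s) for s in states.values())
      tail = moral_risk(states["ballpark of murder"])

      for name, s in states.items():
          print(f"{name}: MR = {moral_risk(s):+.2f}")
      print(f"total expected moral value = {total:+.2f}")
      print(f"tail's share of the total = {tail / total:.1%}")  # does the tail dominate?
      ```

      On these assumed numbers the tail term dominates the total; shrink x(imax) or P(imax) enough and it no longer does, which is the portability worry raised above.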

    2. In general, the fact that most of the benefit from doing some action comes from avoiding a tail risk does not imply that the act can only be justified on Pascalian grounds. Many people buy life insurance so that their family is protected from certain tail risks, but this is not objectionably Pascalian. People who run nuclear reactors spend a lot on safety to protect from tail risks, but this is not objectionably Pascalian.

      Things get objectionably Pascalian, I'd say, when the probabilities get very, very small and the risks remain extremely large, or perhaps infinite. I do not think this is true of Will's examples. Risks can be under 1% probability without being objectionably Pascalian (consider life insurance and nuclear reactors again). More to the point, if you thought there was a 1% chance that buying an expensive dinner would mean leaving a kid to drown in a nearby pond, it would not be objectionably Pascalian to refuse to buy the expensive dinner. Likewise, if you think there is even a 1% chance that Singer's view is correct (at least for cases where you haven't already given away a large portion of your income), it is plausible that you should refuse to buy an expensive dinner, provided the funds could be used to save a life. (It probably costs more than the price of a dinner to save a life, but that doesn't really affect the philosophical point.)

      I believe it would be overconfident to assign less than 1% probability to any of the moral controversies that Will has described. Moral disagreement implies a lot of moral error, and it would be unreasonable to be confident in our own positions. Moreover, the processes that lead us to our moral convictions, especially on these issues, seem especially unsound (coalition pressures and such).

      Anyway, I am inclined to think that Will is right about all of this, and that comparisons with Pascal are out of place. Such charges would only be justified if you think you have overwhelming evidence, putting the odds of strong animal welfare views, pro-life views, or Singerism about charity at probabilities on the order of 1/1000.

      1. You are absolutely right, Nick, about managing risk in the real empirical world – although I would still advocate distinguishing risk (= possibility) and consequences (how serious they are). However, in the real world it is possible to gather evidence to assess both risks and consequences. Even in cases where there is serious dispute between experts, such as nuclear power, and where not all the evidence is currently available, at least we know what the evidence would be like. We will, one day, know who was right.
        What sort of evidence will count to assess the risks that abortion is wrong? As I replied earlier to Will, there seems to be a major difficulty here, unless you really believe in divine revelation, judgement day or some other definitive arbitration mechanism that all would accept.

        1. PS: I hope you're really warmly dressed whilst keeping a watch for drowning children in your local park. You never know what might happen: catching pneumonia is a risk which could have serious consequences…

      2. Nick wrote: "I believe it would be overconfident to assign less than 1% probability to any of the moral controversies that Will has described."

        I'm not sure you're licensed to set other people's probabilities – in the standard interpretation of probability these ought to be subjective degrees of belief. You can mount an argument to the effect that "in line with [Nick's] reading of the literature, it would be overconfident to assign less than 1% probability…" but I'm not sure this is a reflection of anything other than your reading of the literature. I don't – as an independent moral agent – see why I'd be bound by your floor probabilities any more than I'd be bound by Rick Santorum's. [This is not to say that I'd assign the same weight to your arguments that I would to his; this is just to say that I don't conceive of moral reflection as something over which someone else has a licence to set my subjective degrees of belief.]

        At an individual level I find this general line of argument unconvincing, for reasons that have to do with the status of expected utility as a principle for historically-located individuals. [See next paragraph.] But it may lead to some interesting conversations regarding communities, where the communities are split regarding moral questions. One could object to the meaningfulness of subjective probabilities in such a context, but there are actually some interesting arguments about intersubjective probabilities that might be relevant (see towards the end of Donald Gillies' Philosophical theories of probability). [I can expand if anyone's interested…] I could see it as an interesting way of exploring moral disagreement (even though that isn't identical with uncertainty). It might be fun to consider live political issues in this regard, but I don't see why it's all that valuable at the individual level.

        There *are* moral consequences over which I am uncertain, but for the most part I think this is dominated by my uncertainty about what sort of moral principles I ought to use: as I say, I'm not a consequentialist – even of the rule variety – all the time, since I find some features of deontological and virtue ethics valuable, decision-relevant and intuitively preferable, reasonably often (several times a day, at least). And these considerations lead me to assign zero weight to what Will called impartialism because I think it is – when you integrate those considerations – unjustified. The idea that everyone in the world is equally deserving of my consideration strikes me as lacking justification. It's just a cognitive bias (anchoring) – just an artefact of the fact that we intuitively focus on splitting something X ways if we have X people. But going halvsies in a pizza with your mate is weak justification that is easily trumped by other considerations (eg more for her because she's hungry or more for me because I paid for it). I think it would be downright nasty to those close to me to apply impartialism all the way down. My parents, for instance, invested hugely in me with their time and love and consideration – they gave my welfare a high weight so that I might flourish. Dude in Tanzania didn't. If you think reciprocity matters, this difference is morally relevant. There *may* be certain senses in which my proverbial dude in Tanzania has certain moral calls on me that my parents do/did not (though I remain kinda sceptical). What I utterly reject is the idea that I should ignore all potentially morally relevant rules/principles/heuristics (eg considerations of virtue ethics/duties of care/reciprocity/etc) except the maximisation of expected utility from my next dollar. Doing so would not, in my moral universe, make me a great guy. It would make me a morally narrow-minded guy, a poor son, father, husband and dude generally.

  8. Hi Will, there's a bit of a slide in the argument between what there's a "reasonable chance" of, and what one "shouldn't be certain" of, isn't there? The reason I shouldn't speed round blind corners is not merely that I shouldn't be certain that doing so won't kill anyone, but rather because there is a reasonable chance that my doing so will kill someone. After all, I shouldn't be *certain* that my choosing to drive at 20mph on a straight road on the school run won't kill anyone. But surely you don't want to claim that I should desist from the school run altogether.

    So you need to establish that there's a "reasonable chance" that abortions and eating animals are in the same ballpark as murder, and that spending money on luxuries is in the same ballpark as walking past a child drowning in a pond. And I'm afraid that I see no good reason to think this is true.

    Moreover, in the murder case and the other example, part of what makes the overtly wrong acts very wrong indeed is that they are intentionally committed wrongs. If I inadvertently committed similar acts, this would be more like manslaughter or accidental death than murder. And most of us think this would be much less wrong.

    I've been practicing with my garage band, and I've appointed you as my agent for booking our first gig some time in the summer. Most likely, there'll only be a couple of dozen people in the audience. But you don't want to have to turn away people at the door – and if we were to get really, really popular (via social networking) in the meantime, you might possibly have to turn away hundreds of thousands of fans. So will you be booking us into Wembley Stadium?

    1. Simon wrote: "I’ve been practicing with my garage band, and I’ve appointed you as my agent for booking our first gig some time in the summer. Most likely, there’ll only be a couple of dozen people in the audience. But you don’t want to have to turn away people at the door – and if we were to get really, really popular (via social networking) in the meantime, you might possibly have to turn away hundreds of thousands of fans. So will you be booking us into Wembley Stadium?"

      Nice – I think that's the most upbeat reading of the precautionary thing I've ever read. I don't see quite why Will has chosen examples that seem to rely on precautionary approaches – basically Will's angle seems to be, "if X could be really bad, and you can't rule out X, then you're required to avoid X." But this isn't obviously true since (1) there may be things which I can't formally rule out but to which I assign zero effective subjective probability**; (2) "Not X" might also be really bad, and have unlimited downsides, depending on exactly how the cookie crumbles. That's why I say I don't think that this line of reasoning is going to be very portable.

      **Few regular readers of this blog would assign biblical literalism a non-zero subjective probability in terms of their decision/daily routines. But we cannot rule it out formally.

  9. Anthony:

    1) I don't see why you keep insisting on the importance of distinguishing between "risk" and "consequence". Will explicitly distinguishes between these two concepts in explaining what maximizing expected value comes to.

    2) Why does it matter whether the consequences of one's action can be "definitively revealed"? Even in cases of empirical uncertainty, the present subjective value of an action (i.e. the kind of value that Will's talking about) is in no way dependent upon which state of affairs actually obtains. Certainly, it is independent of whether anyone will ever *know* which one obtains. Is the worry that if we can't empirically verify moral claims, then those claims are false, or meaningless perhaps? If so, then this is a worry about the propriety of giving moral advice generally, not giving moral advice specifically in conditions of uncertainty. Is the worry that, if we can't empirically verify moral claims, it doesn't make sense to talk about evidence for/against them, and hence, that it doesn't make sense to talk about their probabilities? If so, then let me know; responding to this will require a separate post. If it's not either of these, then what's the worry?

    Simon:

    1) When you talk about "intentionally committed wrongs", do you mean cases where I intentionally do A, with knowledge that A is wrong, or cases where I intentionally do A, with knowledge of the features that, as the correct account of morality would have it, make A wrong? If the former, then I don't think an act's being an intentionally committed wrong, as such, goes towards making that act wrong. Caligula's acts are no better for the fact that he thought they were right. (They are, perhaps, more rational…) If the latter, well then the people in Will's examples have the relevant sort of knowledge, so these are intentionally committed wrongs if they are wrong at all.

    Generally:

    Several people are attacking precautionary reasoning through examples. All this establishes is that there are examples where precautionary reasoning is bad. But who's denying that? As I see it, Will is arguing that some general decision principle like "maximize expected value" applies to cases of moral uncertainty, and that it follows from this that precautionary reasoning is appropriate in some cases of such uncertainty. The "it follows" part seems unassailable, so shouldn't the focus of debate be on the correctness of these general decision principles or their applicability to cases of moral uncertainty?

    1. Hello Andrew, and thank you for your comments.

      Why do I recommend distinguishing risks from consequences? Because otherwise you have muddled thinking. On reading Will's fourth paragraph several times, I acknowledge that at the beginning he comes close to explicitly distinguishing the two. However, he continues:
      "One implication of maximizing expected value is that sometimes one should refrain from a course of action, not on the basis that it will probably be a bad thing to do, but rather because there is a reasonable chance that it will be a bad thing to do, and that, if it’s a bad thing to do, then it’s really bad."
      That last non-sequitur surprises me; why, if there is a risk that something we do is bad, should its consequences be "really bad"?
      He continues: "So, for example, you ought not to speed round blind corners: the reason why isn’t because it’s likely that you will run someone over if you do so.  Rather, the reason is that there’s some chance that you will – and it would be seriously bad if you did."
      No! The consequences of running someone over are just the same, whether it is on a blind bend or on a straight. It's only the risk that changes.

      Regarding your second comment, my view is:
      A) One cannot transcribe a mathematical concept such as decision theory from the empirical to the moral world – unless one can say somehow what evidence would allow us (even if only in theory) to judge either the consequences or the risks (or both) – in Will's examples, of an action's being wrong. So far, I have not seen such a proposition, other than intuition. Intuition is fine as a spur to finding evidence to back up a hunch, but let's not dress it up as mathematics.
      B) I'm not saying that "if we can’t empirically verify moral claims, it doesn’t make sense to talk about evidence for/against them". What I am, more modestly, claiming is that if you claim that something is bad, you should at least outline what criteria you use to judge it. I think that there are plenty of possibilities of invoking evidence to support or attack a moral view. But invoking metaphysical possibilities and dressing them up as decision theory will not do.

  10. It's a cute argument but I'm not sure I buy it, for essentially the same reason as Anthony: who is going to decide that what we thought was not wrong is wrong? In other words, it smacks of moral realism.

    Still, the examples are illuminating. If we translated it into something like the following I'd be more convinced: "We might currently think that killing animals is not wrong, but if we change our minds later we're going to feel REALLY guilty." I think this type of reasoning also has more emotional power, and is thus an even more "practical" form of ethics.

    Re Dave's combination of virtue ethics, consequentialism and seasonings of deontological argument, can't we just go for rule utilitarianism? Talk of "maximising expected value" can seem awfully, coldly calculating – even to an ex-mathematician like myself – but is the real problem that we fear we're going to have to do a cost-benefit analysis every time we take a decision? Virtues and deontological arguments all seem to qualify as "rules" in this context, and seeing them as such has the advantage (from my perspective) that it is, in the end, the (likely) consequences that determine, in general, whether an action is ethical.

    Of course, even if we accept this we still need to decide whose utility we are trying to maximise (and what precisely we mean by "utility"), so Will's examples still apply. I'd just prefer to be more subjectivist about it. I still believe it's up to us to decide what we consider ethical. It's not something we "discover", like a mathematical proof or a scientific law.

  11. (reposted from 80k)

    My problem with Will's approach is that it seems to involve question begging. When we maximise expected value given factual uncertainty, we've largely picked our ethical position already – subject to clarifying what we think 'value' is.

    So to maximise the expected morality given various possible moral positions seems to be weighing them all on that same broadly consequentialist position. But if that position is right, then we can infer that any incompatible ethical view is wrong, so we don't need to compromise between them. And if it's wrong then a) we don't need to account for its dicta, and b) it can't give us a sensible guide to how to compromise between the other systems.

    If, for the sake of simplicity, we do accept that distributing our confidence among the ethical systems philosophers offer us is the right approach, we still have a couple of dilemmas.

    Firstly, how do we decide how much weight to put in each competing system? Do we give equal weight to each proposed system? (this would allow philosophers to 'cheat', by proposing several very similar systems and presenting them as multiple alternatives – and obviously doesn't give much consideration to the possibility of some systems being more convincing than others).

    Or do we weight according to expert consensus? But expert consensus among philosophers is something it's hard to swallow. Expertise in a field usually entails knowing more of the implications of certain near-universally accepted principles than a layman (and perhaps of a greater set of further facts that are relevant to them). But what principles are widely enough accepted by (moral) philosophers to grant them such expert status?

    Secondly, your examples don't seem to compare like for like. The sort of ethic which says that spending money on luxuries is bad normally treats its badness as quantifiable. It might be bad for me to only give £1 to charity, but it's worse for me to only give 50p, and better, though still bad by Singer's view, to give only £2.

    The sort of ethic which calls abortion murder typically treats its badness as absolute. No matter who it saves, killing is a total wrong, so the negative value is infinite.

    So how can you compare between them in any meaningful way, when granting *any* credence to the latter view would mean that having one abortion and giving some vast but finite sum of money to charity is still (infinitely) worse than giving no money at all but not having an abortion?

    Needless to say, I think the inflexibility of the latter view is its proponents' problem…
