
Vagueness and Making a Difference

Do you make the world a worse place by purchasing factory-farmed chicken, or by paying for a seat on a transatlantic flight?  Do you have moral reason to, and should you, refrain from doing these things?  It is very unlikely that any individual act of either of these two sorts would in fact bring about a worse outcome, even if many such acts together would.  In the case of factory-farming, the chance that your small purchase would be the one to signal that demand for chicken has increased, in turn leading farmers to increase the number of chickens raised for the next round, is very small.  Nonetheless, there is some chance that your purchase would trigger this negative effect, and since the negative effect is very large, the expected disutility of your act is significant, arguably sufficient to condemn it.  This is true of any such purchasing act, as long as the purchaser is ignorant (as is almost always the case) of where she stands in relation to the ‘triggering’ purchase.
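The expected-utility reasoning here can be made concrete with a toy calculation in the style of Kagan's treatment of triggering cases.  The threshold and batch sizes below are stipulated for illustration only; nothing in the post fixes these numbers:

```python
# Toy model of a 'triggering' purchase (all numbers stipulated for illustration).
# Suppose the supply chain restocks in batches: every T purchases triggers
# one more batch of B chickens being raised. A given purchase has a 1/T chance
# of being the triggering one, but the harm, if triggered, scales with B.

T = 25                   # hypothetical purchases per restocking threshold
B = 25                   # hypothetical chickens per additional batch
harm_per_chicken = 1.0   # stipulated harm units per chicken raised

p_trigger = 1 / T                            # chance your purchase is the trigger
harm_if_triggered = B * harm_per_chicken     # size of the triggered harm
expected_harm = p_trigger * harm_if_triggered

print(expected_harm)  # 1.0
```

On this model the small probability of triggering and the large size of the triggered harm offset each other, so the expected harm of a single purchase comes out to roughly one chicken's worth, which is why the expected disutility is significant for every purchaser who is ignorant of her position relative to the threshold.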

Arguably there are many cases that cannot be dealt with in such a straightforward way.  These are cases where a large number of acts, taken together, make the world a worse place, but none of these acts makes any negative difference on its own.  In these cases, there is no ‘triggering’ act as in the factory-farming case, and so, arguably, a straightforward expected utility calculation would be insufficient to condemn any of the individual acts.  Taking transatlantic flights or engaging in other carbon-emitting activities that collectively damage the environment are arguably like this, as there may be vagueness about when environmental damage occurs.

Plausibly, no plucking of any single hair on my scalp would make me into a bald man.  And yet, together, several thousand such pluckings would do the trick.  Perhaps there is a similar phenomenon in the case of environmental damage:  no single walk across the grassy quad ruins it, no single small carbon emission destroys the atmosphere, and so on, but many such acts are collectively destructive.  Consider a version of a more stylized case from Parfit:  there are 1000 settings on an electric torture device, which has been hooked up to a victim.  The victim can’t tell the difference between adjacent settings, but would certainly be in no pain at all if the device were at its lowest setting and would be in excruciating pain if it were cranked all the way up to ‘1000’.  Next, each of 1000 people (who, we can suppose, don’t coordinate with each other) turns the device up just one setting, leaving the victim in agony.  Each of the 1000 people can, it seems, claim that their act made no negative difference at all, since the victim can’t tell the difference between adjacent settings (we can suppose there’s no phenomenological difference whatever to the victim between adjacent settings).  It seems there is vagueness about when the victim’s pain level increases.
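The structure of the case can be sketched in a toy model.  The linear pain function and the discrimination threshold below are stipulated assumptions, not part of Parfit's case:

```python
# Toy model of Parfit's device: pain grows linearly with the setting, but the
# victim cannot notice differences below some threshold.
# (The pain function and threshold are stipulated for illustration.)

SETTINGS = 1000

def pain(s):
    """Pain level at setting s, from 0.0 (no pain) to 1.0 (excruciating)."""
    return s / SETTINGS

DISCRIMINATION_THRESHOLD = 0.005  # smallest difference the victim can notice

# No single turn of the dial makes a noticeable difference to the victim...
adjacent_diffs = [pain(s + 1) - pain(s) for s in range(SETTINGS)]
assert all(d < DISCRIMINATION_THRESHOLD for d in adjacent_diffs)

# ...yet all 1000 turns together take the victim from no pain to maximum pain.
print(pain(SETTINGS) - pain(0))  # 1.0
```

The point of the sketch is just that sub-threshold increments can sum to a difference far above the threshold, which is why each of the 1000 can claim their individual turn made no noticeable difference even though the 1000 turns together plainly did.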

What should we say about these cases that seem to involve vagueness?  Here are some options.  First, we could say that since there is no chance that any individual act would make the world a worse place, and since we can condemn such an act only if it would actually or likely make the world a worse place, no such individual act can be condemned at all.  This option is unsatisfying for the reason that, intuitively, there is something morally to be said against each individual’s turning-up of the torture device in Parfit’s case.  Second, we could condemn the individual acts without directly appealing to their effects.  For example, with Kantians or rule consequentialists, we could say the relevant moral test of an act is ‘what if everyone acted that way?’  But there are independent problems with these views, and even if there weren’t, it is intuitive that we can condemn the individual acts in these cases at least by some sort of direct appeal to their effects (i.e. it is intuitive that we can condemn these acts at least on act consequentialist grounds even if we aren’t act consequentialists).  Third, we could, with epistemicists, claim that vagueness really is just a kind of ignorance:  in fact there is a single hair plucking that would turn me into a bald man, we just don’t know which one it is; in fact there is an individual turning-up of the torture device that would increase the victim’s pain level, we just don’t know which one it is, and so on.  If this were right, it appears we could treat cases like Parfit’s and that of transatlantic flying as we would the factory-farming case – that is, as cases in which there are ‘triggering’ acts, such that we can as before use an expected utility calculation to condemn each individual act.  I confess I am somewhat sympathetic to this third option, but epistemicism is controversial.  So I will end the post with the following fourth option, to which I am also somewhat sympathetic:

In cases where it is genuinely indeterminate whether your act makes the world a worse place, you have a moral reason not to perform this act.  The fact that it’s indeterminate whether it would make the world worse itself counts against the performance of the act (to what extent it counts against it, how to weigh this against competing considerations, and so on, is a further question).  This simple thought seems attractive to my mind, but here’s a rival thought that doesn’t strike me as obviously incorrect:  if it’s indeterminate whether your act makes the world a worse place, then it’s correspondingly indeterminate whether you have a moral reason not to perform it.  A defender of this rival thought might argue that in being attracted to the ‘simple thought’, I am conceiving of indeterminacy as akin to uncertainty or ignorance (like an epistemicist), and reasoning that if there’s a chance of your act making the world worse, then you have reason not to do it.  But this isn’t what I’m thinking; I’m simply thinking that it’s worth avoiding acting in a way such that it is indeterminate whether so acting makes the world a worse place.  A similar thought seems attractive in the case of egoistic concern:  Suppose I can either undergo a process that leaves me as well off as I would have been had nothing happened, or I can undergo a second process whereby it is indeterminate whether things will go as in the first process or I will instead suffer horribly for decades and then die.  It strikes me as plausible that I have a reason to avoid the second process that I don’t have to avoid the first one.

Even if I’m wrong about this, and the ‘rival thought’ is correct, we would still face the question of what to do in cases where it is indeterminate whether you have reason not to perform an act.  I have intuitions about what to do in some such cases, at least when other things are equal:  that is, if it were between doing an act that you have determinately no positive reason to do and determinately no reason not to do, on the one hand, and an act that you have determinately no positive reason to do and indeterminately a reason not to do, on the other, it is determinate that you should do the former act.  What to do in cases where other things are not equal seems a further, more difficult, question.  For some fuller discussions of what to do in cases of vagueness or indeterminacy, see this paper, as well as this one.

At any rate, the fourth option sketched above at least offers a defensible way to condemn individual acts in ‘collective harm’ cases like Parfit’s while avoiding the problems that other options face.

(Many thanks to Roger Crisp, Teru Thomas, and Caleb Ontiveros, for their very helpful comments.)


15 Comments on this post

  1. Richard Yetter Chappell

    Hi Theron, I think it’s actually mistaken to interpret carbon-emitting cases, etc., as involving moral vagueness at all. It’s simply a matter of graded harms, which aren’t paradoxical at all — see

    Also, your characterization of the self-torturer case (as involving “no phenomenological difference whatever to the victim between adjacent settings”) is incoherent. If pairwise indiscriminability is not transitive, as in the described case, then it is thereby shown to be insufficient for phenomenal identity (which, by the logic of identity, must be transitive). See

    1. Thanks Richard. I am sympathetic to both of your points (and those are nice posts of yours!), and am thinking of my ‘fourth option’ as what I’d say if these or other non-triggering collective harm cases did exist such that they gave rise to the relevant sort of vagueness. Roughly, I think I can grant a lot to those skeptical of Kagan’s treatment of collective harm cases (like Nefsky), but still (with Kagan) condemn the individual acts in these cases at least by some sort of direct appeal to their effects.

      1. I think there probably is vagueness in the climate change case. The climate has no preferences, so there is no analogue to welfare outcomes in Richard’s overfishing example. People do have preferences, but in each case one can imagine an incrementally warmer climate in which the benefits from additional emissions have gone into improving human welfare (say of the very poorest and most vulnerable to climate change) such that they are actually better off in that warmer climate. (That may be exceptionally unlikely, but it certainly seems possible – I can easily imagine worlds at (say) 2.3C warmer than pre-industrial in which people are happier and better off than many worlds at 2.0C warmer than pre-industrial. Those worlds might involve greater levels of development in currently poor countries in the next few decades.)

        1. Richard Yetter Chappell

          Hi Dave, how is there any vagueness in the possible situation you describe? Each increment in temperature has some precise (metaphysically non-vague — which is not to say epistemically predictable!) physical implications, and those in turn will have precise implications, be they positive or negative, for sentient beings’ welfare. The only vagueness here is in the words we use to describe the outcomes — whether the possible bad outcome qualifies as a “catastrophe”, say, or whether the possible good outcome is a “great boon” or some such.

          1. Richard and Dave, do you suppose it will matter for this disagreement whether there is more to welfare than pleasure (I might desire not to be bald), or whether there’s more to goodness than welfare?

            1. Richard Yetter Chappell

              I think objective pluralist views of welfare (or value more broadly) should respect the constraint that vague terms don’t track what’s of fundamental significance. Crude desire theories may not, as you point out. Though it’s worth flagging how very *odd* it would be for a person to *really* have a fundamental desire not to be bald. More plausibly, people to whom we’re tempted to attribute this desire really have a collection of related precise (but graded) desires about how they wish to be perceived and regarded by others (and perhaps themselves), none of which essentially depends upon the mere word “bald”.

                1. Richard Yetter Chappell

                  Yes, but I think any other example will be susceptible to a similar style of reply. This is because vagueness is semantic, and words aren’t what matter (or what anyone sane fundamentally cares about).

          2. Richard wrote: “Each increment in temperature has some precise (metaphysically non-vague — which is not to say epistemically predictable!) physical implications, and those in turn will have precise implications, be they positive or negative, for sentient beings’ welfare.”

            I don’t think this is right. There are multiple inputs in the real world, and I don’t think that it’s true to claim that some precise physical climate event has precise implications for sentient beings’ welfare – those sentient beings are making non-climate decisions all the time that reduce or amplify their vulnerability to those events. So the same event could have a huge range of welfare implications, depending on a whole bunch of other decision inputs. It’s not a single-valued function.

  2. Another argument in these cases is that we are aware of the actions of the other actors and of the cumulative effect of our actions with theirs.  This gives us a reason to avoid the action, knowing the overall consequence.

    1. Thanks Caley. I think more would need to be said about what the relevant collection of ‘other actors’ is (presumably I am a member of billions of different collections of other actors, and I can be made aware of the fact that our actions taken together have various good/bad effects). Even if we satisfactorily settle this ‘grouping’ issue, would the fact that we together do harm imply that I have a reason to avoid acting, if I know that my action will itself make no negative difference at all? Maybe you have in mind a kind of rule consequentialist reason against acting in such cases (as I mentioned in option 2)?

  3. This seems to be intended solely for the academic ethicists, but if so, then the two initial cases are badly chosen. In the real world there is no consensus on one option being so obviously the wrong choice, or having negative consequences. Climate sceptics don’t think long-haul flights cause climate change in any way, not even if there are hundreds of millions of long-haul flights. For them, there is no increment or vagueness: it is all harmless. Similarly there are many who don’t accept the logic of opposition to factory farming: we could call them ‘vegasceptics’. Farm animals are there to be eaten, they think, and in any case they are humanely slaughtered, end of discussion.

    If you don’t think this is relevant, then google “rolling coal”…

    Conservatives who show their annoyance with liberals, Obama, and the EPA by blowing black smoke from their trucks.
    For as little as $500, anyone with a diesel truck and a dream can install a smoke stack and the equipment that lets a driver “trick the engine” into needing more fuel. The result is a burst of black smoke that doubles as a political or cultural statement—a protest against the EPA, a ritual shaming of hybrid “rice burners,” and a stellar source of truck memes.
    (from Slate)

    The article also mentions a campaign to deliberately eat doughnuts, as a protest against anti-calorie laws. I once saw a forum post from a Dutch nationalist who ate meat on the anniversary of Pim Fortuyn’s assassination, specifically out of spite for his vegan assassin.

    This can’t be dismissed as the work of a few crazy individuals. Opposition to all forms of political correctness, and its equivalent ‘environmental correctness’, is a significant political factor, and not only in the United States. Anti-leftism in a broad sense is common among European populist movements.

    In the United States this type of issue is referred to as ‘culture wars’, but ’value wars’ would be better. Although western political elites prefer to think in terms of a homogenous citizenry, the reality is that populations are fragmented, with widely differing value orientations. That has consequences for the issues that Theron Pummer uses as examples. In the real world, it will not be possible to address such issues, in such an abstract and mathematical way.

    I think that in general modern societies are as their members intend them to be, harm included. That’s quite difficult for elites to accept, because it implies that the population is generally malicious. The transatlantic air passengers are deliberately increasing carbon emissions, because they think that is a good thing. Meat-eaters are deliberately harming farm animals, because they want them to be used for human convenience. So it is not a case of everyone being agreed that something is harmful, and then looking for strategies to diminish the harm. We are not agreed, that’s the point. Values differ, and in most cases political divisions underscore that.

  4. Paul said, “Values differ, and in most cases political divisions underscore that.”

    If many individual value sets differ, hence producing differing outlooks, would that not facilitate politics and present perceived opportunities to manipulate values as a means of creating change? If so, at what point do individual values (recognised or not), expressed within interaction with others, become important enough to affect the issues in question? And is a brutally open and honest approach better, or an open yet vague one? For example, interactions on the web, as well as cultural conflicts, frequently illustrate this issue where unknown, unperceived or unrecognized value conflicts exist. Does vagueness become an advantage or a disadvantage in those circumstances?

    1. These questions are well beyond the topic of the original post. To keep it on-topic: after reading about the ‘self-torturer’ case, I understand that there are specific ‘technical’ issues for ethicists here. However, by introducing real-world cases such as carbon emissions and factory farming, Theron Pummer implies there may be real-world applications. He does not go into specifics, but talk of incremental consumer choices is often associated with ‘nudge’ policies. These are popular with the ‘modernising’ right, because they seem to offer a way of avoiding radical measures.

      1. Thanks Paul. I didn’t have nudge policies in mind, but I can see why one might think to mention them in connection with cases like the ones I mentioned, i.e. cases where many small and seemingly insignificant contributions can ‘add up’ to something huge. I can also now see why you brought up consensus in your original comment, as this seems relevant to the acceptability of imposing nudge policies to avoid collective harms. Lack of consensus about whether factory farming and climate change are bad is, however, less relevant to the main point of my post, which is how to morally assess ‘small’ individual actions which (seemingly) make no negative difference at all in cases where many such actions do result in harm. Though not all agree, very many people do believe that factory farming and climate change are bad, and so the puzzle explored in this post will be of interest to them. But the examples of factory farming and climate change are useful even for people who (in my view mistakenly) don’t believe they’re ones involving collective harms – this is because they are familiar, they have been used in a lot of the literature on the philosophical topic of collective harms and ‘making a difference’ (see the exchange between Kagan and Nefsky for instance), and their structure is such that it’s not hard to see analogies between these examples and a wide range of others.
