
Moral Offsetting

A recent blogpost on 3 Quarks Daily satirized the idea of ‘moral offsetting’, which would work much like carbon offsetting. With carbon offsetting, you purchase carbon credits to offset your emissions – for instance, you might give money to a private company that plants trees to offset your transatlantic flights. Moral offsetting works in a similar way: whenever you indulge in behavior of dubious morality (say eating meat, or buying clothes made in a sweatshop), you would offset the transgression. The simplest way to offset would be a donation to a charity.

Of course, both what counts as a transgression and what kind and amount of donation would offset it would be hard to calculate. Whereas carbon offsetting aims to reduce a single, readily measurable output, moral offsetting is much more complicated. If you’re a consequentialist, you may think that there is a single output to be measured and maximized (welfare, say); even so, converting the expected consequences of actions and omissions into a single measure would be extremely difficult. For those consequentialists who recognize multiple goods, things will be harder still. Of course, not everyone is a consequentialist, but since virtue theorists and deontologists recognize that consequences do matter, moral offsetting may have something to offer them too.

Carbon offsetting is controversial, with some people regarding it as a license to perform morally wrong actions (it has been compared to the pre-Reformation practice of selling indulgences). Whatever the merits of carbon offsetting, I think moral offsetting has a lot going for it. In particular, it might help combat the moral self-licensing we already engage in.

Moral self-licensing is the phenomenon whereby agents make morally dubious choices after the boost to their self-image that follows a morally good choice (for instance, purchasing less environmentally friendly products after buying free-range eggs). Making the offsetting automatic ensures that any such licensing is compensated for, and that the amount of indulgence allowed never exceeds the offset. In fact, the offsetting can be calibrated so that any licensing – or any other morally dubious behavior – is more than compensated for. With such a calibration, the more someone engages in morally dubious behavior, the more overall good they do, since each time the offset more than compensates for the wrong.

How would the practical problems of measuring the negative impact of morally dubious actions and the positive impact of donations or other offsets be solved? There are two ways to approach the problem (both of which are envisaged in the 3QD post). One is to allow individuals to preset their own offsets. Someone might produce a list of the kinds of things they regard as moral transgressions that they are nevertheless likely to indulge in, together with offsets that they regard as appropriate. They would probably need guidance in this: if I were doing it, I would benefit from expert information about how much animal suffering is involved in producing dairy products, caged eggs, and so on; about how much environmental damage is caused by mining rare earth minerals for smartphones; and about the impact of potential offsets. An alternative method would be to turn the entire procedure over to experts (perhaps one of a range of experts: a utilitarian moral philosopher, a political theorist – or a range of political theorists, one for each mainstream political view – a priest, or a range of priests, and so on). We could then choose from a range of packages: 1. offset your wrongdoing; 2. offset your wrongdoing plus 5%; 3. offset your wrongdoing plus 10%; and so on (of course, these figures could only be approximate: no one would think we could calculate the value of morally dubious actions and offsets precisely).
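To make the package arithmetic concrete, here is a minimal sketch in Python. The transgression list, the per-item dollar figures, and the function name are all hypothetical placeholders – not real estimates of moral harm or offset effectiveness – and the scheme is only one way such packages could be implemented.

```python
# A minimal sketch of the "offsetting package" arithmetic described above.
# All figures below are placeholder illustrations, not real estimates.

# Donation (in dollars) judged sufficient to offset one instance of each
# transgression -- hypothetical values only.
OFFSET_ESTIMATES = {
    "meat_meal": 2.00,
    "sweatshop_garment": 5.00,
    "transatlantic_flight": 40.00,
}

# Packages: 1 = offset the wrongdoing; 2 = offset plus 5%; 3 = offset plus 10%.
PACKAGE_MULTIPLIERS = {1: 1.00, 2: 1.05, 3: 1.10}


def monthly_offset(transgression_counts: dict[str, int], package: int) -> float:
    """Return the total donation owed for a month's logged transgressions."""
    base = sum(OFFSET_ESTIMATES[name] * count
               for name, count in transgression_counts.items())
    return round(base * PACKAGE_MULTIPLIERS[package], 2)


# Example: two meat meals and one sweatshop garment on the "plus 10%" package.
print(monthly_offset({"meat_meal": 2, "sweatshop_garment": 1}, package=3))  # 9.9
```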

What objections would such a scheme face? Virtue ethicists might object that a mechanical offset doesn’t allow the person to cultivate the virtues. If you’re worried by that, you might leave a large sphere of action outside the offsetting regime. Some kinds of action don’t seem to be the kind of thing that should be offset at all, either because one wouldn’t want to give oneself any kind of permission to engage in them (assault, say) or because they are too personal a matter (say helping your aged mum). The latter, in particular, seems an appropriate sphere in which to cultivate the virtues. I suppose that a virtue ethicist might think that it is better not to do something wrong in the first place than to perform a wrongful action and overcompensate for it mechanically. Such a virtue ethicist might still have a use for moral offsetting: they might limit its application to actions that are morally permissible though also morally costly (say buying gifts for one’s relatives; these actions may be costly inasmuch as they unnecessarily use resources or expend money that could have been donated instead). Deontologists can use similar strategies, excluding certain actions entirely from those that can be offset (indeed, consequentialists might follow a similar path, for consequentialist reasons: there may be some actions that we don’t want to encourage in ourselves, even if they are compensated for).

So moral offsetting has its limitations. It can’t replace ethical decision-making by individuals. Still, it may be a powerful way of ensuring that our negative impact on the world and on each other is minimized.


1 Comment on this post

  1. But what is a poor baronet to do, when a whole picture gallery of ancestors step down from their frames and threaten him with an excruciating death if he hesitate to commit his daily crime? But ha! ha! I am even with them! (mysteriously) I get my crime over the first thing in the morning, and then, ha! ha! for the rest of the day I do good! I do good! I do good! (melodramatically) Two days since, I stole a child and built an orphan asylum. Yesterday I robbed a bank and endowed a bishopric. To-day I carry off Rose Maybud and atone with a cathedral!

    W.S. Gilbert, Ruddigore
