Enhanced Consequentialism: Up, Up… and Away?
Poor Superman, trapped in a spiral of consequentialist logic! If one really is as powerful as Superman, then it’s no use pleading for a bit of “me time” on the grounds that one’s individual decisions don’t make that much of a difference. For Superman, it really is true that “every second of quibbling is another dead baby.” Even if we let Superman assign a little more value to his own interests and projects (such as fighting criminals) than to those of everyone else, his preferences still completely disappear in the consequentialist calculus. He might find a life of turbine-operation incredibly miserable, but the loss of good to others if he stops is just astronomically large.
Fine, you’ll say: consequentialism makes outrageous demands of comic book characters. So what? Well, I’m about to argue, the rest of us may soon become much more like Superman in this regard – and if you’re a consequentialist, you don’t get a (moral) choice in the matter.
Start from an objection sometimes raised against consequentialist philosophers (ethicists who say that the right thing to do is whatever produces the best consequences): Hey, if you’re so dedicated to doing whatever makes the world best, how about you quit doing moral philosophy, go start a hedge fund, and give the profits to reputable charities? Surely, says the objection, the good done by the money you’d earn in such a venture far outweighs whatever good you might be doing propounding consequentialist moral theory.
This isn’t a very good objection, because it makes a really questionable empirical assumption. As brilliant as many moral philosophers are, that sort of intellectual power isn’t necessarily the same sort one needs to be a very successful hedge fund manager. A consequentialist moral philosopher might quite reasonably say: I’ve no reason to expect I’d actually produce much good in finance, but I have a reasonable expectation of producing at least some good in my present work, so the best choice is to continue.
Right. But now suppose we offer the consequentialist moral philosopher a fantastic new medical breakthrough. It’s a pill – called a Cognitive Enhancement pill – which drastically heightens one’s facility with numbers and intuitive sense of probabilities. In short (let’s suppose) this pill would make one a much better candidate to be a really successful hedge fund manager. The more plausible this empirical assumption about the effects of the pill, the more plausible the conclusion that the consequentialist philosopher is morally required to take the pill – and then leave philosophy for the City.
This conclusion generalizes wickedly. Suppose we develop another form of enhancement that makes the user much stronger and provides far greater endurance. If you’ve had this treatment and are now much stronger and hardier than ordinary folk, don’t you have a particular obligation to, say, spend your life in disaster areas, efficiently unloading aid pallets from cargo ships? And if you’re a consequentialist and you know that receiving such a treatment would leave you so much better equipped to do great good in the world, how could you justify not undertaking it? And so on – for each new potential enhancement, the consequentialist will always be confronted with the fact that, if she only took this pill (wore this brain-stimulating device, underwent this gene therapy…), the world could become a much better place.
As with Superman, the better you are at producing good in the world, the less permissible it is for you to spend your time doing things other than producing good in the world. The enhanced consequentialist must devote more and more of her life to such projects. She may have to cut ties with her family and move to where good most needs doing. She may be forced to accept that the forms of doing-good she finds rewarding simply have no place when other things are demonstrably more consequential. She may have to abandon other things she cares about (her unenhanced painting abilities, for instance) because the hours she wastes on these could have been spent producing a lot of good in the world. At the extreme, she may need to do nothing but whatever drudgery will produce massive good.
Some people want to be superheroes, whatever the cost. With great power comes great responsibility, etc. Fair enough. But the consequentialist doesn’t get a choice. If enhancement technology makes a personally-unrewarding but general-good-exploding existence possible, the consequentialist may not morally opt out. Consequentialism, then, is a moral theory that may rather soon require its adherents to almost entirely sacrifice their individual preferences and aims in the pursuit of enhanced utility-maximization. Is this an argument against consequentialism? It’s hard for me to say, since I was never a consequentialist to begin with. (Or couldn’t you tell?)
In the end, the problem may cure itself: if enhancement spreads far enough, people who are presently very poor might be brought to a much more humane standard of living, and the demands on the best-off will accordingly lessen. But it will be some time (if ever) before enhancement is so widely available. Meanwhile, only those who are fairly well-off will be able to afford it, and so the moral burden will fall squarely, and heavily, on their super shoulders. Will that distant enhanced utopia look back upon them with gratitude for the great sacrifices they had to make? Or will they, like Superman, end up expended and unsung – a transitional utility source?