There appears to be a lot of disagreement in moral philosophy. But whether or not these many apparent disagreements are deep and irresolvable, I believe there is at least one thing it is reasonable to agree on right now, whatever general moral view we adopt: that it is very important to reduce the risk that all intelligent beings on this planet are eliminated by an enormous catastrophe, such as a nuclear war. How we might in fact try to reduce such existential risks is discussed elsewhere. My claim here is only that we – whether we’re consequentialists, deontologists, or virtue ethicists – should all agree that we should try to save the world.
According to consequentialism, we should maximize the good, where this is taken to be the goodness, from an impartial perspective, of outcomes. Clearly one thing that makes an outcome good is that the people in it are doing well. There is little disagreement here. If the happiness or well-being of possible future people is just as important as that of people who already exist, and if they would have good lives, it is not hard to see how reducing existential risk is easily the most important thing in the whole world. This is for the familiar reason that there are so many people who could exist in the future – there are trillions upon trillions… upon trillions.
There are so many possible future people that reducing existential risk is arguably the most important thing in the world, even if the well-being of these possible people were given only 0.001% as much weight as that of existing people. Even on a wholly person-affecting view – according to which there’s nothing (apart from effects on existing people) to be said in favor of creating happy people – the case for reducing existential risk is very strong. As noted in this seminal paper, this case is strengthened by the fact that there’s a good chance that many existing people will, with the aid of life-extension technology, live very long and very high quality lives.
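To make the structure of this weighting argument concrete, here is a minimal back-of-the-envelope sketch; the specific figures (roughly 10^10 existing people, and 10^24 possible future people, in the spirit of the “trillions upon trillions” above) are illustrative assumptions chosen only for the arithmetic, not estimates defended here or in the linked paper:

\[
\underbrace{10^{24}}_{\text{possible future people}} \times \underbrace{0.00001}_{0.001\%\ \text{weight}} \;=\; 10^{19} \;\gg\; \underbrace{10^{10}}_{\text{existing people}} \times 1
\]

Even under that heavy discount, the weighted stakes for possible future people dwarf those for everyone currently alive, which is the sense in which the importance claim survives the discounting.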
You might think what I have just argued applies to consequentialists only. There is a tendency to assume that, if an argument appeals to consequentialist considerations (the goodness of outcomes), it is irrelevant to non-consequentialists. But that is a huge mistake. Non-consequentialism is the view that there’s more that determines rightness than the goodness of consequences or outcomes; it is not the view that the latter don’t matter. Even John Rawls wrote, “All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy.” Minimally plausible versions of deontology and virtue ethics must be concerned in part with promoting the good, from an impartial point of view. They’d thus imply very strong reasons to reduce existential risk, at least when this doesn’t significantly involve doing harm to others or damaging one’s character.
What’s even more surprising, perhaps, is that even if our own good (or that of those near and dear to us) has much greater weight than goodness from the impartial “point of view of the universe,” indeed even if the latter is entirely morally irrelevant, we may nonetheless have very strong reasons to reduce existential risk. Even egoism, the view that each agent should maximize her own good, might imply strong reasons to reduce existential risk. It will depend, among other things, on what one’s own good consists in. If well-being consisted in pleasure only, it would be somewhat harder to argue that egoism implies strong reasons to reduce existential risk – perhaps we could argue that one would maximize her expected hedonic well-being by funding life-extension technology or by having herself cryogenically frozen at the time of her bodily death, as well as by giving money to reduce existential risk (so that there is a world for her to live in!). I am not sure, however, how strong the reasons to do this would be. But views which imply that, if I don’t care about other people, I have little or no reason to help them are not even minimally plausible views (in addition to hedonistic egoism, I here have in mind views that imply that one has no reason to perform an act unless one actually desires to do that act).
To be minimally plausible, egoism will need to be paired with a more sophisticated account of well-being. To see this, it is enough to consider, as Plato did, the possibility of a ring of invisibility – suppose that, while wearing it, Ayn could derive some pleasure by helping the poor, but instead could derive just a bit more by severely harming them. Hedonistic egoism would absurdly imply she should do the latter. To avoid this implication, egoists would need to build something like the meaningfulness of a life into well-being, in some robust way, where this would to a significant extent be a function of other-regarding concerns (see chapter 12 of this classic intro to ethics). But once these elements are included, we can (roughly, as above) argue that this sort of egoism will imply strong reasons to reduce existential risk. Add to all of this Samuel Scheffler’s recent intriguing arguments (quick podcast version available here) that most of what makes our lives go well would be undermined if there were no future generations of intelligent persons. On his view, my life would contain vastly less well-being if (say) a year after my death the world came to an end. So obviously if Scheffler were right I’d have very strong reason to reduce existential risk.
We should also take into account moral uncertainty. What is it reasonable for one to do, when one is uncertain not (only) about the empirical facts, but also about the moral facts? I’ve just argued that there’s agreement among minimally plausible ethical views that we have strong reason to reduce existential risk – not only consequentialists, but also deontologists, virtue ethicists, and sophisticated egoists should agree. But even those (hedonistic egoists) who disagree should have a significant level of confidence that they are mistaken, and that one of the above views is correct. Even if they were 90% sure that their view is the correct one (and 10% sure that one of these other ones is correct), they would have pretty strong reason, from the standpoint of moral uncertainty, to reduce existential risk. Perhaps most disturbingly still, even if we are only 1% sure that the well-being of possible future people matters, it is at least arguable that, from the standpoint of moral uncertainty, reducing existential risk is the most important thing in the world. Again, this is largely for the reason that there are so many people who could exist in the future – there are trillions upon trillions… upon trillions. (For more on this and other related issues, see this excellent dissertation).
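As a similarly rough sketch of this moral-uncertainty point (and setting aside the hard question of how to compare value across moral theories), suppose, purely for illustration, that if the well-being of possible future people matters, an existential catastrophe puts on the order of 10^24 lives at stake, whereas if only existing people matter it puts on the order of 10^10 lives at stake; these figures are my assumptions for the arithmetic, not claims from the post or the linked dissertation. Then even a 1% credence in the first view gives:

\[
\mathbb{E}[\text{lives at stake}] \;\approx\; 0.01 \times 10^{24} \;+\; 0.99 \times 10^{10} \;\approx\; 10^{22},
\]

so the expected stakes remain dominated by the small chance that the future-regarding view is correct.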
Of course, it is uncertain whether these untold trillions would, in general, have good lives. It’s possible they’ll be miserable. It is enough for my claim that there is moral agreement in the relevant sense if, at least given certain empirical claims about what future lives would most likely be like, all minimally plausible moral views would converge on the conclusion that we should try to save the world. While there are some non-crazy views that place significantly greater moral weight on avoiding suffering than on promoting happiness, they nonetheless seem fairly implausible, for reasons others have offered (and for independent reasons I won’t get into here unless requested to). And even if things did not go well for our ancestors, I am optimistic that they will overall go fantastically well for our descendants, if we allow them to. I suspect that most of us alive today – at least those of us not suffering from extreme illness or poverty – have lives that are well worth living, and that things will continue to improve. Derek Parfit, whose work has emphasized both future generations and agreement in ethics, described our situation clearly and accurately:
“We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy…. Our descendants might, I believe, make the further future very good. But that good future may also depend in part on us. If our selfish recklessness ends human history, we would be acting very wrongly.” (From chapter 36 of On What Matters)
***
I’m grateful to various friends – especially Roger Crisp and Nick Beckstead – for helpful discussions.
Maybe humans are such wicked creatures that it’s better, from a certain utilitarian viewpoint, that we fail quickly. This might result in a curtailed but stable civilization, rather than our failing later in an even worse way that destroys everything.
I don’t really care enough about the future to volunteer my head, but just saying.
What if the suffering of other creatures was abated by the death of humanity? Is there more than just the well-being of humans at stake here?
I’d argue that this position is just as valid.
Can we agree on what is meant by “the world”, though?
Good post.
Just wondering if you think the importance of saving the world undercuts the importance of saving lives. E.g., wouldn’t my money do more good invested in science programs that can reduce existential risk than given to charities which reduce suffering? Should I stop giving to the Against Malaria Foundation and start making donations to NASA?
I do not agree that ‘it is not hard to see how reducing existential risk is easily the most important thing in the whole world.’ It is reasonably important, but it should take second place to the immediate needs of the living. Much of what should be done to further this aim would of course also mitigate some of the existential risks, as it should help to avoid global wars, reduce the risk of pandemic, reduce carbon fuel dependency and control population growth. I would like to see considerably more emphasis placed upon the latter, as it would solve numerous problems. Obviously we cannot all stop having children, and those that do have them should consider what they should do to assist the next generation. However, they should not go further and attempt to ‘plan’ a risk-free or any other future for nonexistent people. If far-off future people blow themselves to atoms or are obliterated by a bogeyman, so be it. It is completely irrational for us to seriously speculate and worry about such events. It is perhaps more than just irrational to speculate, as Nick Bostrom does in the ‘seminal paper’ you link, that ‘the potential for over ten trillion potential human beings is lost for every second of postponement of colonization of our supercluster.’
LOL
So… if you think that future needs are really important, then the best way of insuring against unforeseen circumstances is probably strong economic growth coupled with low population growth.
It always amazes me that do-gooders* of the effective altruism/cosmopolitan activist sort aren’t championing free trade agreements as the best way of improving the living standards/life prospects of entire continents. Economically, it’s a no-brainer – sure, the details matter; but in terms of overall welfare benefits, those details are usually second order. To first order, nothing improves lives like open markets.
*I don’t mean that pejoratively. I just mean people who bang on about doing good.
My impression is that EA people are generally pro-free trade insofar as they have opinions on the topic. The question is more, “What can be done about it?” I note that EA people are unusually interested in open borders, which is arguably the most important and least open market (the international labor market) in the world.
EA person wrote: “…insofar as they have opinions on the topic.”
Well, if their goal is to improve human welfare, and if they think efficient delivery of benefits matters, they ought to have opinions about trade, and strong ones. Trade is the reason for the massive improvement in human welfare in Asia over the last 40 or so years. The effects of trade far outweigh the effects of philanthropy.
If EA people were serious, they would realise that opposing rich-country agricultural protectionism (especially the EU Common Agricultural Policy) is probably the single most effective way to help people in poor countries. I appreciate that policy is something that EA folks have fought shy of discussing – it’s not something that most have competence in, and this particular set of policies is generally regarded as uncool by your fellow hipsters. It’s safer, and tamer, and much less effective, to pretend that aid matters more than trade.
I’m involved with effective altruism and agree that trade is important.
However, that does not mean that opposing such policies (e.g. the EU’s CAP) is the most useful or effective way to spend my time and money. I do object to the CAP, but as ‘EA person’ asks: what can be done about it? There already seems to be plenty of opposition to it among politicians, but there are major political challenges associated with removing it or even changing it. I do know that some EA people advocate going into politics in order to increase their chances of making a difference.
Effective altruism isn’t just about finding the most valuable outcome; it is also about taking into account the likelihood of success. Or rather, it’s about finding the biggest *marginal* impact. If I donate to the Against Malaria Foundation, I will almost certainly help prevent some cases of malaria, and probably save some lives. If I campaign against the CAP… will I actually achieve anything?
Matt – thanks for the reply. It’s an interesting line to take. It’s true that an individual’s support for a policy is unlikely to make a difference. But one could use the same argument for inaction on any number of policy positions – your support for any number of good causes, from human rights issues to (especially) collective action problems, is just as likely to be indecisive – so do you consistently ignore political issues on this basis and leave them to others? (Isn’t this a form of free-riding?)
Reducing existential risks and not wrecking the world or causing human extinction is something it is hard for human beings to disagree with. But I’m slightly worried about the view that this is ‘the *most* important thing in the world’, since you would need to compare what it costs to reduce risk/improve (maybe) the lives of future people with what maximizes the lives of existing people – and it needn’t be a matter of avoiding something obviously catastrophic such as mass war; it could be small daily-life acts such as eating more ethically (whatever that means) but taking less pleasure (maybe much less pleasure) in it. In cases where reducing future risks and maximizing the flourishing of present life are in conflict, it seems uncertain to me that reducing risks is of primary importance.
PS: By the way, I feel this kind of discourse is overall very anthropocentric, and just a particular case of cosmological pleading – but I also realise that it’s natural for human beings to care about their own species’ survival more than anything else…
I have no children and I have no desire to have children. I feel even less desire to ensure that other people’s great-great-grandchildren come into existence.
When I look at what people actually do and care about, I see a lot of vanity + considerable violence and pain.
When I imagine a future where everybody just stops breeding, and no future generation has to feel pain and die, I feel that is more attractive than a bigger future with all the inevitable warfare and torture.
You would have to do better than link to Steven Pinker to convince me that the future is worth fighting for.
Aunt Hill wrote: “When I look at what people actually do and care about, I see a lot of vanity + considerable violence and pain.”
If you care to look, you’ll also see a lot of love, kindness and sacrifice. But obviously you’re free to add it up however you like. Still, I think the idea you propose – just running down the stock of human beings through non-replacement – is probably the most coherent strategy that a full-blown political scepticism could articulate. It would still require one (massive) collective action problem to be solved. And most political sceptics ought to be reluctant – on principle – to sanction enforcement of that behavioural imperative.
Hi Theron, thank you for this post. I don’t really understand what you mean by “save the world” in your conclusion that we should all agree that we should try to save the world.
First, I don’t see what existential risk you have in mind, or under which conditions the actual world would qualify as being saved relative to that risk. Is it OK for humanity to gradually cease to exist after evolving into a species distinct from Homo sapiens sapiens? Is it OK for humanity to abruptly cease to exist if that were so as to avoid agonizing pain caused by a gene now constitutive of our DNA?
Second, given the way you present your argument, I don’t see how you can answer the question of what it is to save the world in a way that makes your conclusion that we should save the world both non-trivial and unobjectionable:
– If what it takes to save the world is to ensure that, at any time, the level of well-being of every individual (identified in some way) is above a certain value, then your conclusion seems rather trivial for any version of consequentialism on which well-being is the main morally relevant feature of the world – or is the feature that is the source of all morally relevant features of the world. But it is objectionable for any view that does not buy into this meta-ethical picture of the world.
– If what it takes to save the world is to ensure that, at any time, the morally relevant interests (of bearers identified in some way) are satisfied, then your conclusion seems to be a description, in meta-ethical terms, of what the aim of any normative theory is — hence of what makes a normative theory normative in the first place.
In either case, it seems that the conclusion is either a trivial consequence of a particular, thus questionable view of morality, or that it is simply a restatement of what it is for a theory to be a normative theory.
I don’t see how you can avoid this problem unless you go into much more detail about the existential risks that are supposed to threaten mankind, and about how we could avoid or prevent them.
“If the happiness or well-being of possible future people is just as important as that of people who already exist, and if they would have good lives, it is not hard to see how reducing existential risk is easily the most important thing in the whole world.”
Am I the only one to see a striking similarity between this argument and one of the classic theological arguments against contraception?
I.e., the refusal to maximise fertilisation of sperm and oocytes is depriving the world of millions of future happy people. (Of course not all will be happy, but even if 0.00027% of the trillions and trillions have happy lives, all consequentialists must agree that the world will, in total, be happier… and that our moral duty is clearly therefore to maximise procreation.)
But I suppose that with a title like “Saving the World”, I should not be surprised by the content.
Only if all the other lives are just neutral but not negative.
Yes indeed, these silly figures of ‘saving’ trillions of people a second sound like some kind of religious nonsense. (I wonder whether they are calculating how many of them can fit on the head of a pin?) I am reminded of how the “science” of eugenics was promoted as a new religion by Galton and his followers to convince the masses of its veracity.
Dear all,
Thank you so much, and I apologize for the delay in responding (I was traveling). Here are a few quick remarks which pick up on common themes found in your helpful and interesting comments on my post:
I am not concerned with humanity per se, but with sentient life – and intelligent life in particular (at least for instrumental reasons, but possibly also because it matters more in some deeper sense). So I do not believe I am guilty of any kind of *species* chauvinism, though I am happy to confess that I am what you might call a “sentientist” – someone who thinks that only sentient beings are capable of well-being in a morally relevant sense.
The kind of catastrophe I had in mind was one that would, within a short period of time, wipe out all sentient life on Earth – this could be the result of a sufficiently crazy nuclear war or a large enough asteroid colliding with our planet.
I agree that it is a difficult empirical question whether most lives are worth living, and whether they likely will be in the distant future. Indeed, citing Pinker is insufficient evidence on this score. And things are substantially more complicated if we take into account the well-being of (wild) animals. My hunch is that the chance that things will go extraordinarily well in the distant future is greater than the chance that things will go proportionately badly, but I don’t have the best evidence for this. If others have relevant evidence either way on this score, I would love to see it. Still, as I said in the post, there’d be moral agreement in the sense I’m concerned with here if, given certain empirical details, all minimally plausible moral views would converge.
Some of the responses seemed to suggest that I am claiming both that the well-being of possible future persons matters (equally to that of presently existing persons), and that reducing existential risks is the *most* important thing to focus on. I should stress that, while I believe both of these claims are very much defensible (and on non-theological grounds!), it was not my intention to defend them in the post. Instead, the claim I sought to put forward was that (given certain empirical assumptions) all minimally plausible moral views – including those which accord *no* moral significance to the well-being of possible future persons – converge on the claim that reducing existential risks is “merely” *very* important. Exactly *how* important it is, relative to other things we could be doing, is something I left open. I agree that this claim is a bit vague, but it still seems to me worth saying, and it may be surprising to those who hadn’t yet considered the points I raised in my post.