The goodness of being multi-planetary
The Economist has a leader “For life, not for an afterlife”, in which it argues that Elon Musk’s stated motivation to settle Mars – making humanity a multi-planetary species less likely to go extinct – is misguided: “Seeking to make Earth expendable is not a good reason to settle other planets”. Is it misguided, or is the Economist’s reasoning misguided?
The article, after cheering on Musk for being visionary, says:
How odd, then, that Mr Musk’s motivation is born in part of a fear as misplaced as it is striking. He portrays a Mars colony as a hedge against Earth-bound extinction. Science-fiction fans have long been familiar with this sort of angst about existential risks—in the 1950s Arthur C. Clarke told them that, confined to Earth “humanity had too many eggs in one rather fragile basket.” Others agree. Stephen Hawking, a noted physicist, is one of those given to such fits of the collywobbles. If humans stick to a single planet, he warns, they will be sitting ducks for a supervirus, a malevolent artificial intelligence or a nuclear war that could finish off the whole lot of them at any time.
Claptrap. It is true that, in the long run, Earth will become uninhabitable. But that long run is about a billion years. To concern oneself with such eventualities is to take an aversion to short-termism beyond the salutary. (For comparison, a billion years ago the most complex creature on the planet was a very simple seaweed.) Yes, a natural or maliciously designed pandemic might kill billions. So might a nuclear war; at a pinch climate change might wreak similar havoc. But extinction is more than just unprecedented mass mortality; it requires getting rid of everyone. Neither diseases nor wars do that.
Is worry about existential risk misplaced?
Musk is not concerned about the biosphere’s survival, but humanity’s. (See the appendix for some more thinking about the value of the biosphere and long-term thinking.)
The natural risks for our species – supernovas, asteroids, supervolcanoes, natural pandemics, climate crashes – are likely small: the average mammalian species survives for about a million years, and humanity, being widely dispersed, numerous and adaptable, plausibly faces a smaller risk than this average. So we should not think the natural risk is much higher than one in a million per year (which still corresponds to an expected 7,200 deaths per year, more than the annual death toll from terrorism).
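As a sanity check on that figure (a back-of-the-envelope calculation; the 7,200 number implies a world population of about 7.2 billion):

$$10^{-6}\,\mathrm{yr}^{-1} \times 7.2\times 10^{9}\ \text{people} \approx 7{,}200\ \text{expected deaths per year}.$$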
Anthropogenic risk is another matter: because of human activity we are facing threats from nuclear war, climate change, engineered pandemics, and plausibly some future technologies. The probability of these threats occurring is far above one in a million per year. Even if they were not extinction-level, they could destroy current civilization and leave survivors far more vulnerable to subsequent natural or anthropogenic risks. Worse, anthropogenic risks can be directed and intentional: natural risks are not designed to do maximal damage or circumvent mitigation, but humans fairly regularly develop plans and tools to achieve such ends.
It is also worth considering that we introduce new risks at a far higher rate than nature does. A century ago nuclear weapons, CFCs, designer pathogens, gene drives, and methods for nudging asteroid orbits did not exist. We should not expect our current list of worries to be the final one: in coming centuries more risks will likely be introduced as we become more technologically capable.
The main reason to be concerned about existential risk is not its probability but the immense disvalue of extinction. The death of present humanity would be worse than any past disaster in terms of death toll. But extinction also precludes all future generations and the value they could have achieved, annuls all our preferences and projects for the future, and potentially removes all conscious observers from the universe. This matters on both consequentialist and non-consequentialist ethical outlooks, and it seems a robust ethical conclusion that reducing existential risk has extremely high priority.
(There are limits to how much we should sacrifice to reduce the risk: if caution reduces our quality of life so much that continued existence holds less value than what we gained in extended survival expectancy, we have gone too far. But it could still be rationally and ethically right to undertake fairly demanding risk mitigation, and we are certainly nowhere near these extreme levels today.)
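One way to make the “gone too far” condition precise (a toy formalization, not from the original argument): let mitigation effort x raise the survival probability p(x) but lower the value v(x) of the future we survive into. More caution is worthwhile only while the expected value p(x)v(x) still increases, i.e.

$$\frac{d}{dx}\bigl[p(x)\,v(x)\bigr] > 0 \quad\Longleftrightarrow\quad \frac{p'(x)}{p(x)} > -\frac{v'(x)}{v(x)},$$

that is, while the proportional gain in survival probability exceeds the proportional loss in quality.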
Spending a few billion dollars to reduce existential risk hence seems realistic and rational. One can still argue about whether Mars settlement is the best choice to reduce the risk.
Space as a refuge: does it make sense?
Finding a proper motivation for space settlement has not been easy. While it would be good to have space power satellites, space industry or off-planet refuges, that does not mean they are better than the alternatives. Musk has, for example, argued against space solar power in favor of terrestrial solar power on economic grounds.
We could build refuges on Earth, which would protect us from some threats. How effective they would be has been debated. But their cost would be small compared to settling Mars, which is both remote and inhospitable. On the other hand, Mars would have the natural advantages of isolation: spacecraft are natural quarantines, communication delays limit the transmission of dangerous information, and weapons would have to be interplanetary.
Settling space may be more important than settling Mars specifically: a Moon habitat would have nearly the same advantages, and asteroid mining and settlement would add the further advantages of dispersion and mobility. Mars just happens to be rich in some convenient resources.
The cause prioritization heuristic of looking for importance, tractability and neglectedness is useful here (one common formalization is sketched below). Space refuges have the important properties mentioned above, but they represent an improvement in endurance/resilience rather than in risk avoidance or hazard management, and improvements in those two factors could trump resilience. However, such improvements are typically tied to specific risks (nuclear war, bioweapons, AI, whatever), while a refuge can be useful even against unconsidered risks. Until now space refuges have been largely neglected because they were intractable. Musk’s efforts are essentially aimed at making the problem tractable. That meta-problem also looked fairly intractable until recently, but this might now be changing.
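For reference, one common way of making the heuristic precise (this factorization comes from the effective altruism literature, not from the article itself) is as a product of three ratios that telescope into “good done per extra unit of resources”:

$$\frac{\text{good done}}{\text{extra resources}}=\underbrace{\frac{\text{good done}}{\%\ \text{solved}}}_{\text{importance}}\times\underbrace{\frac{\%\ \text{solved}}{\%\ \text{more resources}}}_{\text{tractability}}\times\underbrace{\frac{\%\ \text{more resources}}{\text{extra resources}}}_{\text{neglectedness}}$$

On this reading, Musk’s contribution is mainly to the middle factor: making the problem tractable at all.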
In summary, space refuges are not crazy, but it is somewhat unclear whether they offer the most bang for the buck. Making them more tractable might eventually make them a rational choice: it is early days.
The moral claim
The article continues:
If worrying about imminent extinction is unrealistic, trying to hide from it is ignoble. At the margins, it is better that the best and brightest share Earth’s risks than have a way to run away from them. Dream of Mars, by all means, but do so in a spirit of hope for new life, not fear of death.
It is not a given that it is ignoble to avoid a possible disaster by going somewhere else. Are villagers in Japan who leave the seaside to settle above the tsunami stones ignoble? Surely not, even if their former neighbors might find them too cautious and their move an annoyance to village unity.
It is ignoble only to abandon others, and this seems to be the article’s reading of the Mars plans. That reading certainly fits various science fiction tropes, but it is not plausible given the actual plans. Getting to and settling Mars are obviously risky, and it will be a long time before you can take business class to Mars and check in at the Olympus Hilton. You will also be out on a frontier, cut off from mainstream culture, the Internet and the stock market by around 15 minutes and 300 million km.
It will not be the best and brightest who go to Mars, but the most risk-taking and driven. Surely they will be competent (or the Mars colony will fail), but terrestrial colonialism did not drain Europe of competence until the colonies had become developed nations in their own right.
But let us assume Mars turns into a second Silicon Valley or Monaco, full of the best and brightest (say, escaping stifling regulations on Earth to live in a libertarian transhumanist utopia). Is it bad that these Martians are not sharing the earthlings’ risks?
Threats that destroy only one world would no longer be existential threats. Even after a civilization-ending disaster there would now be somebody to send relief. Assuming the risk per year stays the same, the total risk to humanity is at least halved.
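A toy model of this arithmetic (the numbers are hypothetical illustrations, not estimates from the article, and it assumes the per-world risks are independent):

```python
# Toy model: how a second, independent world changes extinction risk.
# All numbers are hypothetical; p is an assumed annual probability that
# a one-world threat destroys a given world.

p = 1e-3
years = 1_000

# One world: extinction happens if that single world is destroyed at some point.
p_extinct_one = 1 - (1 - p) ** years          # ~0.63

# Two independent worlds, pessimistically assuming neither can re-seed
# the other: extinction requires losing both during the period.
p_extinct_two = p_extinct_one ** 2            # ~0.40

# If a surviving world can repopulate the other, extinction roughly
# requires both to be hit before recovery: about p**2 per year.
p_both_same_year = p ** 2                     # 1e-6

print(f"one world:  {p_extinct_one:.2f}")
print(f"two worlds: {p_extinct_two:.2f} (no re-seeding)")
print(f"joint annual risk with re-seeding: {p_both_same_year:.0e}")
```

Depending on whether the worlds can re-seed each other, the reduction in one-world risk ranges from substantial to nearly total.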
Some joint risks will remain. As the article notes, a malevolent AI or an alien invasion is unlikely to leave one world alone. Dispersion does, however, enable better survival and more time to respond, especially since a settled Mars is likely just the start of settling other parts of the solar system. But even if there were no improvement against joint risks, the reduction of one-world risks would be significantly valuable.
The problem in this scenario seems rather to be the suspicion of an unacceptable inequality in life conditions, not unequal risk. But even if this inequality were unavoidable (which seems unlikely), it would have to weigh morally more strongly than the value of the change in existential risk. Since such changes correspond at least to the value of the lives of a fraction of humanity (plus future value), even fairly large inequalities might be acceptable. Especially since everybody benefits equally from the risk reduction, insofar as each individual gains the benefit of being part of a species less likely to go extinct.
In my view, concerns about existential risk are rational and could drive space settlement. It is not entirely clear that space is the best response to the overall risk profile (see the appendix below for some further considerations), but there is nothing ignoble about the vision even if better approaches are available. In fact, we should applaud attempts at thinking big and long-term: they make us better as a civilization. We should also scrutinize and analyse them carefully. We are at risk, but we also have some time and brainpower.
Appendix: What is the rational degree of long-term thinking?
Suppose we knew of a costly intervention that would double the lifespan of the biosphere. Should we do it?
The EU spends about 0.7% of GDP on environmental efforts. To a large extent this is because we do not want to suffer the bad effects of environmental degradation ourselves, but at least a fraction (let’s say 10%) of it is for the sake of the environment itself – we would want to pay this even if nobody ever visited the saved environment. If we imagine that this buys an extra year of good environment (pretty optimistic), we are willing to pay 0.07% of GDP (€11.5 billion) per year. Given that the biosphere may last between half a billion and a billion years into the future, we should be willing to pay 10 exaeuro (€10¹⁹) for all those future rainforests, coral reefs, deserts and whatnot. That is somewhere north of a hundred thousand times the current world economy.
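Spelling out the arithmetic (the €11.5 billion figure implies an EU GDP of roughly €16 trillion, which is consistent with the numbers above):

$$0.7\%\times 10\% = 0.07\%\ \text{of EU GDP}\approx €11.5\times 10^{9}\ \text{per year}$$

$$€11.5\times 10^{9}/\mathrm{yr}\times(0.5\text{–}1)\times 10^{9}\ \mathrm{yr}\approx(0.6\text{–}1.15)\times 10^{19}\,€\approx 10\ \text{exaeuro}.$$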
If Musk were to terraform Mars, that would be roughly equivalent to doubling the number of biosphere-years (let’s ignore that Mars is smaller, that it might not last quite as long, etc.). The above argument implies that it would actually be cost-effective to do so if the total cost were less than 10 exaeuro. (See also this post for some other ways of reasoning about the value of entire planets.)
Is that absurd?
One could say we should discount future values to avoid such absurdities. But this essentially means cutting off the future beyond a certain point: with a 1% discount rate, the entire future beyond 1000 years has value near zero. One can argue for discounting based on uncertainty about what will happen, but this again seems problematic in terms of cutting off the future: we can still predict that whoever is around in the far future will likely want to be alive. There are also philosophical arguments that we should not discount future lives – unlike other goods, they have intrinsic value. See section 5 of this paper for a review and discussion of the issues.
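To see how sharply exponential discounting truncates the future: a value realized t years from now gets weight (1+r)⁻ᵗ, so at r = 1%,

$$1.01^{-1000}\approx 4.8\times 10^{-5},\qquad 1.01^{-10\,000}\approx 10^{-43}.$$

Everything past a few millennia is effectively valued at zero, however large the value at stake.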
We may say that the problem here is not the value, but our confidence that we know enough right now to make decisions for the entire future. Waiting for future options can be smart, especially when we know we will likely be richer, more knowledgeable and wiser in the future. It might hence be rational to work more on setting the course and gaining resources now than on dealing with the actual risks. However, uncertainty about when risks will occur, and the diminishing marginal returns on mitigation work, can give reasons to act early.
When the stakes are enormous and early actions can shape what we do long into the future, the value of understanding what we are doing is correspondingly high. Hence we have a reason to think extremely long-term, including about how to evaluate the long term and what to plan for it. The above arguments suggest that this might have higher priority right now than actually doing long-term existential risk mitigation like settling Mars. We may still want to do short-term risk mitigation, especially of the larger, more obvious and more elastic risks. We may also want to improve humanity’s space abilities in any case, for scientific, economic, tractability, and freedom-of-action reasons: they are valuable, and their costs are drawn from a different “account” than much of the risk mitigation.