The goodness of being multi-planetary

The Economist has a leader “For life, not for an afterlife”, in which it argues that Elon Musk’s stated motivation to settle Mars – making humanity a multi-planetary species less likely to go extinct – is misguided: “Seeking to make Earth expendable is not a good reason to settle other planets”. Is it misguided, or is the Economist’s reasoning misguided?

The article, after cheering on Musk for being visionary, says:

How odd, then, that Mr Musk’s motivation is born in part of a fear as misplaced as it is striking. He portrays a Mars colony as a hedge against Earth-bound extinction. Science-fiction fans have long been familiar with this sort of angst about existential risks—in the 1950s Arthur C. Clarke told them that, confined to Earth “humanity had too many eggs in one rather fragile basket.” Others agree. Stephen Hawking, a noted physicist, is one of those given to such fits of the collywobbles. If humans stick to a single planet, he warns, they will be sitting ducks for a supervirus, a malevolent artificial intelligence or a nuclear war that could finish off the whole lot of them at any time.

Claptrap. It is true that, in the long run, Earth will become uninhabitable. But that long run is about a billion years. To concern oneself with such eventualities is to take an aversion to short-termism beyond the salutary. (For comparison, a billion years ago the most complex creature on the planet was a very simple seaweed.) Yes, a natural or maliciously designed pandemic might kill billions. So might a nuclear war; at a pinch climate change might wreak similar havoc. But extinction is more than just unprecedented mass mortality; it requires getting rid of everyone. Neither diseases nor wars do that.

Is worry about existential risk misplaced?

Musk is not concerned about the biosphere’s survival, but humanity’s. (See the appendix for some more thinking about the value of the biosphere and long-term thinking.)

The natural risks to our species – supernovas, asteroids, supervolcanoes, natural pandemics, climate crashes – are likely small: the average mammalian species survives for about a million years, and humanity, being widely dispersed, numerous and adaptable, plausibly has a smaller risk than this average. So we should not think the natural risk is much higher than one in a million per year (which, for a population of 7.2 billion, still equates to an expected 7,200 deaths per year, more than the annual death toll due to terrorism).
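As a sanity check, here is that arithmetic as a minimal Python sketch (the one-in-a-million rate and the 7.2 billion population are the assumptions from the paragraph above, not measured quantities):

```python
# Back-of-envelope: expected annual deaths implied by a natural
# extinction risk of about one in a million per year.
natural_risk_per_year = 1e-6   # assumed upper bound on natural extinction risk
world_population = 7.2e9       # assumed world population

expected_deaths = natural_risk_per_year * world_population
print(f"{expected_deaths:,.0f} expected deaths per year")  # 7,200
```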

Anthropogenic risk is another matter: because of human activity we are facing threats from nuclear war, climate change, engineered pandemic, and plausibly some future technologies. The probability of these threats occurring is far above one in a million per year. Even if they were not extinction-level they could destroy current civilization and make survivors far more vulnerable to subsequent natural or anthropogenic risks. Worse, anthropogenic risks can be directed and intentional: natural risks are not designed to do maximal damage or circumvent mitigation, but humans fairly regularly develop plans and tools to achieve such ends.

It is also worth considering that we introduce new risks at a far higher rate than nature does. A century ago nuclear weapons, CFCs, designer pathogens, gene drives, and methods for nudging asteroid orbits did not exist. We should not expect our current list of worries to be the final one: in coming centuries more risks will likely be introduced as we become more technologically capable.

The main reason to be concerned about existential risk is not probability but the immense disvalue of extinction. The death of present humanity would be worse than any past disaster in terms of death toll. But extinction also precludes all future generations and the value they could have achieved, annuls all our preferences and projects for the future, and potentially removes all conscious observers from the universe. This matters on both consequentialist and non-consequentialist ethical outlooks, and it seems to be a robust ethical conclusion that reducing existential risk has extremely high priority.

(There are limits to how much sacrifice we should be willing to make to reduce the risk: clearly, if precaution reduces our quality of life so much that continued existence holds less value than what the extended survival expectancy gains us, we have gone too far – but it could still be rationally and ethically right to undertake fairly demanding risk mitigation. We are certainly nowhere close to these extreme levels today.)

Spending a few billion dollars to reduce existential risk hence seems realistic and rational. One can still argue about whether Mars settlement is the best choice to reduce the risk.

Space as a refuge: does it make sense?

Finding a proper motivation for space settlement has not been easy. While it would be good to have space power satellites, space industry or off-planet refuges, that does not mean they are better than the alternatives. Musk has, for example, argued against space solar power and in favor of terrestrial solar power on economic grounds.

We could build refuges on Earth, which would protect us from some threats; how effective they would be has been debated. Their cost would be small compared to settling Mars, which is both remote and inhospitable. On the other hand, Mars has natural advantages in isolation: spacecraft are natural quarantines, communication delays limit dangerous information transmission, and weapons would have to be interplanetary.

Settling space may be more important than Mars: a Moon habitat would have nearly the same advantages. Asteroid mining and settlement would add advantages of dispersion and mobility. Mars just happens to be rich in some convenient resources.

The cause prioritization heuristic of looking for importance, tractability and neglectedness is useful here. Space refuges have the important properties mentioned above, but they improve endurance and resilience rather than risk avoidance or hazard management, and improvements in those two factors could trump resilience. However, such improvements are typically tied to particular risks (nuclear war, bioweapons, AI, whatever), while a refuge can be useful even against unconsidered risks. Until now space refuges have been largely neglected because they were intractable; Musk’s efforts are essentially aimed at making the problem tractable. That meta-problem also looked fairly intractable until recently, but might now be changing.
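To make the structure of the heuristic concrete, here is a toy scoring sketch. Every number below is invented purely for illustration – the point is only that a cause scores high when it does well on all three axes, and that tractability is the axis Musk’s efforts target:

```python
# Toy ITN (importance, tractability, neglectedness) scoring.
# All numbers are invented for illustration; only the structure matters.
causes = {
    "space refuges (pre-Musk)":  (0.8, 0.05, 0.9),  # important, neglected, intractable
    "space refuges (tractable)": (0.8, 0.40, 0.9),  # what making it tractable would buy
    "specific-risk mitigation":  (0.9, 0.60, 0.3),  # e.g. nuclear or biorisk work
}
for name, (importance, tractability, neglectedness) in causes.items():
    score = importance * tractability * neglectedness
    print(f"{name:27s} score = {score:.3f}")
```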

In summary, space refuges are not crazy, but it is somewhat unclear if they are the most bang for the buck. Making them more tractable might eventually make them a rational choice: it is early days.

The moral claim

The article continues:

If worrying about imminent extinction is unrealistic, trying to hide from it is ignoble. At the margins, it is better that the best and brightest share Earth’s risks than have a way to run away from them. Dream of Mars, by all means, but do so in a spirit of hope for new life, not fear of death.

It is not a given that it is ignoble to avoid a possible disaster by going somewhere else. Are villagers in Japan who leave the seaside to settle above the tsunami stones ignoble? Surely not, even if their former neighbors might find them too cautious and their move an annoyance to village unity.

It is ignoble only to abandon others, and this seems to be the article’s reading of the Mars plans. That reading certainly fits various science fiction tropes, but it is not plausible given the actual plans. Getting to and settling Mars is obviously risky, and it will be a long time before you can take business class to Mars and check in at the Olympus Hilton. You will also be out on a frontier, cut off from mainstream culture, the Internet and the stock market by around 15 minutes and 300 million km.
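A quick sanity check on those figures (300 million km sits in the middle of the actual Earth–Mars range, which varies between roughly 55 and 400 million km, so the delay varies too):

```python
# One-way light-speed communication delay between Earth and Mars.
distance_m = 300e9     # ~300 million km, the figure from the text
c = 299_792_458        # speed of light in m/s

delay_minutes = distance_m / c / 60
print(f"one-way delay: {delay_minutes:.1f} minutes")  # ~16.7 minutes
```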

It is not going to be the best and brightest that go to Mars, but the most risk-taking and driven. Surely they will be competent (or the Mars colony will fail), but terrestrial colonialism did not drain Europe of competence until the colonies had become developed nations themselves.

But let us assume Mars has turned into a second Silicon Valley or Monaco, full of the best and brightest (say, escaping stifling regulations on Earth to live in a libertarian transhumanist utopia). Is it worse that the Martians are not sharing the earthlings’ risks?

Threats that would destroy one world would no longer be existential threats, and even civilization-ending disasters would leave somebody able to send relief. Assuming the risk per year is the same, the total risk to humanity has been at least halved.
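“At least halved” is conservative: if the per-year risks were perfectly correlated, a second world merely halves the expected loss, but to the extent they are independent, extinction requires both worlds to fail. A toy model with a purely illustrative per-world risk:

```python
# Toy model of annual extinction risk with one vs. two worlds.
p = 1e-3  # assumed per-world, per-year extinction probability (illustrative)

one_world_risk = p        # extinction iff the single world fails
two_world_risk = p ** 2   # with independent risks, both worlds must fail

print(f"one world:  {one_world_risk:.0e}")   # 1e-03
print(f"two worlds: {two_world_risk:.0e}")   # 1e-06 - far better than halving
```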

There will still be joint risks: as the article notes, a malevolent AI or an alien invasion is unlikely to leave one world alone. Dispersion nevertheless enables better survival and more time to respond, especially since a settled Mars is likely just the start of settling other parts of the solar system. But even if there were no improvement against joint risks, the reduction of one-world risks would still be significantly valuable.

The problem in this scenario seems rather to be the suspicion of an unacceptable inequality in life conditions than unequal risk. But even if such inequality were unavoidable (which seems unlikely), for the objection to succeed it would have to weigh more heavily than the moral value of the change in existential risk. Since that change corresponds at least to the value of the lives of a fraction of humanity (plus future value), even fairly large inequalities might be acceptable – especially since everybody benefits from the risk reduction, insofar as each individual gains from being part of a species less likely to go extinct.

Conclusions

In my view, concerns about existential risk are rational and can motivate space settlement. It is not entirely clear that space is the best response to the overall risk profile (see the appendix below for some further considerations), but there is nothing ignoble about the vision even if better approaches are available. In fact, we should applaud attempts at thinking big and long-term: as a civilization they make us better. We should also scrutinize and analyze them carefully. We are at risk, but we also have some time and brainpower.

Appendix: What is the rational degree of long-term thinking?

Suppose we did know about a costly intervention that would double the lifespan of the biosphere. Should we do it?

The EU spends about 0.7% of GDP on environmental efforts. To a large extent this is because we do not want to suffer the bad effects of environmental degradation ourselves, but at least a fraction (let’s say 10%) of it is for the sake of the environment itself – we would want to pay this even if nobody visited the saved environment. If we imagine that this buys an extra year of good environment (pretty optimistic), we are willing to pay 0.07% of GDP (€11.5 billion) per year. Given that the biosphere may last between half a billion and a billion years into the future, we should be willing to pay 10 exaeuro (€10^19) for all those future rainforests, coral reefs, deserts and whatnot. That is somewhere north of a hundred thousand times the current world economy.
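A minimal sketch of the estimate, assuming an EU GDP of roughly €16.4 trillion (the figure implied by €11.5 billion being 0.07% of GDP):

```python
# Reproducing the back-of-envelope biosphere valuation above.
eu_gdp = 16.4e12             # assumed EU GDP in euro (implied by the text's figures)
env_spending_share = 0.007   # ~0.7% of GDP on environmental efforts
intrinsic_fraction = 0.10    # assumed share paid for the environment's own sake

per_year = eu_gdp * env_spending_share * intrinsic_fraction
total = per_year * 1e9       # a billion years of remaining biosphere, upper end

print(f"per year: {per_year:.3g} euro")  # ~1.15e+10, i.e. about 11.5 billion euro
print(f"total:    {total:.3g} euro")     # ~1.15e+19, i.e. about 10 exaeuro
```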

If Musk were to terraform Mars, that would be roughly equivalent to doubling the number of biosphere-years (let’s ignore that Mars is smaller, that it might not last quite as long, etc.). The above argument implies that it would actually be cost-effective to do so if the total cost were less than 10 exaeuro. (See also this post for some other ways of reasoning about the value of entire planets.)

Is that absurd?

One could say we should discount future values to avoid such absurdities. But this essentially cuts off the future beyond a certain point: with a 1% discount rate, the entire future beyond 1,000 years has a present value near zero. One can argue for discounting based on uncertainty about what will happen, but this again seems problematic in terms of cutting off the future: we can still predict that whoever is around in the far future likely wants to be alive. There are also philosophical arguments that we should not discount future lives – unlike other goods, they have intrinsic value. See section 5 of this paper for a review and discussion of the issues.
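A short sketch of why a constant discount rate effectively truncates the future: the present-value weight of a year shrinks geometrically, so at 1% per year everything beyond about a millennium contributes essentially nothing:

```python
# Present-value weight of a benefit t years out, at a 1% annual discount rate.
discount_rate = 0.01

for years in (100, 1_000, 10_000):
    weight = 1 / (1 + discount_rate) ** years
    print(f"{years:>6} years out: weight = {weight:.1e}")
# ~3.7e-01, ~4.8e-05, ~6.1e-44 - the far future is effectively cut off
```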

We may say that the problem here is not the value, but the confidence that we know enough right now to make decisions for the entire future. Waiting for future options can be smart, especially when we will likely be richer, more knowledgeable and wiser in the future. It might hence be rational to work more on setting course and gaining resources now than on dealing with the actual risks. However, uncertainty about when risks will occur, and the diminishing marginal returns on mitigation work, can give reasons to act early.

When the stakes are enormous and early actions can shape what we do long into the future, the value of understanding what we are doing is correspondingly high. Hence we have reason to think extremely long-term, including about how to evaluate the long term and what to plan for it. The above arguments suggest that this might currently have higher priority than actually doing long-term existential risk mitigation like settling Mars. We may still want to do short-term risk mitigation, especially of the larger, more obvious and more elastic risks (those most responsive to mitigation). We may also want to improve humanity’s space abilities in any case, for scientific, economic, tractability, and freedom-of-action reasons: they are valuable, and their costs are drawn from a different “account” than much of the risk mitigation.

12 Comments on this post

  1. I absolutely agree it’s stupid to colonize Mars for the sake of having a “refuge” from potential catastrophes that could befall Earth. There is literally nothing that could happen on Earth that would make it less hospitable to life than Mars is. Even after a full nuclear exchange, a massive pandemic and your other favorite next-millennium doomsdays, the Earth will seem like paradise compared to Mars. If bad actors have the power to wipe out humans on Earth (including everyone in bunkers and submarines), then wiping out Mars colonies will be trivial: delivering a few warheads there is not much more expensive than delivering them to terrestrial targets.

    These guys are trying to rationalize wanting to settle Mars, and they shouldn’t have to. Settling Mars is cool. And something being cool is sometimes a good reason to do it.

    For the record, I feel differently about interstellar colonization. Once we go interstellar, I do think we are genuinely safer, and colonizing Mars is a reasonable practice run at doing the much harder thing.

    1. Of course, developing the tools for thriving on Mars might actually make Earth far more resilient. Having good artificial biospheres is a hedge against failure of the terrestrial biosphere. Developing enclosed, climate-controlled, self-sufficient habitats is perhaps easier to motivate for space, but once we have them we have great refuges for Earth, and so on.

      I am not sure how much existential risk is a rationalisation in Musk’s case; he seems pretty sincere about it. No doubt coolness or the challenge also play a role, but does it really matter which motivation is the “true” one, insofar as the same end is achieved?

  2. If existential risks for humanity were “external” to the behaviour, actions and ways of life of its members, i.e. if they consisted for the most part in the “intrinsic” natural hostility of Earth towards the civilizational development of humanity, then extraterrestrial colonization would be justified on the ground that, all things being equal, any species is morally entitled to seek survival. But last time I checked, this was not the case. So either Elon Musk is overlooking the main source of mankind-specific existential risks, i.e. our failure to cooperate and coordinate our actions, behaviour and ways of life so as both to mitigate existential risks and to become resilient to them, or he thinks that “going offworld” will contribute positively to mitigating, and becoming resilient to, such risks. But that seems absurd; there is no reason why moving to a greener pasture should help you get along with your neighbours if you have trouble getting on their wavelength in the first place.

    In light of this consideration, Elon Musk’s project looks like a pompous way of buckling under.

    1. “Any species is morally entitled to seek survival” – that is an interesting statement, but how do you support it? One can construct thought-experiment species that would have rational or moral reasons not to seek survival (e.g. a species of anguished vampires who experience lives not worth living by their own standards and whose survival depends on performing actions they regard as immoral).

      Going off-world will not directly improve coordination or cooperation, but neither would a pandemic vaccine. That space does not fix underlying factors is not a solid argument against it, unless the resources involved could clearly be more effectively spent on such factors.

      1. Looks like you’re missing the “all things being equal” qualification. Besides, moral reasons are reasons, and hence have defeaters. I am not much interested in having a meta-ethical discussion here and now about defeaters for moral reasons, though.

        “…unless the resources involved could clearly be more effectively spent on such factors”. Bingo. Social injustice, the environmental crisis, infrastructure, you name it. But of course these don’t fit too well into the epic narrative broached by people like Musk, so they don’t sell as much.

        In some respects people like Musk have their cake cut for them if they get us to take a particular view on the ‘lifeboat’ dilemma: abandon those who cannot make it to the lifeboat, or risk saving no one? The particular view they need to fund their pseudo-scientific programs is that we pick the first horn, and their marketing strategy for getting us there is to show that existential risks threatening the Earth are likely to happen no matter what we do, and therefore that we are better off jumping into the lifeboat. But given the considerations raised in my previous reply, i.e. that we carry the potential for *these very* existential risks wherever we go, Musk’s lifeboat looks less like a way of escaping a shipwreck and more like a way of segmenting mankind into “those that mankind can afford to expose to the risk” and “those that mankind cannot afford to expose”.

        Now the problem is not only that there may be no moral standpoint from which it could in principle be decided who gets exposed and who gets away. The problem is more vicious: by spreading the belief that existential risks cannot be sufficiently mitigated on Earth and thus require going offworld, people like Musk will at some point be creating more competition between people for making it into the “those that mankind cannot afford to expose” category, thus increasing the very existential risks they are trying to combat.

        How ironic.

        Objection: Yes, but what if there is no need to choose, and all mankind makes it to the lifeboat? What if the size of the lifeboat equals the size of the wrecking ship?
        Reply: No point trading one ship for another if the passengers are the core of the issue. Better to fix the passengers’ cooperative inclinations and values.

    1. While I think the article has a good point (I was alluding to it myself when I mentioned how out of the way Mars is), it seems rather bold to assume that this particular socio-economic prediction is solid enough to preclude space colonization. Indeed, this kind of prediction would nearly seem to rule out colonizing America if you were a European.

      People have also done things that did not make economic sense at the time, yet through network externalities or other reasons became big things. Just consider open source software, Wikipedia or the global Internet, which I think most economists would have laughed at as real possibilities if asked in the 70s: clearly such things would have no economic incentive to expand, nor be able to raise the capital needed to build something actually functional.

      It is very hard to get a plausible business model for space settlement. However, being an existential risk mitigation hobby project for a particular wealthy individual *is* a business model, if only a narrow and fragile one. But that might well be the wedge that enables other models to occur symbiotically.

      1. Here’s another piece I came across yesterday – http://slatestarcodex.com/2014/07/21/promising-the-moon/

        Non-economic motives like extinction risk mitigation or wanting publicity might tip the balance in favor of colonization if the balance is fairly close already. We can measure whether it is a close call by looking at much less hostile places like Antarctica. If you want to mitigate extinction risk, the colony needs to be economically autarkic. We have no Antarctic colony that is economically self-supporting, let alone self-sufficient. So realistically, we are nowhere remotely near being able to use space colonization for reducing extinction risk.

        I’d say one of the more useful avenues for reducing extinction risk is reducing the risk to our countries from nuclear war – http://www.vox.com/2015/6/29/8845913/russia-war – Electing Donald Trump might help with that, since maybe he would kick the Baltic states out of NATO and help Russia feel more secure.

  3. The politics of existential risk get no attention in this post. Put simply: if humans base their entire social order on the avoidance or mitigation of existential risk, then they concede political power to those who define the existential risk. The recently proposed referendum in California on killing homosexuals shows what can go wrong. It is the justification that is relevant here: the good people of California are about to be slain by a wrathful God. Extinction, in other words.

    a) The abominable crime against nature known as buggery, called also sodomy, is a monstrous evil that Almighty God, giver of freedom and liberty, commands us to suppress on pain of our utter destruction even as he overthrew Sodom and Gomorrha.
    b) Seeing that it is better that offenders should die rather than that all of us should be killed by God’s just wrath against us for the folly of tolerating wickedness in our midst, the People of California wisely command, in the fear of God, that any person who willingly touches another person of the same gender for purposes of sexual gratification be put to death by bullets to the head or by any other convenient method.

    The courts declared the proposed Sodomite Suppression Act “patently unconstitutional”, and directed the State of California to remove it from the ballot. However, we can be sure that in the future there will be others, religious or not, who appeal to the threat of extinction to justify specific policies. If extinction-avoidance is the supreme value for human society, then we offer those individuals and groups a quick path to power.

  4. I note that the question of whether human extinction is desirable is not considered here. The issue is usually associated with those who see more value in the biosphere than in humans, but there are also political motives to consider, such as harm derived from inequality. Although statistics don’t go back very far, maybe 200 years in some western countries, there does seem to be a long-term trend of increasing inequality. Projecting that far into the future, there will presumably be horrific mass suffering due to inequality, while a tiny minority has almost unimaginable wealth and privilege. In such circumstances, it is arguable that it is better to stop the process before it gets that far.

    That applies in principle to all negative long-term social trends. Anders Sandberg is a meliorist, and a meliorist perspective distorts the analysis. Assuming that things are getting better and better obviously leads to the conclusion that humans should stick around and wait for the (assumed) future benefits.

    1. There are a couple of ways you can favor human extinction. You can believe, as I think David Benatar and some others do, that literally everyone’s life is bad. This seems completely senseless. OTOH, you can believe that it is better that some lives do not exist, but that other lives are worthwhile and the overall value of humanity is unclear. If you take this “soft” view, then it follows that you favor eugenics to prevent at least certain types of miserable or unworthy lives from ever being born. But I cannot figure out who should be targeted by such eugenics. So I think these views are all wrong; it is good that humanity continues.

      It is good, but I’m not sure how good. I think moral realism is wrong. What determines the correct discount rate has to be some theory of why the institutions or people who support a given rate will have the advantage, and why they will grow powerful and implement their views. I have not seen much discussion of this, and what I have seen has been malarkey.

  5. The ethics of this specific Mars project need to be considered, separately from the possible function of Mars as a refuge. Discussion of long-term survival versus extinction is inevitably speculative, but the politics and geopolitics of Mars colonisation are relevant issues already. They are relevant even though much of the suggested technology is unproven, and the proposed time frame of several decades unrealistic.

    Elon Musk is a US citizen, and he and his business firms are subject to US law. If he succeeds in setting up a permanent settlement on Mars, it will fall under US jurisdiction: what he is proposing is de facto US colonisation of Mars. There is a treaty regulating extraterrestrial activities by states, and single-state colonisation would probably violate it. Even if the treaty does not specifically prohibit such action, American colonisation of Mars would be a justification for war against the United States, mainly because it implies the exclusion of other states, which in turn implies a readiness to use force against them. If there is a US Mars colony and a Chinese spacecraft tries to land beside it, the US will probably destroy the Chinese spacecraft. They will do so even faster if it is a North Korean or Caliphate craft.

    There is a significant issue here, which gets little attention. The first entity to colonise space can monopolise all future colonisation. That is because a significant presence in space can result in envelopment – the ability to surround the earth, and prevent anyone else from leaving it. This is an exceptional case in geopolitics: envelopment is not possible at planetary level on earth, simply because the earth is a sphere. Any attempt to encircle, without direct conquest, leads to a two-hemisphere stalemate. That has only happened twice: at the height of Axis expansion in 1942-1943, and during the Cold War.

    If a state can colonise space before the others, block other attempts, and then massively expand its population and resources, it can overwhelm other states without direct conquest. If the future earth has a population of 8 billion, and the United States including planetary colonies has 800 billion, then other states and societies will be reduced to insignificance. This principle does not apply only to states, but also to religion and ideology. If the first colonisation of space is Islamic, then in principle all future human expansion will be too.

    If we leave the US government aside and look only at Elon Musk, his organisations, and his active supporters, we can see that such considerations are implicit in his colonisation model. His motives are malicious: firstly, because he wants to establish an unjust society on Mars – a right-wing society on a liberal free-market model; secondly, because he wants to exclude others from Mars. We can infer that from his refusal to internationalise the project, to work with other states and organisations, and to submit the project to some form of international supervision. We can also infer it from the selectivity of the project:

    Musk told me this first group of settlers will need to pay their own way. ‘There needs to be an intersection of the set of people who wish to go, and the set of people who can afford to go,’ he said. ‘And that intersection of sets has to be enough to establish a self-sustaining civilisation. My rough guess is that for a half-million dollars, there are enough people that could afford to go and would want to go. But it’s not going to be a vacation jaunt. It’s going to be saving up all your money and selling all your stuff, like when people moved to the early American colonies.’
    Elon Musk interview, Aeon, 30 September 2014.

    Most of the earth’s population does not have that money. Those who do live mainly in western OECD states. And since the first colonists must communicate with each other, we can also assume that they will be English speakers.

    Elon Musk is running an American Mars colonisation project, motivated by an American political ideology (active supporters of planetary colonisation are often libertarians). Unless and until he gives watertight guarantees that this will not be at the expense of others, there is reason enough to resist the project. It might be open to legal challenge in the International Court, but other states seem to have little interest – there are other conflicts to worry about. Probably political opposition, legal or illegal, is the only realistic way to stop the project.
