
Would it be bad if humanity were to become extinct?

That’s (roughly) the topic of a panel held at Sydney’s Festival of Dangerous Ideas. It is a topical question in this age of potentially catastrophic climate change. There is no realistic risk that climate change threatens life on this planet, after all, but it could threaten human existence (not directly, but by triggering widespread conflicts over scarce resources). The Astronomer Royal, Martin Rees, has dubbed this our final century, envisaging other means whereby human existence could end. So: would it matter?

The question seems to parallel the question concerning the badness of personal death. Epicurus famously argued that death was nothing to be feared because where we are, death is not, and where death is, we are not. Roughly, the idea seems to be that death is not something that can matter for us, because for something to matter for us, we have to be around. So death – rather than dying – is not something to be feared. However, many philosophers think we are harmed by death: for instance, by the complete cessation of our capacities to achieve our goals.

I think that reflecting on the end of humanity gives some support to views according to which it is not death itself that matters; rather it is the cessation of some kind of ongoing project. Compare two different scenarios in which humanity comes to an end. In scenario 1, humanity comes to an end in 300 years’ time when a large asteroid collides with the Earth, causing immediate devastation and a long winter in which the remnants of humanity die off. In scenario 2, humanity comes to an end because we encounter and interbreed with space-faring aliens. I think it is clear that scenario 2 is far preferable to scenario 1, and not just because scenario 1 involves suffering (indeed, if we remove the suffering from scenario 1 – the asteroid somehow triggers instant and painless death – 2 remains far preferable to 1). That suggests that what matters for us is not whether humanity comes to an end, but whether our current projects are in vain. If everything we strive for makes no difference, some kind of meaninglessness seems to threaten, but if our projects continue then they might matter beyond their more immediate effects.

In the scenario I painted as the preferable one, humanity dies out but is continuous with the new species that takes its place. I think we can even remove the continuity and yet have a scenario in which extinction doesn’t threaten meaninglessness. Suppose we die out – through asteroid strike, say – and yet our accumulated knowledge and culture survives (perhaps future alien archaeologists visit Earth and retrieve it from our libraries, our computer networks, and so on). Suppose that it becomes part of a greater, say galactic, cultural conversation. Though my intuition is that this scenario is less preferable than the one in which humanity is itself absorbed into the galactic conversation, it is not one in which extinction is a catastrophe. To me, that suggests it is the cessation of our cultural – in the broadest sense of ‘culture’ – conversation that accounts for a great deal of the harm in species extinction.

Of course, one day humanity will die out, one way or another, and these reflections offer at best small consolation: the heat death of the universe entails that all conversations must one day irrevocably come to an end. So we must come to terms with our eventual extinction, and find some source of meaning elsewhere than in the participation in an indefinitely long cultural conversation.


15 Comments on this post

  1. It’s a rather lovely image that the tragedy of human extinction is not pain/suffering/etc but that an interesting conversation has come to an end. Reminds me of something from Waugh.*

    *Evelyn, that is, not Steve. Just to clarify for the Australian contingent.

  2. I like the conversation idea. It also helps explain why being around for a shorter time might be worse than being around for a longer time.

    The heat death ends all conversation, but there can be a lot of conversation (which covers much – from actual conversation to meaningful projects across time) before entropy wins. There are somewhere around 10^80-10^110 irreversible computations that can be done with the mass-energy of our local supercluster if we use it wisely. It might well be that this potential cosmic conversation will contain meaningful insights that are both contingent upon past knowledge (i.e. what we can contribute now) and bring meaning to past and future states – whatever profound insights there might be in this future might perhaps bring meaning even in an entropic universe.
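    A back-of-envelope sketch of where a figure in that range can come from (assuming the Landauer limit at the cosmic microwave background temperature, and a local-supercluster mass of roughly 10^15 solar masses – both ballpark assumptions supplied here, not figures from the comment above):

    ```python
    import math

    # Landauer limit: erasing one bit costs at least k_B * T * ln(2) of energy.
    K_B = 1.380649e-23   # Boltzmann constant, J/K
    T_CMB = 2.725        # cosmic microwave background temperature, K
    C = 2.998e8          # speed of light, m/s

    # Assumed mass of the local supercluster: ~1e15 solar masses (ballpark).
    M_SUN = 1.989e30     # kg
    mass = 1e15 * M_SUN  # kg

    energy = mass * C**2                         # total mass-energy, J
    energy_per_bit = K_B * T_CMB * math.log(2)   # J per irreversible bit operation
    bit_operations = energy / energy_per_bit

    print(f"~10^{math.log10(bit_operations):.0f} irreversible bit operations")
    ```

    Under these assumptions the sketch lands near the bottom of the quoted 10^80–10^110 range; the higher figures come from schemes that compute at temperatures far below the CMB as the universe cools.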

  3. Haven’t you missed out a third possibility, which’d explain the difference between your scenarios (1) and (2) quite straightforwardly – that (1) is bad because it’s bad for actual people, each of whom has projects, desires, and all the rest of it, and each of whom might suffer as part of the process of humanity being extinguished? That wouldn’t happen in (2), because people’d – presumably – enjoy the process of diluting their genetic humanity. (If breeding with the aliens was forced, of course, that’d be a different matter – but then it’s not easy to see why the loss of a characteristically human genome would be the biggest concern – or a concern at all, unless we’re assuming that its loss would be bad, which seems like begging the question.)

    Indeed, we don’t even have to hypothesise breeding with aliens. H. sapiens sapiens has existed for – let’s say – about 250 000 years. Before that, our ancestors were a different species. Medical science might mean that the depredations of natural selection aren’t quite as fast-acting, but it’s still reasonable to imagine that the species will have evolved into another species at some point in the next million years or so. So H. sapiens sapiens will be lost to time. Should we fight to minimise the risk of that happening? It’s hard to see why.

    If your asteroid were discovered tomorrow, and scheduled to hit us next Tuesday, I can see that it would be bad for humanity qua currently existing humans; but I have to admit that I’d struggle to weep for the loss of the species qua species. It’s always struck me as a bit self-important for us, as a species, to think otherwise; maybe I’ve missed something, though.

    1. I considered a version of scenario 1 in which suffering was eliminated, Iain. Suffering was one of the harms you identified; the other (implied by the reference to projects) is leaving the strivings of individuals uncompleted. If you think that a harm of death is this arbitrary cessation of individual projects, then why not collective projects? Recognising the existence of collective projects is compatible with any social ontology, by the way, including metaphysical individualism.

      My intuition (for what that’s worth) is that even if the suffering caused by a realistic catastrophe is more significant than the cessation of collective projects, the latter has some weight. An ecstatic death at a great age for each individual might not be bad for that individual, but there would be some loss in all individuals undergoing such a death.

  4. Interesting post, Neil. Thank you.
    One brief comment: you write – “Of course, one day humanity will die out, one way or another, and these reflections offer at best small consolation…”
    I wonder, Neil, whether this doesn’t beg the question: only if we think human extinction is a bad thing will we need consoling. And I’m pretty much with Iain on this.
    But I agree absolutely with you that if we want to find meaning in life, we have to look elsewhere. (Which no doubt begs other questions….).

  5. “That suggests that what matters for us is not whether humanity comes to an end, but whether our current projects are in vain. If everything we strive for makes no difference, some kind of meaninglessness seems to threaten, but if our projects continue then they might matter beyond their more immediate effects.”

    Thank you for this interesting post. There seems to be a paradox here:
    (i) To assess whether humanity’s project is in vain, there must be someone out there to assess it, using some metric that is intelligible to humanity.
    (ii) If the reason why humanity’s project ends is that humanity disappears, there is no guarantee that whoever is out there to assess whether it was in vain will use a metric that is intelligible to humanity; for humanity will not be there to confirm it.
    (iii) So there is no guarantee that humanity’s project will be assessable as in vain or not after the end of humanity, and no guarantee that the assessment would be, or would have been, intelligible to us.
    Does this argument leave anything left from the original question?

    1. The more general point seems to be: if what we are after is an evaluation of some kind of narrative we see ourselves as part of, we need the narrative to come to an end. But since we cannot know how the end of humanity relates to the end of the narrative we see ourselves as part of, we cannot know whether this narrative is to come to an end in the first place. So we have to either restrict our evaluation to the part of the narrative we will have contributed to, or refrain from trying to evaluate it. In the latter case, there is no question, and in the former case, the question is simply whether our contribution, regardless of its possible consequences, was and is good. But to ask this we don’t need to suppose that humanity will ever end; we restrict our evaluation as we please.

      1. We *know* that the narrative will come to an end: physics tells us that. In assessing whether that would be a good thing, we needn’t assume the position of someone outside the narrative. We can do it in imagination. Of course imagination has its limitations, but if that’s the basis for the criticism, it motivates general scepticism. In deciding what to do, we need to evaluate counterfactuals (what would happen if I accepted this job offer? If I bought this house? If I had the salad?) and we do that by imagining the consequences of our actions. We do this all the time, and generally pretty successfully. Of course the kind of case we are considering here is quite distant from our actual experience, and our imaginative capacities may not be up to it. Some philosophers have worried that the methodology I am employing here – considering scenarios and asking how we feel about them – is highly unreliable, because the scenarios are so distant from the kinds of things our cognitive capacities evolved to deal with. That’s a criticism I take seriously, but we are stuck with this kind of methodology for these questions.

        1. Thank you for your reply, Nelly. As I understand it you are rejecting premise (i), relying on the assumption that we can evaluate the narrative we see ourselves as contributing to from the inside and adjudicate whether it would have been in vain if we were to evaluate it when humanity disappears.

          So the way you (and possibly Martin Rees, whom I have not read) are setting up the question does not merely rely on the methodology mentioned in your reply, namely, by comparing counterfactual scenarios. I have no problem with this methodology. What my first reply points out, I believe, is that this methodology might not be very fruitful when you compare distant futures under their evaluative properties. Let me try to say why.

          When comparing distant futures, you need to iterate counterfactual shifts; you need counterfactual scenarios constructed out of future scenarios [constructed out of a scenario modelling the actual world]. Thus you need not merely consider what would or could have happened given the actual state of the world; you need to consider what would or could happen given some future state of the world [given the actual world].

          Now, this might not be a problem when the properties and facts we want to keep track of across counterfactual shifts are stable properties, for example natural properties, fundamental properties, and all the properties that are entirely dependent upon them. But when you try to compare different distant futures under the aspect of their evaluative properties, you risk ending up with problems such as the one I raised in my first post. Let me try to make it even more explicit.

          Evaluative properties are not stable in the sense that they are: essentially relational (if x is good, then there is someone for whom x is good); essentially dependent on non-evaluative properties of organisms (if x is good, then it is in virtue of some non-evaluative property y).

          Now to cut a long story short: the more counterfactual shifts you make, the more specific you need to be when describing the scenarios whose evaluative properties you want to base your comparison on. If the counterfactual shifts significantly alter parameters pertaining to the ascription of value (i.e. for whom?; in virtue of what non-evaluative property?), the scenarios you are trying to compare are no longer commensurable under their evaluative properties, hence there is no intelligible comparison to be made at all.

          This is where premise (i) enters the stage. Premise (i) claims that the only way to guarantee commensurability of evaluative properties across distant futures is to have some witness (the same) in every distant future. This witness need not be a human if the distant futures we are considering all imply the end of humanity, but it must be similar to a human in the sense that it must be an organism whose existence entails that the structure of evaluative properties at every distant future is similar to the structure of evaluative properties at the actual world, thereby ensuring comparability.

          So without the hypothesis laid out in (i), the question “is humanity’s project F”, where “F” picks out an evaluative property, might not be meaningful.

          Now the other issue is that even if you build this hypothesis right into your description of all the distant futures, ensuring evaluative comparability, you (and Martin Rees) still need to tell us more about how the evaluative properties to be compared relate to humanity’s project: does the comparison depend on humanity’s project’s being resumable by non-humans? Does it depend on humanity’s project’s being carried on without being resumable (i.e. could it inspire a similar project, and would this make it good for this reason)?

          But when you start asking such questions, you might as well stop trying to compare distant futures and just focus on the project per se, asking whether there really is one, what it depends on, and whether it is valuable for its own sake from the point of view of the present, actual world.

          This is why the initial question sounds to me to be entertaining, but hopeless. But maybe you tell me which part of the argument for (i) you would reject?

          1. Being a project of F is not an evaluative property; it is a descriptive property. The claim that F’s having projects is necessary for F to have a meaningful life is of course a normative claim, but we don’t need to know much about F to evaluate it. If F believes that having projects is not necessary for a meaningful life, then either F is wrong or F is not the kind of being that can have a meaningful life. While I accept the claim that something is good only if it is good for someone, I think that we don’t need to know much about the kind of being it is to know, for some goods, whether more or less of it would be more or less good for an entity like that.

            1. The initial question was: “Would it be bad if humanity were to become extinct?” It turned into “Is humanity’s project in vain?” The use of the two terms ‘bad’ and ‘in vain’ presupposes that some evaluative comparisons are to be made.

              I think you need more.

  6. Given Nick Bostrom’s eloquent computer simulation theory, the suggestion that the Greisen–Zatsepin–Kuzmin cutoff is a limitation that cosmic-ray particles have when they are part of a simulation, and Sylvester James Gates’s discovery of doubly-even self-dual error-correcting block code in the fabric of the cosmos, it is likely that machines will survive human extinction and simulate humans of the past in the real future. Dr. Michio Kaku has posited that civilizations above Type III on the Kardashev scale can escape the heat death of their universe by traveling to other universes through holes in space. Civilizations consisting of machines could achieve that level of advancement before the heat death of their universe. So the human narrative and our cultural conversation could be simulated ad infinitum, long after real humans no longer exist, in simulations that are indistinguishable from reality for the simulants philosophising about their illusory existence and imaginary death.

  7. Thanks for this very interesting post, Neil. You didn’t say much about suffering. I think it’s worth considering the claim that the badness of the suffering humanity will continue to experience, at least for a good while, outweighs the value of anything good about its continued existence.

    1. Shades of David Benatar, Roger? It is very hard to know what to make of the relevant research (especially if, like me, you are sceptical about the powers of introspection), but the work on subjective well-being appears to indicate that most people are pretty happy most of the time, both in terms of hedonic state and in terms of how they judge their lives to be going. If we can take that research at face value, the undoubted suffering of many may be outweighed by the well-being of the majority. It is also worth bearing in mind, of course, that people typically cling tenaciously to life, even those who are certainly suffering and have little prospect of significant improvement (again, it is not easy to know what to make of this phenomenon: perhaps it indicates a revealed rational preference for life, or perhaps it indicates nothing more than an evolutionarily programmed mechanism).
