Guest Post: High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation

Written by David Thorstad, Global Priorities Institute; Junior Research Fellow, Kellogg College

This post is based on my paper “High risk, low reward: A challenge to the astronomical value of existential risk mitigation,” forthcoming in Philosophy and Public Affairs. The full paper is available here, and I have also written a blog series about the paper here.

Derek Parfit (1984) asks us to compare two scenarios. In the first, a war kills 99% of all living humans. This would be a great catastrophe – far beyond anything humanity has ever experienced. But human civilization could, and likely would, be rebuilt.

In the second scenario, a war kills 100% of all living humans. This, Parfit urges, would be a far greater catastrophe, for in this scenario the entire human civilization would cease to exist. The world would perhaps never again know science, art, mathematics or philosophy. Our projects would be forever incomplete, and our cities ground to dust. Humanity would never settle the stars. The untold multitudes of descendants we could have left behind would instead never be born.

This thought has driven many philosophers to emphasize the importance of preventing existential risks, risks of catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, we might regulate weapons of mass destruction or seek to reduce what some see as a risk of extinction caused by rogue artificial intelligence.

Many philosophers think two things about existential risk. First, it is not only valuable, but astronomically valuable to do what we can to mitigate existential risk. After all, the future may hold unfathomable amounts of value, and existential risks threaten to reduce that value to naught. Call this the astronomical value thesis.

Second, increasingly many philosophers hold that humanity faces high levels of existential risk. In his bestselling book, The Precipice, Toby Ord (2020) puts the risk of existential catastrophe by 2100 at one in six: Russian roulette. Attendees at an existential risk conference at Oxford put existential risk by 2100 at nearly one in five (Sandberg and Bostrom 2008). And the Astronomer Royal, Martin Rees (2003), puts the risk of civilizational collapse by 2100 at fifty-fifty: a coinflip. Let existential risk pessimism be the claim that per-century levels of existential risk are very high.

Surely the following is an obvious truth: existential risk pessimism supports the astronomical value thesis. If we know anything about risks, it is that it is more important to mitigate large risks than it is to mitigate small risks. This means that defenders of the astronomical value thesis should be pessimists, aiming to convince us that humanity’s situation is dire, and opponents should be optimists, aiming to convince us that things really are not so bad.

In my paper, I argue that every word in the previous paragraph is false. At best, existential risk pessimism has no bearing on the astronomical value thesis. Across a range of modelling assumptions, matters are worse than this: existential risk pessimism strongly reduces the value of existential risk mitigation, often strongly enough to scuttle the astronomical value thesis singlehandedly. (See end notes for examples, and see the full paper for further details).

In the full paper, I explore a range of models and argue that there is only one viable way to reconcile existential risk pessimism with the astronomical value thesis. This is the time of perils hypothesis on which levels of existential risk are high now, but will soon drop to a permanently low level if only we survive the next few perilous centuries. However, I argue, the time of perils hypothesis is unlikely to be true, so there is likely an enduring tension between existential risk pessimism and the astronomical value thesis.

This tension has important philosophical implications. First, it means that unless more is said, many parties to debates about existential risk may have been arguing on behalf of their opponents. To many, it has seemed that a good way to support the moral importance of existential risk mitigation is to make alarmist predictions about the levels of existential risk facing humanity today, and that a good way to oppose the moral importance of existential risk mitigation is to argue that existential risk is in fact much lower than alarmists claim. However, unless more is said, matters are exactly the reverse: arguing that existential risk is high strongly reduces the value of existential risk mitigation, whereas arguing that existential risk is low strongly increases the value of existential risk mitigation.

Second, there has been a wave of recent support for longtermism, the doctrine that positively influencing the long-term future is a key moral priority of our time. When pressed to recommend concrete actions we can take to improve the long-term future of humanity, longtermists often point to existential risk mitigation. By the astronomical value thesis, longtermists hold, existential risk mitigation is very important. But this paper suggests an important qualification, since many longtermists are also pessimists about existential risk. As we have seen, existential risk pessimism may well be incompatible with the astronomical value thesis, in which case the value of existential risk mitigation may be too low to provide good support for longtermism.

End notes

The core modelling claim of the paper is that (1) at best, existential risk pessimism is irrelevant to the astronomical value thesis, and that (2) in most cases existential risk pessimism tells strongly against the astronomical value thesis. While full technical details are contained in the main paper, here are some models to illustrate claims (1) and (2).

On (1): To illustrate the best case, suppose that humanity faces a constant level of risk r per century. Suppose also that each century of existence has constant value v, if only we live to reach it. And suppose that all existential catastrophes lead to human extinction, so that no value will be realized after catastrophe. Then, it can be shown that the value of reducing existential risk in our century by some fraction f is fv. In this model, pessimism has no bearing on the astronomical value thesis, since the starting level r of existential risk does not affect the value of existential risk mitigation. Moreover, the value of existential risk reduction is capped at v, the value of a single century of human life. Nothing to sneeze at, but hardly astronomical.
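As a sanity check, the constant-risk model can be simulated directly. This is my own illustrative sketch, not code from the paper: the function name, the finite horizon cutoff, and the particular values of r, f and v are all assumptions made for the illustration.

```python
# Sketch: in the constant-risk, constant-value model, the expected value of
# the future is the sum over centuries of (probability of surviving to that
# century) * v. Reducing this century's risk by a fraction f should be worth
# f*v regardless of the starting risk r.

def expected_value(first_century_risk, r, v, horizon=10_000):
    """Expected total value when this century carries `first_century_risk`
    and every later century carries risk r; each century reached adds v."""
    total = 0.0
    survival = 1.0
    for n in range(horizon):
        risk = first_century_risk if n == 0 else r
        survival *= 1.0 - risk  # probability of surviving through century n
        total += survival * v
    return total

v, f = 1.0, 0.1  # value per century; fraction of this century's risk removed
for r in (0.01, 0.2, 0.5):
    gain = expected_value((1 - f) * r, r, v) - expected_value(r, r, v)
    print(r, round(gain, 6))  # ≈ f*v = 0.1 for every choice of r
```

Whatever value we plug in for r, the gain from mitigation comes out the same, which is the sense in which pessimism is simply irrelevant in the best case.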

On (2): Making the model more realistic only serves to heighten the tension between pessimism and the astronomical value thesis. For example, suppose that centuries grow linearly in value over time, so that if this century has value v, the next century has value 2v, then 3v and so on. Keep the other modelling assumptions the same. Now, it can be shown that the value of reducing existential risk in our century by some fraction f is fv/r.

In this model, pessimism tells against the astronomical value thesis: if you think that existential risk is now 100 times greater than I think it is, you should be 100 times less enthusiastic about existential risk mitigation. Moreover, the value of existential risk reduction is capped at v/r. For the optimist, this quantity may be quite large, but not so for the pessimist. For example, if we estimate per-century risk r at 20%, then the value of existential risk mitigation is capped at five times the value of a single century – again, nothing to sneeze at, but not yet astronomical.
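The same style of check works for the linear-growth model. Again this is my own sketch, with illustrative parameter values; the point it makes visible is that the gain from mitigation now scales as fv/r, so higher estimates of r shrink the value of mitigation.

```python
# Sketch: as before, but century n is worth n*v if reached (linear growth).
# Reducing this century's risk by a fraction f should now be worth f*v/r,
# so the pessimist's large r directly deflates the value of mitigation.

def expected_value_linear(first_century_risk, r, v, horizon=10_000):
    total, survival = 0.0, 1.0
    for n in range(1, horizon + 1):
        risk = first_century_risk if n == 1 else r
        survival *= 1.0 - risk
        total += survival * n * v  # century n contributes n*v if reached
    return total

v, f = 1.0, 0.1
for r in (0.01, 0.2, 0.5):
    gain = expected_value_linear((1 - f) * r, r, v) \
         - expected_value_linear(r, r, v)
    print(r, round(gain, 4))  # ≈ f*v/r: larger r means smaller gain
```

Setting f = 1 (eliminating this century's risk entirely) recovers the cap v/r from the text: with r at 20%, complete mitigation is worth only five centuries of value.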

 

References

Bostrom, Nick, “Existential risk prevention as global priority,” Global Policy 4.1 (2013): 15–31.

Ord, Toby, The precipice (New York: Bloomsbury, 2020).

Parfit, Derek, Reasons and persons (Oxford: Oxford University Press, 1984).

Rees, Martin, Our final hour (New York: Basic Books, 2003).

Sandberg, Anders and Bostrom, Nick, “Global catastrophic risks survey,” Technical Report 2008-1 (2008), Future of Humanity Institute.

 


6 Responses to Guest Post: High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation

  • Paul D. Van Pelt says:

    Very well explained. I am not proficient in math, but I think I mostly grasp the odds which, according to your synopsis here, aren’t all that bad. Personally, I prefer them to those attending Russian roulette—call me an optimist on this count. Neither am I well-versed on longtermism. From what little I have gleaned on that stance, there are adherents as well as detractors. There is a nagging uncertainty with contingency. Similar in a practical sense to Heisenberg’s quantum pronouncement. With contingency, though, it seems less likely we can accurately account for what it will/may consist of or when/where it manifests. So, if that is in any way correct, it matters little whether one is or is not in the longtermist camp; whether, in fact, that camp is right or wrong. Such are the devilish details of contingency. It entails its own agenda and no one can do more than make a best guess as to what that is.
    A little like St. Elmo’s fire, uh, maybe? One exception (are least): St. Elmo’s is more predictable. I need to read your paper.

  • Paul D. Van Pelt says:

    Errata:: last line of previous comments- (AT least)…
    On the “astronomical value of essential risk mitigation”: this is an interesting example of philosophic word salad, IMHO. Neither astronomy nor the cosmos have much to do with risks to our existence. WE, on the other hand, have almost everything to do with that.

    • David Thorstad says:

      Thanks Paul!

      Philosophical definitions can be a bit complex. Let’s unpack this one in steps.

      (1) Existential catastrophe: An existential catastrophe is, following Bostrom, an event involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development”.
      (2) Existential risk: An existential risk is a risk of existential catastrophe.
      (3) Existential risk mitigation: Some of our actions are aimed at reducing levels of existential risk. These are acts of existential risk mitigation.
      (4) Value of existential risk mitigation: To speak of the value of existential risk mitigation is to ask how good or bad it would be to take acts of existential risk mitigation. For charity to my opponents, I concern myself with the value of the best feasible acts of existential risk mitigation.
      (5) Astronomical value thesis: The astronomical value thesis says that “the best available options for reducing existential risk today have astronomical value.” This makes a claim about the value of existential risk mitigation, namely that it is very high.

      Note that “astronomical” is not here being used to say that risks have to do with astronomy, cosmology, or outer space. It’s rather being used to say something about the value of existential risk mitigation, namely that the value is very large.


  • Paul D. Van Pelt says:

    Mr. Thorstad:
    Thanks for the primer on definitions. I was only half facetious regarding astronomy. I support the continuation of the human race, wholeheartedly. It is just that the more I read of things like AI; ‘conscious’/sentient AI; transhumanism; Panpsychism and the like, the less I am led to believe that anyone takes survival of human beings seriously. We are looking for that magic transition which ushers in a braver, newer world. If, and only if, we are doing things towards that transition, great! Science and technology evolve with intuition and speculation and I value clear, rational thinking at least half as much as the next person in the audience—so long as she is not asleep. Keep doing what you are doing. My best wishes are with you.
    PDV.

  • Paul D. Van Pelt says:

    Insofar as I am new to this blog, out of curiosity I went back several years, and found the same sorts of repetitive nonsense experienced, in the same temporal proximity as where I had read and commented then. I need not give examples—you know your history better than I. There are legitimate, I think, concerns over the survival of philosophy, both as academic discipline and time-honored tradition. From the early part of the twenty-teens, there were indications of a lack of consciousness among people who were supposed to be thinkers. But some of the questions being asked now are non-starters. Philosophy and its professors are not supposed to be raising the children of the people who are sending those children to university. It is not, in my opinion, the job of those professors to be paragons of ethics and morality for their students. If we really expected this, we would figure out how to pay the teachers better. So, maybe that is also part of the problem? I never considered doing philosophy for a living. Too much time and money invested, too little return, even when others agree you know what you are talking about…and there are those crabs in the bottom of the bucket. Maybe that is what is wrong with philosophy.
