
existential risk

Guest Post: High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation

Written by David Thorstad, Global Priorities Institute; Junior Research Fellow, Kellogg College

This post is based on my paper “High risk, low reward: A challenge to the astronomical value of existential risk mitigation,” forthcoming in Philosophy and Public Affairs. The full paper is available here and I have also written a blog series about this paper here.

Derek Parfit (1984) asks us to compare two scenarios. In the first, a war kills 99% of all living humans. This would be a great catastrophe – far beyond anything humanity has ever experienced. But human civilization could, and likely would, be rebuilt.

In the second scenario, a war kills 100% of all living humans. This, Parfit urges, would be a far greater catastrophe, for in this scenario human civilization itself would cease to exist. The world would perhaps never again know science, art, mathematics or philosophy. Our projects would be forever incomplete, and our cities ground to dust. Humanity would never settle the stars. The untold multitudes of descendants we could have left behind would instead never be born.

The goodness of being multi-planetary

The Economist has a leader “For life, not for an afterlife“, in which it argues that Elon Musk’s stated motivation to settle Mars – making humanity a multi-planetary species less likely to go extinct – is misguided: “Seeking to make Earth expendable is not a good reason to settle other planets”. Is it misguided, or is the Economist‘s reasoning misguided?

Moral Agreement on Saving the World

There appears to be a lot of disagreement in moral philosophy.  Whether or not these many apparent disagreements are deep and irresolvable, I believe there is at least one thing it is reasonable to agree on right now, whatever general moral view we adopt:  that it is very important to reduce the risk that all intelligent beings on this planet are eliminated by an enormous catastrophe, such as a nuclear war.  How we might in fact try to reduce such existential risks is discussed elsewhere.  My claim here is only that we – whether we’re consequentialists, deontologists, or virtue ethicists – should all agree that we should try to save the world.


Petrov Day

Thirty-one years ago today, the human species nearly came to an end. Lieutenant Colonel Stanislav Petrov was the officer on duty in bunker Serpukhov-15 near Moscow, monitoring the Soviet Union’s early warning satellite network. If the network reported approaching missiles, the official strategy was launch on warning: an immediate counter-attack against the United States. International relations were on a hair trigger: just days before, Korean Air Lines Flight 007 had been shot down by Soviet fighter jets, killing everyone on board (including a US congressman). The Kremlin was claiming the jet had been on a spy mission, or even deliberately trying to provoke war.

Shortly after midnight the computers reported a single intercontinental missile heading towards Russia.


Live from the shooting gallery: what price impact safety?

As I am writing this post, asteroid 2012 DA14 is sweeping past Earth, inside geosynchronous orbit (in fact, I am watching it on a live webcast). Earlier today, an unrelated impactor disintegrated above Chelyabinsk, producing some dramatic footage and some injuries from glass shattered by the sonic boom. It may have been the largest impactor of the last century, clocking in at hundreds of kilotons. It is no wonder people are petitioning the White House to mount a vigorous planetary defense against asteroids and comets. But what is the rational and ethical level of defense we need against astronomical threats?
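The question of a “rational level of defense” is at bottom an expected-value question: multiply the annual probability of each class of impact by its expected harm, and compare the total with what mitigation would cost. A minimal sketch of that calculation, using entirely made-up placeholder probabilities, death tolls, and dollar figures (not measured values):

```python
# Illustrative expected-value sketch for asteroid defense spending.
# Every number below is a rough placeholder assumption for exposition.

# threat class -> (annual probability, expected deaths if it occurs)
threat_classes = {
    "Chelyabinsk-scale airburst": (1 / 100, 1_000),
    "Tunguska-scale impactor":    (1 / 1_000, 100_000),
    "kilometre-scale impactor":   (1 / 500_000, 1_000_000_000),
}

# Expected deaths per year, summed over all threat classes.
expected_annual_deaths = sum(p * d for p, d in threat_classes.values())

# A crude spending bound: value of a statistical life times the deaths
# a mitigation program could be expected to avert each year.
value_of_statistical_life = 7_000_000  # USD, placeholder
mitigation_effectiveness = 0.5         # fraction of risk removed, placeholder

justified_annual_budget = (expected_annual_deaths
                           * mitigation_effectiveness
                           * value_of_statistical_life)

print(f"Expected annual deaths: {expected_annual_deaths:,.0f}")
print(f"Budget bound (USD/yr):  {justified_annual_budget:,.0f}")
```

Even with these toy numbers, the structure of the argument is visible: rare, huge impacts can dominate the expected harm, so the “rational” budget is driven less by events like Chelyabinsk than by the tail of the distribution.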


Terminator studies and the silliness heuristic

The headlines are invariably illustrated with red-eyed robot heads: “I, Revolution: Scientists To Study Possibility Of Robot Apocalypse“. “Scientists investigate chances of robots taking over“. “‘Terminator’ risk to be investigated by scientists“. “Killer robots? Cambridge brains to assess AI risk“. “‘Terminator centre’ to open at Cambridge University: New faculty to study threat to humans from artificial intelligence“. “Cambridge boffins fear ‘Pandora’s Unboxing’ and RISE of the MACHINES: ‘You’re more likely to die from robots than cancer‘”…

The real story is that The Cambridge Project for Existential Risk is planning to explore threats to the survival of the human species, including future technological risks. Among them are, of course, risks from autonomous machinery – risks that some people regard as significant, others as minuscule (see for example here, here and here). Should we spend valuable researcher time and grant money analysing such risks?


Ferreting out fearsome flu: should we make pandemic bird flu viruses?

Scientists have made a new strain of bird flu that most likely could spread between humans, triggering a pandemic if it were released. A misguided project, or a good idea? How should we handle dual use research where merely knowing something can be risky, yet this information can be relevant for reducing other risks?
