Thirty-one years ago today, the human species nearly came to an end. Lieutenant Colonel Stanislav Petrov was the officer on duty in bunker Serpukhov-15 near Moscow, monitoring the Soviet Union's early-warning satellite network. If it reported approaching missiles, the official strategy was launch on warning: an immediate counter-attack against the United States. International relations were on a hair trigger: just days before, Korean Air Lines Flight 007 had been shot down by Soviet fighter jets, killing everybody on board (including a US congressman). The Kremlin was claiming the jet had been on a spy mission, or even deliberately trying to provoke war.
Shortly after midnight, the computers reported a single intercontinental missile heading towards Russia.
As I write this post, asteroid 2012 DA14 is sweeping past Earth, inside geosynchronous orbit (in fact, I am watching it on a live webcast). Earlier today, an unrelated impactor disintegrated above Chelyabinsk, producing some dramatic footage and some injuries from glass shattered by the sonic boom. It may have been the largest impactor of the last century, clocking in at hundreds of kilotons. It is no wonder people are petitioning the White House to mount a vigorous planetary defense against asteroids and comets. But what is the rational and ethical level of defense we need against astronomical threats?
The headlines are invariably illustrated with red-eyed robot heads: “I, Revolution: Scientists To Study Possibility Of Robot Apocalypse”. “Scientists investigate chances of robots taking over”. “‘Terminator’ risk to be investigated by scientists”. “Killer robots? Cambridge brains to assess AI risk”. “‘Terminator centre’ to open at Cambridge University: New faculty to study threat to humans from artificial intelligence”. “Cambridge boffins fear ‘Pandora’s Unboxing’ and RISE of the MACHINES: ‘You’re more likely to die from robots than cancer’”…
The real story is that the Cambridge Project for Existential Risk is planning to explore threats to the survival of the human species, including future technological risks. Among them are, of course, risks from autonomous machinery – risks that some people regard as significant, others as minuscule (see for example here, here and here). Should we spend valuable researcher time and grant money analysing such risks?
Scientists have made a new strain of bird flu that could most likely spread between humans, triggering a pandemic were it released. A misguided project, or a good idea? How should we handle dual-use research, where merely knowing something can be risky, yet that same information can be relevant for reducing other risks?