Terminator studies and the silliness heuristic
The headlines are invariably illustrated with red-eyed robot heads: “I, Revolution: Scientists To Study Possibility Of Robot Apocalypse”. “Scientists investigate chances of robots taking over”. “‘Terminator’ risk to be investigated by scientists”. “Killer robots? Cambridge brains to assess AI risk”. “‘Terminator centre’ to open at Cambridge University: New faculty to study threat to humans from artificial intelligence”. “Cambridge boffins fear ‘Pandora’s Unboxing’ and RISE of the MACHINES: ‘You’re more likely to die from robots than cancer’”…
The real story is that The Cambridge Project for Existential Risk is planning to explore threats to the survival of the human species, including future technological risks. Among them are, of course, risks from autonomous machinery – risks that some people regard as significant, others as minuscule (see for example here, here and here). Should we spend valuable researcher time and grant money analysing such risks?