
What Fuels the Fighting: Disagreement over Facts or Values?

In a particularly eye-catching pull quote in the November issue of The Atlantic, journalist and scholar Robert Wright claims, “The world’s gravest conflicts are not over ethical principles or disputed values but over disputed facts.”[1]

The essay, called “Why We Fight – And Can We Stop?” in the print version and “Why Can’t We All Just Get Along? The Uncertain Biological Basis of Morality” in the online version, reviews new research by psychologists Joshua Greene and Paul Bloom on the biological foundations of our moral impulses. Focusing mainly on Greene’s newest book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Wright details Greene’s proposed solution to the rampant group conflict we see both domestically and internationally. Suggesting that we are evolutionarily wired to cooperate or ‘get along’ with members of groups to which we belong, Greene identifies the key cause of fighting as different groups’ “incompatible visions of what a moral society should be.”[2] His answer is to strive for a ‘metamorality’ – a universally shared moral perspective (he suggests utilitarianism) that would create a global in-group, thus facilitating cooperation.


Invoking and banishing the dread demon “Lead”

Some researchers have fingered a surprising culprit for the crime wave that ended in the 1990s: lead, mainly from leaded fuel. We know that lead exposure causes developmental difficulties in children, and in country after country, lead emissions closely mirror the crime rate 23 years later – after those children have grown up into mature, irresponsible…

Terminator studies and the silliness heuristic

The headlines are invariably illustrated with red-eyed robot heads: “I, Revolution: Scientists To Study Possibility Of Robot Apocalypse“. “Scientists investigate chances of robots taking over“. “‘Terminator’ risk to be investigated by scientists“. “Killer robots? Cambridge brains to assess AI risk“. “‘Terminator centre’ to open at Cambridge University: New faculty to study threat to humans from artificial intelligence“. “Cambridge boffins fear ‘Pandora’s Unboxing’ and RISE of the MACHINES: ‘You’re more likely to die from robots than cancer‘”…

The real story is that the Cambridge Project for Existential Risk is planning to explore threats to the survival of the human species, including future technological risks. Among them are, of course, risks from autonomous machinery – risks that some people regard as significant, others as minuscule (see for example here, here and here). Should we spend valuable researcher time and grant money analysing such risks?
