What Fuels the Fighting: Disagreement over Facts or Values?

In a particularly eye-catching pull quote in the November issue of The Atlantic, journalist and scholar Robert Wright claims, “The world’s gravest conflicts are not over ethical principles or disputed values but over disputed facts.”[1]

The essay, called “Why We Fight – And Can We Stop?” in the print version and “Why Can’t We All Just Get Along? The Uncertain Biological Basis of Morality” in the online version, reviews new research by psychologists Joshua Greene and Paul Bloom on the biological foundations of our moral impulses. Focusing mainly on Greene’s newest book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Wright details Greene’s proposed solution to the rampant group conflict we see both domestically and internationally. Suggesting that we are evolutionarily wired to cooperate or ‘get along’ with members of groups to which we belong, Greene identifies the key cause of fighting as different groups’ “incompatible visions of what a moral society should be.”[2] And his answer is to strive for a ‘metamorality’ – a universally shared moral perspective (he suggests utilitarianism) that would create a global in-group thus facilitating cooperation.

Invoking and banishing the dread demon “Lead”

Some researchers have fingered a surprising culprit for the crime wave that ended in the 1990s: lead, mainly from leaded fuel. We know that lead exposure causes developmental difficulties in children, and in country after country, lead emissions closely mirror the crime rate 23 years later – after those children have grown up into mature, irresponsible adults.

A nice story – the only problem is, people aren’t very interested in it. We prefer to tell stories about actual human villains: morality tales with clear blame and praise and entertaining situations (contrast the amounts spent fighting terrorism versus road accidents). Lead causing crime just isn’t sexy.

So to combat this universal human tendency, which causes us to misdirect our efforts and our focus, I propose we treat Lead as a human-like villain. In its oily lair, the demon Lead rubs its metallic hands together in glee, imagining the millions of children whose development it is stunting, the thousands of young men it has tipped into criminality, and the wailing of their victims. It plots further expansions of its empire of crime, and gnashes grey teeth in frustration as heroic regulators squeeze its power base out of the air, the fuel, and the water.

You should already feel your emotional priorities shifting. This alternative vision should enable us to give Lead the attention it deserves, in comparison with other, lesser threats that come with more appealing stories. It puts our story-biases in the service of good – we can feel the appropriate amount of joy when we triumph over Lead; emotions, not just reason, are needed to keep up our motivation in dealing with these threats.

And then the demon can be joined in its dark imaginary lair by the vicious Vampire Malaria, the Zombie-Lord of the Road Traffic Accident, and the bloody Psychopathic Death Cult of Cardio-Vascular Diseases. To arms, good citizens of the world, against these sinister anthropomorphised and correctly prioritised threats!

Terminator studies and the silliness heuristic

The headlines are invariably illustrated with red-eyed robot heads: “I, Revolution: Scientists To Study Possibility Of Robot Apocalypse“. “Scientists investigate chances of robots taking over“. “‘Terminator’ risk to be investigated by scientists“. “Killer robots? Cambridge brains to assess AI risk“. “‘Terminator centre’ to open at Cambridge University: New faculty to study threat to humans from artificial intelligence“. “Cambridge boffins fear ‘Pandora’s Unboxing’ and RISE of the MACHINES: ‘You’re more likely to die from robots than cancer‘”…

The real story is that the Cambridge Project for Existential Risk is planning to explore threats to the survival of the human species, including future technological risks. Among them are, of course, risks from autonomous machinery – risks that some people regard as significant, others as minuscule (see for example here, here and here). Should we spend valuable researcher time and grant money analysing such risks?
