A recent study has shown that a person’s implicit racial bias can be reduced if she spends some time experiencing her body as dark-skinned. Psychologists in Spain used an immersive virtual reality technique to allow participants to ‘see’ themselves with a different skin colour. They measured the participants’ implicit racial bias before and after the intervention, finding that the embodiment of light-skinned individuals in a dark-skinned virtual body at least temporarily reduced their implicit bias against people who are coded as ‘out-group’ on the basis of skin colour.
Implicit racial bias is an evolved, unconscious tendency to feel more positively towards members of one’s own race (one’s ‘in-group’) than towards members of a different race (members of an ‘out-group’). The bias can be (and was in this study) measured using a version of the implicit association test, which requires participants to quickly categorise faces (black or white) and words (positive or negative) into groups. Implicit bias is calculated from the differences in speed and accuracy between categorising (white faces, positive words) and (black faces, negative words) compared to (black faces, positive words) and (white faces, negative words). Crucially, implicit racial bias has been shown to be uncorrelated with explicit racial bias – self-reports of negative racial stereotypes. This means that even those who are not consciously averse to people from other racial groups often demonstrate a deep-seated bias against them – an evolutionary hangover. Hearteningly, the authors of the study started from the idea that encoding people by race may be a reversible by-product of human evolution used to detect coalitional alliances. What their study confirmed is that immersive virtual reality provides a powerful tool for placing people into a different race ‘coalition’ by changing their body representation and consequently reducing their implicit aversion to the racial characteristics thus represented.
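To make the scoring idea concrete, here is a minimal sketch of how a latency-based bias score of this kind can be computed. It follows the general shape of an IAT ‘D-score’ (mean latency difference between the incongruent and congruent pairings, divided by the pooled standard deviation), not the study’s actual scoring procedure; the function name and the reaction times are hypothetical, and real scoring also handles error trials and outlier latencies.

```python
from statistics import mean, pstdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT-style D-score.

    congruent_rts:   reaction times (ms) when in-group faces share a
                     response key with positive words.
    incongruent_rts: reaction times (ms) when out-group faces share a
                     response key with positive words.

    A positive score means faster responses in the congruent pairing,
    i.e. an implicit preference for the in-group.
    """
    diff = mean(incongruent_rts) - mean(congruent_rts)
    pooled_sd = pstdev(congruent_rts + incongruent_rts)
    return diff / pooled_sd

# Hypothetical reaction times in milliseconds
congruent = [620, 650, 600, 640, 610]
incongruent = [720, 760, 700, 740, 710]
print(round(iat_d_score(congruent, incongruent), 2))  # → 1.86
```

Dividing by the pooled standard deviation (rather than reporting the raw millisecond difference) is what makes scores comparable across participants who respond at different overall speeds.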
In this podcast of her recent lecture, Professor Jeanette Kennett explores the connections between the folk psychological project of interpretation, the reactive attitudes, and responsibility. The first section argues that the reactive attitudes originate in very fast and, to a significant extent, non-voluntary processes involving constant facial feedback. These processes allow for smooth interaction between participants and are important to the interpretive practices that ground intimate relationships, as well as to a great many less intense interactions. She then examines cases of facial paralysis (Moebius syndrome and Botox studies) to support the argument that when these processes are interrupted or impaired, the interpretive project breaks down and social relationships suffer.
But do failures of interpretation lead, as Strawson suggests, to the suspension of the reactive attitudes relevant to responsibility assessments? Prof Kennett suggests that in many important instances they do not, considering the cases of children who murder, alien cultures, and psychopaths. In the second part she examines the supposed constitutive relation between the reactive attitudes and responsibility.
Jeanette Kennett is Professor of Moral Psychology and Deputy Director of the Centre for Agency, Values and Ethics at Macquarie University. She has published widely on moral cognition, moral and criminal responsibility, and impairments of agency. She is currently lead investigator on an Australian Research Council-funded project on Addiction and Moral Identity, and is also a chief investigator on an ARC project examining implicit persuasion in direct-to-consumer pharmaceutical advertising.
This seminar was co-hosted by The Oxford Centre for Neuroethics and the International Neuroethics Society
Scientific discoveries about how our behaviour is causally influenced often prompt the question of whether we have free will (for a general discussion, see here). This month, for example, the psychologist and criminologist Adrian Raine has been promoting his new book, The Anatomy of Violence, in which he argues that there are neuroscientific explanations of the behaviour of violent criminals. He argues that these explanations might be taken into account during sentencing, since they show that such criminals cannot control their violent behaviour to the same extent that (relatively) non-violent people can, and therefore that these criminals have reduced moral responsibility for their crimes. Our criminal justice system, along with our conceptions of praise and blame, and moral responsibility more generally, all presuppose that we have free will. If science can reveal it to be an illusion, some of the most fundamental features of our society are undermined.
The questions of exactly what free will is, and whether and how it can accommodate scientific discoveries about the causes of our behaviour, are primarily theoretical philosophical questions. Questions of theoretical philosophy—for example, those relating to metaphysics, epistemology, and philosophy of mind and language—are rarely viewed as highly relevant to people’s day-to-day lives (unlike questions of practical philosophy, such as those relating to ethics and morality). However, it turns out that the beliefs that people hold about free will are relevant. In the last five years, empirical evidence has linked reduced belief in free will with an increased willingness to cheat,1 increased aggression and reduced helpfulness,2 and reduced job performance.3 Even the way that the brain prepares for action differs depending on whether or not one believes in free will.4 If the results of these studies apply at a societal level, we should be very concerned about promoting the view that we do not have free will. But what can we do about it?
Frej Klem Thomsen, ‘Rescuing Responsibility from the Retributivists – Neuroscience, Free Will and Criminal Punishment’ (Podcast)
Do advances in neuroscience threaten the idea of free will, and if so, what practical implications does this have, for instance when it comes to criminal responsibility and punishment? In a stimulating talk at the Uehiro seminar (the podcast of which is available here), Frej Klem Thomsen, assistant professor of philosophy at Roskilde University, discussed the answers that the prominent American neuroscientists Joshua Greene and Jonathan Cohen have proposed to those questions. Briefly put, Greene and Cohen predict that cognitive neuroscience will make it increasingly apparent to everyone that (as some philosophers argued centuries ago) there is no such thing as free will as commonly understood. This, they add, will shift the approach to punishment in criminal law from the current “retributivist” one to a consequentialist one – a change they also judge desirable, on the grounds that the current approach relies on intuitions they take to be scientifically untenable.
Wednesday 27th November, 5 – 7pm
Oxford Martin School
Old Indian Institute
34 Broad St (corner of Holywell and Catte Streets)
Oxford OX1 3BD
The Oxford Centre for Neuroethics & International Neuroethics Society are pleased to present two Wellcome Lectures in Neuroethics for 2013:
Brain mechanisms of voluntary action: the implications for responsibility
Prof. Patrick Haggard
University College London
The irresponsible self: Self bias changes the way we see the world
Prof. Glyn Humphreys
Department of Experimental Psychology, Oxford University
By Julian Savulescu and Anders Sandberg
Vicky Pryce, wife of disgraced ex-MP Chris Huhne, is back in court this week after the jury trying her case was discharged last week having failed to reach a verdict on her charges of perverting the course of justice. In 2003, Pryce accepted Huhne’s speeding points, but is claiming a defence of marital coercion. In 10 questions to the judge, the first jury showed an alarming and deep lack of understanding. Questions included:
“Can a juror come to a verdict based on a reason that was not presented in court and has no facts or evidence to support it?”
They also showed the jury had apparently forgotten key concepts which were explained during the trial:
“Does this defence require violence or physical threats?”
“Can you define what is reasonable doubt?”
Following the jury’s discharge, the judge said the jury showed “absolutely fundamental deficits in understanding”, adding that he had never seen this in 30 years of presiding over criminal trials. In Pryce’s trial, the questions the jury asked after several days of deliberations raised alarm bells, but in another trial where a verdict was reached, we would never know what the standard of jury understanding or deliberation had been. Yet juries are asked to decide (in some countries) on matters of life or death.
The Pryce case may have been unusual, but in any trial, and particularly in complex fraud cases, juries are asked to juggle and compute vast amounts of information, and to retain it throughout the trial in order to make an informed decision at the end. We have argued in “The Memory of Jurors: Enhancing Trial Performance” and “Cognitive Enhancement in Courts” (with Walter Sinnott-Armstrong) that cognitive enhancement, particularly memory enhancers, should be made available to jurors. If this had been available in the Pryce case, would the jury have spent more time discussing the decision at hand, and less on (mis-)remembering the judge’s instructions on reasonable doubt or the definition of coercion? If we ask people to take on a civic duty, we should offer them all the tools we have available to assist them in its completion.
According to a recent report in the New York Times, the United States government will soon announce plans to fund the Brain Activity Map. Modelled on the highly successful Human Genome Project, the Brain Activity Map is an effort to identify functional networks of neurons, possibly leading to a full understanding of how mental processes like perception and memory are physically distributed in the brain. The scientific and medical potentials, perhaps including new treatments of conditions like schizophrenia or autism, are fantastic. By developing monitoring techniques like calcium imaging, nanoparticle sensor detection, or synthetic-DNA chemical recording, neuroscientists hope to be able to trace the paths traveled by our thoughts and memories. Yet before setting off on this cartographic adventure, perhaps we ought first to stop and remind ourselves where we already are.
In a 2012 Neuron paper proposing the Brain Activity Map, a group of leading scientists briefly acknowledge some ethical worries, including “issues of mind-control, discrimination, health disparities, unintended short- and long-term toxicities…” This is a reasonable, if somewhat eclectic, list of concerns. But I would like to add one more. Brain-mapping, like gene-mapping, risks making us overconfident in our self-understanding. The better we come to understand our brains, the more tempting it will be to assume we understand our selves.
Think for a moment about the history of major advances in human-directed science: Darwinism, psychoanalysis, behaviourism, sociobiology, cybernetics, genomics. With each progression has come a deluge of sweeping assertions about the new completeness of our self-understanding, followed later by a far quieter admission that whatever else we may be, we are also mysteries. In the worst moments, our fleeting certitude fuelled attempts to reorganize societies along purportedly scientific lines, from racist eugenics to disastrous Marxist utopianism. Even when spared catastrophic miscalculation, we’ve still suffered coarsening reductions in public debate about human nature, where hopes and commitments were temporarily replaced by operant reinforcements or behavioural phenotypes.
The point here is not to deny the reality of scientific descriptions of humanity, nor to retreat into a neo-Romantic induced ignorance. The point is simply to sound a warning, to jot a note to ourselves in this relatively sober moment, before the allure of the scientifically novel begins to blindingly illuminate our horizons. Maps are awesomely seductive bearers of information, so simply compact and so seemingly complete. Mapped brains will be more potent still, enfolding the vanity of portraiture in the certainty of topography.
I’m aware that what I am articulating is not so much an argument as an anxiety. I have no simple take-home message to offer, no action plan or policy recommendation. Certainly we should not attempt to stop the sort of research offered by the Brain Activity Map. Rather, we should support it, fund it, train our children to carry it forward. The potential benefits, to theoretical knowledge and human well-being, are incredible. But there are costs, or at least risks. It would be best to reach first for a bit of preventive humility, a dash of recognition that there are limits on the self-understanding of even such an expert auto-empiricizer as Homo sapiens. In Franz Joseph Gall’s original phrenological map, the brain area for Circumspection and Forethought was located right next to the brain area for Vanity.
Antonio De Salles, Professor of Neurosurgery – UCLA
Lincoln Frias, postdoctoral researcher, UFMG, Brazil, International Neuroethics Society
Jorge Moll, D’Or Institute-Brazil, International Neuroethics Society
Psychosurgery has a bad name. The destruction or disconnection of brain tissue to treat mental illness was brought into disrepute by controversial figures of the past, who performed lobotomies with poorly defined clinical indications and without regard for even the most basic surgical practices of asepsis and hemostasis. The procedures were irreversible, unsafe, and often done without adequate informed consent. In many cases the surgeries drastically reduced the patients’ well-being and autonomy. To avoid this, governments put in place stringent regulations on these procedures. Coupled with developments in psychopharmacology, this left psychosurgery only as a last resort for extreme cases. The moral problem is that the stereotypes and stigma evoked by this kind of treatment are largely unwarranted given current technology.