Scientific discoveries about how our behaviour is causally influenced often prompt the question of whether we have free will (for a general discussion, see here). This month, for example, the psychologist and criminologist Adrian Raine has been promoting his new book, The Anatomy of Violence, in which he argues that there are neuroscientific explanations of the behaviour of violent criminals. He argues that these explanations might be taken into account during sentencing, since they show that such criminals cannot control their violent behaviour to the same extent that (relatively) non-violent people can, and therefore that these criminals have reduced moral responsibility for their crimes. Our criminal justice system, along with our conceptions of praise and blame, and moral responsibility more generally, all presuppose that we have free will. If science can reveal it to be an illusion, some of the most fundamental features of our society are undermined.
The questions of exactly what free will is, and whether and how it can accommodate scientific discoveries about the causes of our behaviour, are primarily theoretical philosophical questions. Questions of theoretical philosophy—for example, those relating to metaphysics, epistemology, and philosophy of mind and language—are rarely viewed as highly relevant to people’s day-to-day lives (unlike questions of practical philosophy, such as those relating to ethics and morality). However, it turns out that the beliefs that people hold about free will are relevant. In the last five years, empirical evidence has linked reduced belief in free will with an increased willingness to cheat,1 increased aggression and reduced helpfulness,2 and reduced job performance.3 Even the way that the brain prepares for action differs depending on whether or not one believes in free will.4 If the results of these studies apply at a societal level, we should be very concerned about promoting the view that we do not have free will. But what can we do about it?
Podcast of Uehiro Seminar given by Gwen Adshead
‘The Bad Seed’ was a popular 1954 novel in which a well brought up young girl begins to manifest behaviour characteristic of a criminal psychopath. As the plot develops, the girl’s mother discovers that her own mother was a serial killer, executed when the mother was herself a girl.
In this Uehiro Seminar, Gwen Adshead, Forensic Psychotherapist at Bluebird House & Broadmoor Hospital, explores this idea of the ‘bad seed’ using research into those who exhibit ‘callous and unemotional’ traits as children. In contrast to the theme of the novel, Dr Adshead points out that the causes of behaviour, even for individuals who exhibit violent behaviour consistently both as children and adults, are mediated by factors other than genetic predisposition. For example, there is a relationship between childhood physical abuse and neglect and delinquency and violence in later life. Dr Adshead argues that a more constructive approach to addressing violence in society might be to explore causes such as parenting rather than focusing disproportionate attention on the children. The lecture and discussion that follow raise fundamental issues to do with our attitudes to genetic and other predictors of violence in adult life, the question of how resources should be allocated to address such problems, and how blame fits within this research framework.
You can listen to the podcast of the seminar here.
Paul Troop and Sabrina Stewart
As Ben Goldacre reveals, the status quo in drug testing is nothing less than a scandal. Pharmaceutical companies are suppressing and blocking information, perfectly legally, and that suppression is causing adults and children to die. Reforming the system wouldn’t be too hard – a registry for all drug trials, before they begin, should be enough to get rid of most of the problems of suppressed publications. And compelling all pharmaceutical companies to put their trial results in the public domain would help tremendously.
All of us are present, past and future users of medicines. Getting rid of ineffective and dangerous drugs should be the great moral crusade of our time: in terms of death and suffering, it matters far more than terrorism, violent crime, or even road accidents, which between them harm relatively few people.
Imagine a political party that pledged to reform the drug testing system and do absolutely nothing else during their term in parliament. This would be a party it would be morally imperative to vote into office. Nothing else on offer in politics comes anywhere near.
The gene for internet addiction has been found! Well, actually it turns out that 27% of internet addicts have the genetic variant, compared to 17% of non-addicts. The Encode project has overturned the theory of ‘junk DNA’! Well, actually we knew long before that much of that DNA was doing things, and the definition of ‘function’ used is iffy. Alzheimer’s disease is a new ‘type 3 diabetes’! Except that no diabetes researchers believe it. Sensationalist reporting of science is everywhere, distorting public understanding of what science has discovered and of its relative importance. If the media ought to give a full picture of the situation, they seem to be failing.
But before we start blaming science journalists, maybe we should look sharply at the scientists. A new study shows that 47% of press releases about controlled trials contained spin, emphasizing the beneficial effect of the experimental treatment. This carried over to subsequent news stories, often copying the original spin. Maybe we could try blaming university press officers, but the study found spin in 41% of the abstracts of the papers too, typically overestimating the benefit of the intervention or downplaying risks. The only way of actually finding out the real story is to read the content of the paper, something requiring a bit of skill – and quite often paying for access.
Who to blame, and what to do about it?
Professor Paul Keim, who chairs the US National Science Advisory Board for Biosecurity, recently recommended the censoring of research that described the mutations which led to the transformation of the H5N1 bird-flu virus into a form that can be transmitted between humans through droplets in breath (in ferrets, the number of mutations required is frighteningly small – five). His reason is simple: the research would be a recipe book for bioterrorists.
Keim thinks, however, that such censorship will only delay the inevitable. The information will come out sooner or later, but at least governments might by then have developed and prepared sufficient stocks of vaccine and set in place other emergency measures to deal with a global pandemic.
This is not quite closing the stable door after the horse has bolted. It’s more like closing the farm gate, in the knowledge that eventually the horse will jump the gate and escape.
But this raises the question of why the stable door wasn’t bolted in the first place. In an article in Nature, the leader of one of the teams has said that the research was necessary to show that those experts who doubt the human transmissibility of H5N1 are wrong. But given that there is controversy here, governments should of course be doing what they have been doing: treating the possibility as a serious risk. In response to the charge that the research is dangerous, this same research leader’s response is that there is already a threat of mutation in nature. But threats don’t cancel one another, and nature is not revealing its secrets to bioterrorists. The researchers claim that their research was necessary for the development of a vaccine. Keim’s view is that this is quite implausible, since the drugs the scientists were using against their virus were the same ones used against others. If he’s right, a natural conclusion to draw is that the scientists should never have done the research in the first place. And, having done it, they should have kept quiet about its details and destroyed the virus. They might indeed have informed the media of their overall result, or informed a carefully restricted set of other researchers of the details of their research. But then of course they wouldn’t have been able to publish those details in top scientific journals.
It was probably hard for the US National Science Advisory Board for Biosecurity (NSABB) to avoid getting plenty of coal in its Christmas stockings this year, sent from various parties who felt NSABB were either stifling academic freedom or not doing enough to protect humanity. So much for good intentions.
The background is the potentially risky experiments demonstrating the pandemic potential of bird flu: NSABB urged that the resulting papers not include “the methodological and other details that could enable replication of the experiments by those who would seek to do harm”. But the board can merely advise, and it is fairly rarely called upon to review potentially risky papers. Do we need something with more teeth, or will free and open research protect us better?
Scientists have made a new strain of bird flu that most likely could spread between humans, triggering a pandemic if it were released. A misguided project, or a good idea? How should we handle dual use research where merely knowing something can be risky, yet this information can be relevant for reducing other risks?
by Charles Foster
There’s a huge number of journals publishing papers about ethics. Would the world be poorer, less ethically well adjusted, or less wise, if half of them went out of business? I doubt it. Quite the opposite, in fact. Less, famously, is more. Let’s face it: there’s little or nothing that’s new in most of the papers we write. We write them because we feel that we should; because our ‘career’ or our self-esteem demands it, or, more likely, because the department needs to put in a long list of publications in order to justify its existence. The fact of a publication is more important than its quality.
In order to justify the recycling of old thoughts, and to convince ourselves and our readers that we’re really smart, we write our papers in impenetrable jargon. Whole papers are devoted to saying in new technical language what was simply and accessibly said in words of one syllable in the 1930s. Academic enterprise has become a process of obfuscation.
By Charles Foster
I have just finished writing a book about dignity in bioethics. Much of it was a defence against the allegation that dignity is hopelessly amorphous; feel-good philosophical window-dressing; the name we give to whatever principle gives us the answer to a bioethical conundrum that we think is right.
This allegation usually comes from the thoroughgoing autonomists – people who think that autonomy is the only principle we need. There aren’t many of them in academic ethics, but there are lots of them in the ranks of the professional guideline drafters, (look, for instance, at the GMC’s guidelines on consenting patients) and so they have an unhealthy influence on the zeitgeist.
The allegation is ironic. The idea of autonomy is hardly less amorphous. To give it any sort of backbone you have to adopt an icy, unattractive, Millian, absolutist version of autonomy. I suspect that the widespread adoption of this account is a consequence not of a reasoned conviction that this version is correct, but of a need, rooted in cognitive dissonance, to maintain faith with the fundamentalist notions that there is a single principle in bioethics, and that that principle must keep us safe from the well-documented evils of paternalism. Autonomy-worship is primarily a reaction against paternalism. Reaction is not a good way to philosophise.
By Charles Foster
Most scientific journals require contributors to declare any conflict of interest.
But what about ethicists? We are much more ambitious and presumptuous in our aims than most scientists. We purport to tell our readers not which drug will reduce their blood cholesterol, or which type of plate is best for their radial fracture, but how best to live: how to make right decisions about things that matter far more than cholesterol; how to be the right sort of people. If we write good papers, amounting to more than newspaper opinion pieces, the papers support their conclusions with supposedly objective reasoning. We try to look scientific. And yet, try as we might, we can’t escape from our own histories and tendencies. If an ethicist has been sexually abused as a boy by a paedophilic priest, or forced to watch US evangelical TV, he’ll never be able to think that religion is anything but evil or ridiculous, and his articles will argue, with apparent but wholly fake objectivity, towards that conclusion. If the Jesuits got him before the age of 7, and etched the catechism into his subconscious rather than buggering him, the man they made out of the boy will be theirs for ever, in the Journal of Medical Ethics just as devoutly as in the confessional. And yet there’ll be not a whisper of a warning next to his papers. Those influences are likely to be far more determinative of the views expressed than any financial conflict of interest in a drug trial ever was. Everything about an ethicist’s life raises a potential conflict of interest.