It was probably hard for the US National Science Advisory Board for Biosecurity (NSABB) to avoid getting plenty of coal in its Christmas stockings this year, sent from various parties who felt NSABB were either stifling academic freedom or not doing enough to protect humanity. So much for good intentions.
The background is the potentially risky experiments demonstrating the pandemic potential of bird flu: NSABB urged that the resulting papers not include “the methodological and other details that could enable replication of the experiments by those who would seek to do harm”. But it can merely advise, and it is fairly rarely called upon to review potentially risky papers. Do we need something with more teeth, or will free and open research protect us better?
Scientists have made a new strain of bird flu that most likely could spread between humans, triggering a pandemic if it were released. A misguided project, or a good idea? How should we handle dual use research where merely knowing something can be risky, yet this information can be relevant for reducing other risks?
By Charles Foster
There’s a huge number of journals publishing papers about ethics. Would the world be poorer, less ethically well adjusted, or less wise, if half of them went out of business? I doubt it. Quite the opposite, in fact. Less, famously, is more. Let’s face it: there’s little or nothing that’s new in most of the papers we write. We write them because we feel that we should; because our ‘career’ or our self-esteem demands it, or, more likely, because the department needs to put in a long list of publications in order to justify its existence. The fact of a publication is more important than its quality.
In order to justify the recycling of old thoughts, and to convince ourselves and our readers that we’re really smart, we write our papers in impenetrable jargon. Whole papers are devoted to saying in new technical language what was simply and accessibly said in words of one syllable in the 1930s. Academic enterprise has become a process of obfuscation.
By Charles Foster
I have just finished writing a book about dignity in bioethics. Much of it was a defence against the allegation that dignity is hopelessly amorphous; feel-good philosophical window-dressing; the name we give to whatever principle gives us the answer to a bioethical conundrum that we think is right.
This allegation usually comes from the thoroughgoing autonomists – people who think that autonomy is the only principle we need. There aren’t many of them in academic ethics, but there are lots of them in the ranks of the professional guideline drafters (look, for instance, at the GMC’s guidelines on consenting patients), and so they have an unhealthy influence on the zeitgeist.
The allegation is ironic. The idea of autonomy is hardly less amorphous. To give it any sort of backbone you have to adopt an icy, unattractive, Millian, absolutist version of autonomy. I suspect that the widespread adoption of this account is a consequence not of a reasoned conviction that this version is correct, but of a need, rooted in cognitive dissonance, to maintain faith with the fundamentalist notions that there is a single principle in bioethics, and that that principle must keep us safe from the well-documented evils of paternalism. Autonomy-worship is primarily a reaction against paternalism. Reaction is not a good way to philosophise.
By Charles Foster
Most scientific journals require contributors to declare any conflict of interest.
But what about ethicists? We are much more ambitious and presumptuous in our aims than most scientists. We purport to tell our readers not which drug will reduce their blood cholesterol, or which type of plate is best for their radial fracture, but how best to live: how to make right decisions about things that matter far more than cholesterol; how to be the right sort of people. If we write good papers, amounting to more than newspaper opinion pieces, the papers support their conclusions with supposedly objective reasoning. We try to look scientific. And yet, try as we might, we can’t escape from our own histories and tendencies. If an ethicist has been sexually abused as a boy by a paedophilic priest, or forced to watch US evangelical TV, he’ll never be able to think that religion is anything but evil or ridiculous, and his articles will argue, with apparent but wholly fake objectivity, towards that conclusion. If the Jesuits got him before the age of 7, and etched the catechism into his subconscious rather than buggering him, the man they made out of the boy will be theirs for ever, in the Journal of Medical Ethics just as devoutly as in the confessional. And yet there’ll be not a whisper of a warning next to his papers. Those influences are likely to be far more determinative of the views expressed than any financial conflict of interest in a drug trial ever was. Everything about an ethicist’s life raises a potential conflict of interest.
By Alexandre Erler
Satoshi Kanazawa is currently in the news – see e.g. these articles in the Daily Mail, The Australian and Psychology Today. An evolutionary psychologist at the London School of Economics, Kanazawa has just published a new article in the journal Intelligence (Kanazawa 2011) in which he argues, in continuity with his previous research, that beautiful people tend to be more intelligent than plainer ones (especially if they are men). Only now he is arguing that this correlation may be much stronger than we previously thought. His conclusion is based on data from two studies, conducted respectively in the UK and the US, which tested the intelligence of children and young teenagers but also rated their level of physical attractiveness. In the British study, attractive respondents had a mean IQ about 13 points higher than unattractive ones, and the beauty-intelligence correlation turned out to be of a similar magnitude to that between intelligence and education.
Matthew L Baum
Round 1: Baltimore
I first heard of the Malleus Maleficarum, or The Hammer of Witches, last year when I visited Johns Hopkins Medical School in Baltimore, MD, USA. A doctor for whom I have great respect introduced me to the dark leather-bound tome, which he pulled off his bookshelf. Apparently, this aptly named book was used back in the day (it was published in the mid-1400s) by witch-hunters as a diagnostic manual of sorts to identify witches. Because all the witch-hunters used the same criteria as outlined in The Hammer to tell who was a witch, they all (more or less) identified the same people as witches. Consequently the cities, towns, and villages all enjoyed a time of highly precise witch wrangling. This was all fine and good until people realized that there was a staggering problem with the validity of these diagnoses. Textbook examples (or Hammer-book examples) these unfortunates may have been, but veritable wielders of the dark arts they were not. The markers of witchcraft these hunters agreed upon, though precise and reliable, simply were not valid.
A new study from the Mayo Clinic in the United States points to a frequent problem in certain types of medical research. When healthy volunteers or patients with a given condition take part in research studies they may have brain scans, CAT scans, blood tests or genetic tests that they wouldn’t otherwise have had. These tests are not done for the benefit of the individual; they are designed to answer a research question. But sometimes, quite often according to the authors of this new study, researchers may spot something on the scan that shouldn’t be there, and that could indicate a previously undiagnosed health condition. These ‘incidental findings’ generate an ethical dilemma for researchers. Should they tell the research participant about the shadow seen on their scan? Do they have an obligation to reveal to a research participant that they have found them to carry a gene increasing their risk for breast cancer, or Alzheimer’s disease? There is much agonising by ethics committees, ethicists and researchers about the problem of incidental findings, but there is a simple way of avoiding the problem: anonymise research databases and tests so that there is no possibility of determining which participant has the breast cancer gene, or the lump in their kidney.
Yesterday Richard Ashcroft, Professor of Bioethics at Queen Mary College, London, wrote in a Facebook update: ‘I am fed up with being asked to come into science/medicine projects, add a bit of ethics fairy dust, usually without getting any share of the pie, just to shut reviewers up. I am not doing it any more. If they think we are important, treat us with respect. Otherwise, get lost.’
Lots of people liked this. So do I. Ethicists have for too long been the invisible but essential backroom boys and girls of biomedicine; patronised by the practitioners of ‘hard’ science; seen as unimaginative but powerful bureaucrats who have to be kept sweet; as despised social scientists who wield rubber stamps made essential by other zeitgeist-dictating social scientists who want to keep their woolly-headed chums in a job; as factotums who don’t deserve to have their names on the papers any more than the temp who does the photocopying. Why is this?
After the September 11 terrorist attacks, the Bush administration redefined acts previously recognised as torture, and thus illegal, as ‘enhanced interrogation techniques’ (EITs). From then on, subjecting detainees to, for example, forced nudity, sleep deprivation, waterboarding and exposure to extreme temperatures could be legal. The line between torture and EITs is a fine one: the classification depends on the level of pain experienced.
A report issued by the advocacy group ‘Physicians for Human Rights’ has revealed that to ensure that the aggressive interrogation practices conducted by the CIA qualified as EITs they were monitored by doctors and other medical personnel who guaranteed that the legal threshold for ‘severe physical and mental pain’ was not crossed (NY Times, 6 June 2010).