Research Ethics

Spin city: why improving collective epistemology matters

The gene for internet addiction has been found! Well, actually it turns out that 27% of internet addicts have the genetic variant, compared to 17% of non-addicts. The Encode project has overturned the theory of ‘junk DNA’! Well, actually we already knew long before that that DNA was doing things, and the definition of ‘function’ used is iffy. Alzheimer’s disease is a new ‘type 3 diabetes’! Except that no diabetes researchers believe it. Sensationalist reporting of science is everywhere, distorting public understanding of what science has discovered and of its relative importance. If the media ought to give a full picture of the situation, they seem to be failing.

But before we start blaming science journalists, maybe we should take a hard look at the scientists. A new study shows that 47% of press releases about controlled trials contained spin that emphasized the beneficial effect of the experimental treatment. This spin carried over into subsequent news stories, which often copied it. Maybe we could try blaming university press officers instead, but the study found spin in 41% of the papers’ abstracts too, typically overestimating the benefit of the intervention or downplaying its risks. The only way to find out the real story is to read the paper itself, something that requires a bit of skill – and quite often paying for access.

Who to blame, and what to do about it?

H5N1: Why Open the Stable Door?

Professor Paul Keim, who chairs the US National Science Advisory Board for Biosecurity, recently recommended censoring research that described the mutations which transformed the H5N1 bird-flu virus into a form that can be transmitted through droplets in breath, as demonstrated in ferrets, the standard animal model for human transmission (the number of mutations required is frighteningly small – five). His reason is simple: the research would be a recipe book for bioterrorists.

Keim thinks, however, that such censorship will only delay the inevitable. The information will come out sooner or later, but at least governments might by then have developed and prepared sufficient stocks of vaccine and set in place other emergency measures to deal with a global pandemic.

This is not quite closing the stable door after the horse has bolted. It’s more like closing the farm gate, in the knowledge that eventually the horse will jump the gate and escape.

But this raises the question of why the stable door wasn’t bolted in the first place. In an article in Nature, the leader of one of the teams has said that the research was necessary to show that those experts who doubt the human transmissibility of H5N1 are wrong. But given that there is controversy here, governments should of course be doing what they have been doing: treating the possibility as a serious risk. In response to the charge that the research is dangerous, the same research leader replies that there is already a threat of mutation in nature. But threats don’t cancel one another out, and nature is not revealing its secrets to bioterrorists.

The researchers claim that their research was necessary for the development of a vaccine. Keim’s view is that this is quite implausible, since the drugs the scientists were using against their virus were the same ones used against others. If he’s right, a natural conclusion to draw is that the scientists should never have done the research in the first place. And, having done it, they should have kept quiet about its details and destroyed the virus. They might indeed have informed the media of their overall result, or shared the details of their research with some carefully restricted set of other researchers. But then of course they wouldn’t have been able to publish those details in top scientific journals.

Experimenting with oversight with more bite?

It was probably hard for the US National Science Advisory Board for Biosecurity (NSABB) to avoid getting plenty of coal in its Christmas stocking this year, sent by various parties who felt the board was either stifling academic freedom or not doing enough to protect humanity. So much for good intentions.

The background is the potentially risky experiments demonstrating the pandemic potential of bird flu: the NSABB urged that the resulting papers not include “the methodological and other details that could enable replication of the experiments by those who would seek to do harm”. But it can merely advise, and it is only rarely called upon to review potentially risky papers. Do we need something with more teeth, or will free and open research protect us better?

Ferretting out fearsome flu: should we make pandemic bird flu viruses?

Scientists have made a new strain of bird flu that most likely could spread between humans, triggering a pandemic if it were released. A misguided project, or a good idea? How should we handle dual-use research, where merely knowing something can be risky, yet that same information can be relevant for reducing other risks?

When it’s unethical to be a well-published academic

by Charles Foster

There’s a huge number of journals publishing papers about ethics. Would the world be poorer, less ethically well adjusted, or less wise, if half of them went out of business? I doubt it. Quite the opposite, in fact. Less, famously, is more. Let’s face it: there’s little or nothing that’s new in most of the papers we write. We write them because we feel that we should; because our ‘career’ or our self-esteem demands it, or, more likely, because the department needs to put in a long list of publications in order to justify its existence. The fact of a publication is more important than its quality.

In order to justify the recycling of old thoughts, and to convince ourselves and our readers that we’re really smart, we write our papers in impenetrable jargon. Whole papers are devoted to saying in new technical language what was simply and accessibly said in words of one syllable in the 1930s. Academic enterprise has become a process of obfuscation.

Autonomy: amorphous or just impossible?

By Charles Foster

I have just finished writing a book about dignity in bioethics. Much of it was a defence against the allegation that dignity is hopelessly amorphous; feel-good philosophical window-dressing; the name we give to whatever principle gives us the answer to a bioethical conundrum that we think is right.

This allegation usually comes from the thoroughgoing autonomists – people who think that autonomy is the only principle we need. There aren’t many of them in academic ethics, but there are lots of them in the ranks of the professional guideline drafters (look, for instance, at the GMC’s guidelines on consenting patients), and so they have an unhealthy influence on the zeitgeist.

The allegation is ironic. The idea of autonomy is hardly less amorphous. To give it any sort of backbone you have to adopt an icy, unattractive, Millian, absolutist version of autonomy. I suspect that the widespread adoption of this account is a consequence not of a reasoned conviction that this version is correct, but of a need, rooted in cognitive dissonance, to maintain faith with the fundamentalist notions that there is a single principle in bioethics, and that that principle must keep us safe from the well-documented evils of paternalism. Autonomy-worship is primarily a reaction against paternalism. Reaction is not a good way to philosophise.

You want to publish? Let’s hear all your dirty secrets

By Charles Foster

Most scientific journals require contributors to declare any conflict of interest.

But what about ethicists? We are much more ambitious and presumptuous in our aims than most scientists. We purport to tell our readers not which drug will reduce their blood cholesterol, or which type of plate is best for their radial fracture, but how best to live: how to make right decisions about things that matter far more than cholesterol; how to be the right sort of people. If we write good papers, amounting to more than newspaper opinion pieces, the papers support their conclusions with supposedly objective reasoning. We try to look scientific. And yet, try as we might, we can’t escape from our own histories and tendencies. If an ethicist has been sexually abused as a boy by a paedophilic priest, or forced to watch US evangelical TV, he’ll never be able to think that religion is anything but evil or ridiculous, and his articles will argue, with apparent but wholly fake objectivity, towards that conclusion. If the Jesuits got him before the age of 7, and etched the catechism into his subconscious rather than buggering him, the man they made out of the boy will be theirs for ever, in the Journal of Medical Ethics just as devoutly as in the confessional. And yet there’ll be not a whisper of a warning next to his papers. Those influences are likely to be far more determinative of the views expressed than any financial conflict of interest in a drug trial ever was. Everything about an ethicist’s life raises a potential conflict of interest.

Beauty, brains, and the halo effect

by Alexandre Erler

Satoshi Kanazawa is currently in the news – see e.g. these articles in the Daily Mail, The Australian and Psychology Today. An evolutionary psychologist at the London School of Economics, Kanazawa has just published a new article in the journal Intelligence (Kanazawa 2011) in which he argues, in continuity with his previous research, that beautiful people tend to be more intelligent than plainer ones (especially if they are men). Only now he is arguing that this correlation may be much stronger than we previously thought. His conclusion is based on data from two studies, conducted respectively in the UK and the US, which tested the intelligence of children and young teenagers but also rated their level of physical attractiveness. In the British study, attractive respondents had a mean IQ about 13 points higher than unattractive ones, and the beauty-intelligence correlation turned out to be of a similar magnitude to that between intelligence and education.

Continue reading

Predictors of Alzheimer’s vs. the Hammer of Witches

Matthew L Baum

Round 1: Baltimore
I first heard of the Malleus Maleficarum, or The Hammer of Witches, last year when I visited Johns Hopkins Medical School in Baltimore, MD, USA. A doctor for whom I have great respect introduced me to the dark leather-bound tome, which he pulled off his bookshelf. Apparently, this aptly named book, published in the mid-1400s, was used back in the day by witch-hunters as a diagnostic manual of sorts to identify witches. Because all the witch-hunters used the same criteria as outlined in The Hammer to tell who was a witch, they all, more or less, identified the same people as witches. Consequently, the cities, towns, and villages all enjoyed a time of highly precise witch wrangling. This was fine and good until people realized that there was a staggering problem with the validity of these diagnoses. Textbook examples (or Hammer-book examples) these unfortunates may have been, but veritable wielders of the dark arts they were not. The markers of witchcraft these hunters agreed upon, though precise and reliable, simply were not valid.
Incidentally… avoiding the problem of incidental findings

A new study from the Mayo Clinic in the United States points to a frequent problem in certain types of medical research. When healthy volunteers or patients with a given condition take part in research studies, they may have brain scans, CAT scans, blood tests or genetic tests that they wouldn’t otherwise have had. These tests are not done for the benefit of the individual; they are designed to answer a research question. But sometimes, quite often according to the authors of this new study, researchers may spot something on the scan that shouldn’t be there, and that could indicate a previously undiagnosed health condition. These ‘incidental findings’ generate an ethical dilemma for researchers. Should they tell the research participant about the shadow seen on their scan? Do they have an obligation to reveal to a research participant that they carry a gene increasing their risk of breast cancer, or Alzheimer’s disease? There is much agonising by ethics committees, ethicists and researchers about the problem of incidental findings, but there is a simple way of avoiding it: anonymise research databases and tests so that there is no possibility of determining which participant has the breast cancer gene, or the lump in their kidney.
