by Dominic Wilkinson @Neonatalethics
A critically ill infant in intensive care (let us call him Jonas) has serious congenital abnormalities affecting his liver and brain.1 Doctors looking after Jonas suspect that he may have a major genetic problem. They have recommended testing for Jonas, to help determine whether he does have this problem.
However, Jonas’ parents have refused consent for the genetic test. They are concerned that the test could be used to discriminate against Jonas and against them; they have repeatedly indicated that they will not agree to it being performed.
Could it ever be ethical to perform genetic testing on a child against parental wishes?
Brenda Kelly and Charles Foster
Female Genital Mutilation (‘FGM’) is a term covering various procedures involving partial or total removal of the external female genitalia, or other injury to the female genital organs, for non-medical reasons (WHO, 2012). It can be associated with immediate and long-term physical and psychological health problems. FGM is prevalent in Africa, the Middle East and South East Asia, as well as within diaspora communities from these countries.
The Government, keenly aware of the political capital in FGM, has come down hard. The Serious Crime Act 2015 makes it mandatory to report to the police cases of FGM in girls under the age of 18. While we have some issues with that requirement, it is at least concordant with the general law of child protection.
What is of more concern is a requirement introduced by the cowardly device of a Ministerial Direction, after only the most cursory consultation (in which the GMC and the RCOG hardly covered themselves in glory). From October 2015, healthcare professionals are legally obliged to submit patient-identifiable information to the Department of Health (‘DOH’) on every female patient with FGM who presents for whatever reason, through the Enhanced Dataset Collection (EDC). The majority of these women will have undergone FGM in their country of origin before coming to the UK.
Many people are suspicious of having their emotions, thoughts or behaviour manipulated by external influences, be they drugs or advertising. However, it seems that – unbeknown to most of us – a considerable number of foreign entities exist within our own bodies. These entities can change our psychology to a surprisingly large degree. And they pursue their own interests – which do not necessarily coincide with ours.
Last week I attended a conference on the science of consciousness in Helsinki. While there, I attended a very interesting session on the Minimally Conscious State (MCS). This is a state that follows severe brain damage. Those diagnosed as MCS are thought to have some kind of conscious mental life, unlike those in Vegetative State. If that is right – so say many bioethicists and scientists – then the moral implications are profound. But what kind of conscious mental life is a minimally conscious mental life? What kind of evidence can we muster for an answer to this question? And what is the moral significance of whatever answer we favor? One takeaway from the session (for me, at least): it’s complicated.
Written By: Roy Gilbar, Netanya Academic College, Israel, and Charles Foster
In the recent case of ABC v St. George’s Healthcare NHS Trust and others,1 [http://www.bailii.org/ew/cases/EWHC/QB/2015/1394.html] a High Court judge decided that:
(a) where the defendants (referred to here jointly as ‘X’) knew that Y, a prisoner, was suffering from Huntington’s Disease (‘HD’); and
(b) X knew that Y had refused permission to tell Y’s daughter, Z (the claimant), that he had HD (and accordingly that there was a 50% chance that Z had it, and that if Z had it there was, correspondingly, a 50% chance that the fetus she was then carrying would have HD),
X had no duty to tell Z that Y was suffering from HD. Z said that if she had known of Y’s condition, she would have had an abortion.
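The nested probabilities in the case can be made explicit. HD is autosomal dominant, so each child of an affected parent has a 50% chance of inheriting it; a minimal sketch of the arithmetic (illustrative only, not part of the judgment):

```python
# Inheritance arithmetic in the ABC case (illustrative sketch).
# HD is autosomal dominant: each child of an affected parent has a
# 50% chance of inheriting the mutation.

p_z_has_hd = 0.5        # Z's chance, given that her father Y has HD
p_fetus_given_z = 0.5   # the fetus's chance, given that Z has HD

# Unconditional chance that the fetus carries the mutation,
# known only that Y is affected:
p_fetus = p_z_has_hd * p_fetus_given_z
print(p_fetus)  # 0.25
```

In other words, at the time of the claim the fetus had a one-in-four chance of carrying the HD mutation.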
The Court of Protection is due to review very soon the case of a teenager with a relapsed brain tumour. The young man had been diagnosed with the tumour as a baby, but it has apparently come back and spread so that according to his neurosurgeon he has been “going in and out of a coma”. In February, the court heard from medical specialists that he was expected to die within two weeks, and authorized doctors to withhold chemotherapy, neurosurgery and other invasive treatments, against the wishes of the boy’s parents.
However, three months after that ruling, the teenager is still alive, and so the court has been asked to review its decision. What should we make of this case? Were doctors and the court wrong?
Let us suppose we have a treatment and we want to find out if it works. Call this treatment drug X. While we have observational data that it works (patients say it works, or it appears to work on certain tests), observational data can be misleading. As Edzard Ernst writes:
Whenever a patient or a group of patients receive a medical treatment and subsequently experience improvements, we automatically assume that the improvement was caused by the intervention. This logical fallacy can be very misleading […] Of course, it could be the treatment—but there are many other possibilities as well.
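One of Ernst’s “other possibilities” is regression to the mean, and a toy simulation (my own illustration, not from Ernst) shows how strong it can be: if patients seek treatment when their symptoms flare up, most will improve afterwards even if the treatment is inert.

```python
# Toy simulation (illustrative assumption, not real data): patients whose
# symptom scores fluctuate around a stable baseline seek treatment during
# a flare-up. Most then "improve" with no treatment effect at all.
import random

random.seed(0)
n = 10_000
improved = 0
for _ in range(n):
    before = random.gauss(50, 10) + 15  # score when seeking care (flare-up)
    after = random.gauss(50, 10)        # later score: ordinary fluctuation
    if after < before:                  # lower score = improvement
        improved += 1

print(f"{improved / n:.0%} 'improved' on an inert treatment")
```

Under these assumptions a large majority of patients improve, which is exactly why uncontrolled before-and-after observations cannot establish that drug X works.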
In a recent issue of the Journal of Medical Ethics, Thomas Ploug and Søren Holm point out that scientific communities can sometimes get pretty polarized. This happens when two different groups of researchers consistently argue for (more or less) opposite positions on some hot-button empirical issue.
The examples they give are: debates over the merits of breast cancer screening and the advisability of prescribing statins to people at low risk of heart disease. Other examples come easily to mind. The one that pops into my head is the debate over the health benefits vs. risks of male circumcision—which I’ve covered in some detail here, here, here, here, and here.
When I first started writing about this issue, I was pretty “polarized” myself. But I’ve tried to step back over the years to look for middle ground. Once you realize that your arguments are getting too one-sided, it’s hard to go on producing them without making some adjustments. At least, it is without losing credibility — and no small measure of self-respect.
This point will become important later on.
Nota bene! According to Ploug and Holm, disagreement is not the same as polarization. Instead, polarization only happens when researchers:
(1) Begin to self-identify as proponents of a particular position that needs to be strongly defended beyond what is supported by the data, and
(2) Begin to discount arguments and data that would normally be taken as important in a scientific debate.
But wait a minute. Isn’t there something peculiar about point number (1)?
On the one hand, it’s framed in terms of self-identification, so: “I see myself as a proponent of a particular position that needs to be strongly defended.” Ok, that much makes sense. But then it makes it sound like this position-defending has to go “beyond what is supported by the data.”
But who would self-identify as someone who makes inadequately supported arguments?
We might chalk this up to ambiguous phrasing. Maybe the authors mean that (in order for polarization to be diagnosed) researchers have to self-identify as “proponents of a particular position,” while the part about “beyond the data” is what an objective third-party would say about the researchers (even if that’s not what they would say about themselves). It’s hard to know for sure.
But the issue of self-identification is going to come up again in a minute, because I think it poses a big problem for Ploug and Holm’s ultimate proposal for how to combat polarization. To see why, though, I have to say a little bit more about what their overall suggestion is in the first place.