Many people are wary of having their emotions, thoughts or behaviour manipulated by external influences, be those drugs or advertising. However, it seems that – unbeknown to most of us – a considerable number of foreign entities exist within our own bodies. These entities can change our psychology to a surprisingly large degree. And they pursue their own interests – which do not necessarily coincide with ours.
Last week I attended a conference on the science of consciousness in Helsinki. While there, I attended a very interesting session on the Minimally Conscious State (MCS). This is a state that follows severe brain damage. Those diagnosed as MCS are thought to have some kind of conscious mental life, unlike those in Vegetative State. If that is right – so say many bioethicists and scientists – then the moral implications are profound. But what kind of conscious mental life is a minimally conscious mental life? What kind of evidence can we muster for an answer to this question? And what is the moral significance of whatever answer we favor? One takeaway from the session (for me, at least): it’s complicated.
Written By: Roy Gilbar, Netanya Academic College, Israel, and Charles Foster
In the recent case of ABC v St. George’s Healthcare NHS Trust and others [http://www.bailii.org/ew/cases/EWHC/QB/2015/1394.html], a High Court judge decided that:
(a) where the defendants (referred to here jointly as ‘X’) knew that Y, a prisoner, was suffering from Huntington’s Disease (‘HD’); and
(b) X knew that Y had refused permission to tell Y’s daughter, Z (the claimant), that he had HD (and accordingly that there was a 50% chance that Z had it and, if Z had it, a corresponding 50% chance that the fetus she was then carrying would have HD),
X had no duty to tell Z that Y was suffering from HD. Z said that if she had known of Y’s condition, she would have had an abortion.
The Court of Protection is due to review very soon the case of a teenager with a relapsed brain tumour. The young man had been diagnosed with the tumour as a baby, but it has apparently come back and spread so that according to his neurosurgeon he has been “going in and out of a coma”. In February, the court heard from medical specialists that he was expected to die within two weeks, and authorized doctors to withhold chemotherapy, neurosurgery and other invasive treatments, against the wishes of the boy’s parents.
However, three months after that ruling, the teenager is still alive, and so the court has been asked to review its decision. What should we make of this case? Were doctors and the court wrong?
Let us suppose we have a treatment and we want to find out if it works. Call this treatment drug X. While we may have observational data suggesting that it works—that is, patients say it works, or it appears to work given certain tests—observational data can be misleading. As Edzard Ernst writes:
Whenever a patient or a group of patients receive a medical treatment and subsequently experience improvements, we automatically assume that the improvement was caused by the intervention. This logical fallacy can be very misleading […] Of course, it could be the treatment—but there are many other possibilities as well.
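One of those other possibilities is regression to the mean: patients tend to seek treatment when their symptoms are at their worst, so their scores will often improve afterwards even if the treatment does nothing. The sketch below simulates this with entirely invented numbers (the symptom scale, baselines, noise level and enrolment threshold are all assumptions for illustration) — no treatment is applied at any point, yet the group improves.

```python
import random

random.seed(0)

# Hypothetical model: each patient's symptom score fluctuates randomly
# around a stable personal baseline. No treatment is ever applied.
def measure(baseline):
    return baseline + random.gauss(0, 10)  # day-to-day noise

# A population of patients with baselines around 50 (arbitrary scale).
patients = [random.gauss(50, 5) for _ in range(100_000)]

# Patients enrol (and are first measured) on a bad day: score > 60.
enrolled = [(b, s) for b in patients for s in [measure(b)] if s > 60]

before = sum(s for _, s in enrolled) / len(enrolled)
after = sum(measure(b) for b, _ in enrolled) / len(enrolled)

print(f"mean score at enrolment: {before:.1f}")
print(f"mean score at follow-up: {after:.1f}")  # lower, despite no treatment
```

The follow-up mean falls back towards the baseline simply because the enrolment measurement was selected for being unusually high — exactly the kind of spontaneous "improvement" that an uncontrolled before/after comparison would misattribute to drug X.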
In a recent issue of the Journal of Medical Ethics, Thomas Ploug and Søren Holm point out that scientific communities can sometimes get pretty polarized. This happens when two different groups of researchers consistently argue for (more or less) opposite positions on some hot-button empirical issue.
The examples they give are: debates over the merits of breast cancer screening and the advisability of prescribing statins to people at low risk of heart disease. Other examples come easily to mind. The one that pops into my head is the debate over the health benefits vs. risks of male circumcision—which I’ve covered in some detail here, here, here, here, and here.
When I first started writing about this issue, I was pretty “polarized” myself. But I’ve tried to step back over the years to look for middle ground. Once you realize that your arguments are getting too one-sided, it’s hard to go on producing them without making some adjustments. At least, it is without losing credibility — and no small measure of self-respect.
This point will become important later on.
Nota bene! According to Ploug and Holm, disagreement is not the same as polarization. Instead, polarization only happens when researchers:
(1) Begin to self-identify as proponents of a particular position that needs to be strongly defended beyond what is supported by the data, and
(2) Begin to discount arguments and data that would normally be taken as important in a scientific debate.
But wait a minute. Isn’t there something peculiar about point number (1)?
On the one hand, it’s framed in terms of self-identification, so: “I see myself as a proponent of a particular position that needs to be strongly defended.” Ok, that much makes sense. But then it makes it sound like this position-defending has to go “beyond what is supported by the data.”
But who would self-identify as someone who makes inadequately supported arguments?
We might chalk this up to ambiguous phrasing. Maybe the authors mean that (in order for polarization to be diagnosed) researchers have to self-identify as “proponents of a particular position,” while the part about “beyond the data” is what an objective third-party would say about the researchers (even if that’s not what they would say about themselves). It’s hard to know for sure.
But the issue of self-identification is going to come up again in a minute, because I think it poses a big problem for Ploug and Holm’s ultimate proposal for how to combat polarization. To see why, though, I have to say a little bit more about what their overall suggestion is in the first place.
Since it was revealed that Andreas Lubitz—the co-pilot thought to be responsible for voluntarily crashing Germanwings Flight 9525 and killing 149 people—suffered from depression, a debate has ensued over whether privacy laws regarding medical records in Germany should be less strict when it comes to professions that carry special responsibilities.
The discussion that the scientists in Nature and Science called for should remain grounded in reality, not drift into talk of superhumans
Just over a week ago, prominent scientists writing in Nature and Science called for a ban on DNA modification in human embryos. This is because the scientists presume that it may now actually be possible to alter the genome of a human embryo in order to treat genetic diseases. Such modification would result in altered DNA in germ cells that would be inherited by future generations. The scientists wish to have a full ethical, legal, and public discussion before any germ-line modifications are made. Furthermore, issues of safety are of importance.
The scientists’ statement is of the utmost importance, and hopefully this ethical, legal, and public discussion will emerge. However, the discussion of germ-line DNA modification is in danger if the debate is taken to the level of science-fiction superhumans, as has already happened. Not only can such discussion cause unnecessary public worry, it also leads the deliberation away from the actual and urgent questions.
Guest Post by Bill Gardner @Bill_Gardner
Many researchers and physicians assert that randomized clinical trials (RCTs) are the “gold standard” for evidence about what works in medicine. But many others have pointed to both strengths and limitations in RCTs (see, for example, Austin Frakt’s comments on Angus Deaton here). Nancy Cartwright is a major philosopher of science. In this Lancet paper she provides insights into why RCTs are so highly valued and also why they are by themselves insufficient to answer the most important questions in medicine.
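A minimal sketch of why randomization is so highly valued, using entirely invented numbers (the true treatment effect, severity distribution and treatment-assignment rule below are all assumptions for illustration): in observational data, sicker patients are often likelier to receive treatment, so a naive comparison of treated and untreated groups confounds the treatment effect with severity. Assigning treatment by coin flip breaks that link.

```python
import random

random.seed(1)

TRUE_EFFECT = -5.0  # assumed: treatment lowers the symptom score by 5 points

def outcome(severity, treated):
    # Outcome reflects underlying severity, the treatment effect, and noise.
    return severity + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 2)

def mean(xs):
    return sum(xs) / len(xs)

population = [random.gauss(50, 10) for _ in range(100_000)]

# Observational setting: sicker patients (severity > 55) get treated.
obs_treated = [outcome(s, True) for s in population if s > 55]
obs_control = [outcome(s, False) for s in population if s <= 55]
obs_estimate = mean(obs_treated) - mean(obs_control)

# RCT: a coin flip decides treatment, independent of severity.
rct_treated, rct_control = [], []
for s in population:
    if random.random() < 0.5:
        rct_treated.append(outcome(s, True))
    else:
        rct_control.append(outcome(s, False))
rct_estimate = mean(rct_treated) - mean(rct_control)

print(f"true effect:            {TRUE_EFFECT:+.1f}")
print(f"observational estimate: {obs_estimate:+.1f}")  # biased by severity
print(f"RCT estimate:           {rct_estimate:+.1f}")  # close to the truth
```

In this toy setup the observational comparison can even get the sign wrong — the treated group looks worse because it started sicker — while the randomized comparison recovers the true effect. Cartwright’s point, of course, is that this internal validity is necessary but not sufficient: an unbiased estimate in a trial population does not by itself tell us what the treatment will do elsewhere.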