
Clinical Ethics

PRESS RELEASE: Oxford-led Study Calls for End to “Medically Unnecessary” Intersex Surgeries

New International Consensus Calls for Healthcare Providers to Stop Performing Medically Unnecessary Genital Surgeries in Prepubertal Children and Infants, Regardless of Sex or Gender


Expertise and Autonomy in Medical Decision Making

Written by Rebecca Brown.

This is the fourth in a series of blogposts by the members of the Expanding Autonomy project, funded by the Arts and Humanities Research Council.

This blog is based on a paper forthcoming in Episteme. The full text is available here.

Imagine you are sick with severe headaches, dizziness and a nasty cough. You go to see a doctor. She tells you that you have a disease called maladitis and that it is treatable with a drug called anti-mal. If you take anti-mal every day for a week, the symptoms of maladitis should resolve completely. If you don’t treat the maladitis, you will continue to experience your symptoms for a number of weeks, though it should resolve eventually. In a small number of cases, maladitis can become chronic. She also tells you about some side-effects of anti-mal: it can cause nausea, fatigue and an itchy rash. But since these are generally mild and temporary, your doctor suggests that they are worth risking in order to treat your maladitis. You have no medical training and have never heard of maladitis or anti-mal before. What should you do?

One option is that you a) form the belief that you have maladitis and b) take the anti-mal to treat it. Your doctor, after all, has relevant training and expertise in this area, and she believes that you have maladitis and should take anti-mal.

Abortion in Wonderland

By Charles Foster


Image: Heidi Crowter: Copyright Don’t Screen Us Out

Scene: A pub in central London

John: They did something worthwhile there today, for once, didn’t they? [He motions towards the Houses of Parliament]

Jane: What was that?

John: Didn’t you hear? They’ve passed a law saying that a woman can abort a child up to term if the child turns out to have red hair.

Jane: But I’ve got red hair!

John: So what? The law is about the fetus. It has nothing whatever to do with people who are actually born.

Jane: Eh?

That’s the gist of the Court of Appeal’s recent decision in the case of Aidan Lea-Wilson and Heidi Crowter (now married and known as Heidi Carter).

The Homeric Power of Advance Directives

By Charles Foster

[Image: Ulysses and the Sirens: John William Waterhouse, 1891: National Gallery of Victoria, Melbourne]

We shouldn’t underestimate Homer’s hold on us. Whether or not we’ve ever read him, he created many of our ruling memes.

I don’t think it’s fanciful (though it might be ambitious) to suggest that he, and the whole heroic ethos, are partly responsible for our uncritical adoption of a model of autonomy which doesn’t do justice to the sort of creatures we really are. That’s a big claim. I can’t justify it here. But one manifestation of that adoption is our exaggerated respect for advance directives – declarations, made while one is capacitous, about how one would like to be treated if incapacitous, which become binding if incapacity supervenes and (in English law) the declaration is ‘valid and applicable’.

I suspect that some of this respect comes from the earliest and most colourful advance directive story ever: Odysseus and the Sirens.

Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first is a core deep machine learning algorithm. These blackbox algorithms may be more accurate than human judgment or than interpretable algorithms, but they are notoriously opaque about the basis on which a decision was made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information in the image to the features that distinguish the bird. It is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.
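To make the two-part structure concrete, here is a minimal, purely illustrative sketch in Python. The class and method names are invented for this illustration and are not drawn from the post or from any real system: an opaque primary classifier produces a label, and a separate secondary model maps the same input and that label to a human-readable rationale.

```python
from dataclasses import dataclass

@dataclass
class Diagnosis:
    label: str          # e.g. "Western Grebe", or a medical diagnosis
    confidence: float   # the blackbox model's confidence in the label

class BlackboxClassifier:
    """Stand-in for the opaque primary model (part one)."""
    def predict(self, image) -> Diagnosis:
        # A real system would run a trained deep network here.
        return Diagnosis(label="Western Grebe", confidence=0.97)

class PostHocJustifier:
    """Stand-in for the secondary model (part two), which generates a
    rationale for the primary model's output without inspecting its
    internal workings."""
    def justify(self, image, diagnosis: Diagnosis) -> str:
        # A real system would map (input, predicted label) to a
        # justification grounded in discriminating features.
        return (f"Classified as {diagnosis.label} because the bird has a "
                f"long white neck, a pointy yellow beak and red eyes.")

def justifying_ai(image):
    """Run the primary blackbox, then attach a post-hoc justification."""
    diagnosis = BlackboxClassifier().predict(image)
    rationale = PostHocJustifier().justify(image, diagnosis)
    return diagnosis, rationale
```

The point of the structure is that the justifier never opens the blackbox; it only offers reasons a clinician (or birder) could assess, which is what the observations below probe.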

It might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al present a case in which an algorithm deemed asthmatics less at risk of dying from pneumonia and, as a result, recommended less aggressive treatment for asthmatics who contracted pneumonia. The key mistake the primary algorithm made was that it failed to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the blackbox gives an implausible recommendation or prediction, we could hope to do better than we would by relying on the blackbox alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad ones. The thought is that the secondary algorithm’s generating a bad justification is good evidence that the primary AI has misfired.

The worry here is that our existing medical knowledge is notoriously incomplete in places. It is to be expected that there will be cases where the optimal decision vis-à-vis patient welfare does not have a plausible medical justification, at least on our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the blackbox whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to suboptimal decisions. Below are three observations that we might make about this type of justifying AI.


Cognitive snobbery: The Unacceptable Bias in Favour of the Conscious

There are many corrosive forms of discrimination. But one of the most dangerous is the bias in favour of consciousness, and the consequent denigration of the unconscious.

We see it everywhere. It’s not surprising. For when we’re unreflective – which is most of the time – we tend to suppose that we are our conscious selves, and that the unconscious is a lower, cruder part of us; a seething atavistic sea full of monsters, from which we have mercifully crawled, making our way ultimately to the sunlit uplands of the neocortex, there to gaze gratefully and dismissively back at what we once were. It’s a picture encoded in our self-congratulatory language: ‘Higher cognitive function’; ‘She’s not to be blamed: she wasn’t fully conscious of the consequences.’; ‘In the Enlightenment we struck off the shackles of superstition and freed our minds to roam.’

Philosophical Fiddling While the World Burns

By Charles Foster

An unprecedented editorial has just appeared in many health journals across the world. It relates to climate change.

The authors say that they are ‘united in recognising that only fundamental and equitable changes to societies will reverse our current trajectory.’

Climate change, they agree, is the major threat to public health. Here is an excerpt (there will be nothing surprising here):

‘The risks to health of increases above 1.5°C are now well established. Indeed, no temperature rise is “safe.” In the past 20 years, heat related mortality among people aged over 65 has increased by more than 50%. Higher temperatures have brought increased dehydration and renal function loss, dermatological malignancies, tropical infections, adverse mental health outcomes, pregnancy complications, allergies, and cardiovascular and pulmonary morbidity and mortality. Harms disproportionately affect the most vulnerable, including children, older populations, ethnic minorities, poorer communities, and those with underlying health problems.’

Is a Publication Boycott of Chinese Science a Justifiable Response to Human Rights Violations Perpetrated by Chinese Doctors and Scientists?

By Doug McConnell

Recently the editor-in-chief of the Annals of Human Genetics, Prof David Curtis, resigned from his position, in part because the journal’s publisher, Wiley, refused to publish a letter he co-authored with Thomas Schulze, Yves Moreau, and Thomas Wenzel. In that letter, they argue in favour of a boycott on Chinese medical and scientific publications as a response to the serious human rights violations happening in China. Several other leading journals (the Lancet, the BMJ and JAMA) have also refused to publish the letter, claiming that a boycott against China would be unfair and counterproductive.

This raises two separate ethical issues: 1. Should journals refuse to publish a letter arguing in favour of a boycott on Chinese medical and scientific publications? 2. Should journals actually establish a boycott on Chinese medical and scientific publications?

‘Waiver or Understanding? A Dilemma for Autonomists about Informed Consent’

by Roger Crisp

At a recent New St Cross Ethics seminar, Gopal Sreenivasan, Crown University Distinguished Professor in Ethics at Duke University and currently a visitor at Corpus Christi College and the Oxford Uehiro Centre, gave a fascinating lecture on whether valid informed consent requires that the consenter have understood the relevant information about what they are being asked to consent to. Gopal argued that it doesn’t.