
Protecting Children or Policing Gender?

Laws on genital mutilation, gender affirmation and cosmetic genital surgery are at odds. The key criteria should be medical necessity and consent.

By Brian D. Earp (@briandavidearp)


In Ohio, USA, lawmakers are currently considering the Save Adolescents from Experimentation (SAFE) Act, which would ban hormone treatments and surgeries for minors who identify as transgender or non-binary. In April this year, Alabama passed similar legislation.

Alleging anti-trans prejudice, opponents of such legislation say these bans will stop trans youth from accessing necessary healthcare, citing guidance from the American Psychiatric Association, the American Medical Association and the American Academy of Pediatrics.

Providers of gender-affirming services point out that puberty-suppressing medications and hormone therapies are considered the standard of care for trans adolescents who qualify. Neither is administered before puberty, with younger children receiving psychosocial support only. Meanwhile, genital surgeries for gender affirmation are rarely performed before age 18.

Nevertheless, proponents of the new laws say they are needed to protect vulnerable minors from understudied medical risks and potentially lifelong bodily harms. Proponents note that irreversible mastectomies are increasingly performed before the age of legal majority.

Republican legislators in several states argue that if a child’s breasts or genitalia are ‘healthy’, there is no medical or ethical justification to use hormones or surgeries to alter those parts of the body.

However, while trans adolescents struggle to access voluntary services and rarely undergo genital surgeries prior to adulthood, non-trans-identifying children in the United States and elsewhere are routinely subjected to medically unnecessary surgeries affecting their healthy sexual anatomy — without opposition from conservative lawmakers.


Event Summary: Hope in Healthcare – a talk by Professor Steve Clarke

In a special lecture on 14 June 2022, Professor Steve Clarke presented work co-authored with Justin Oakley, ‘Hope in Healthcare’.

It is widely supposed that it is important to imbue patients undergoing medical procedures with a sense of hope. But why is hope so important in healthcare, if indeed it is? We examine the answers currently on offer and show that none does enough to properly explain the importance often attributed to hope in healthcare. We then identify a hitherto unrecognised reason for supposing that it is important to imbue patients undergoing significant medical procedures with hope, which draws on prospect theory, Kahneman and Tversky’s hugely influential descriptive theory of decision making in situations of risk and uncertainty. We also consider some concerns that our account raises about patient consent and the potential manipulation of patients. Finally, we consider some complications for the account raised by religious sources of hope, which are commonly drawn on by patients undergoing major healthcare procedures.

Bio: Steve Clarke is a Professor in the Centre for Applied Philosophy and Public Ethics, Charles Sturt University, and a Senior Research Associate in the Uehiro Centre for Practical Ethics at the University of Oxford.

This lecture was jointly organised between the Wellcome Centre for Ethics and Humanities and Oxford Uehiro Centre for Practical Ethics.

Recordings available at Oxford Podcasts.


Returning To Personhood: On The Ethical Significance Of Paradoxical Lucidity In Late-Stage Dementia

By David M Lyreskog

About Dementia

Dementia is a class of medical conditions which typically impair our cognitive abilities and significantly alter our emotional and personal lives. The large majority of dementia cases – approximately 70% – are caused by Alzheimer’s disease. Other causes include cardiovascular conditions, Lewy body disease, and Parkinson’s disease. In the UK alone, it is estimated that over 1 million people are currently living with dementia, and that care costs amount to approximately £38 billion a year. Globally, it is estimated that over 55 million people live with dementia in some form, a figure expected to grow by 10 million per year, and the cost of care exceeds £1 trillion. As such, dementia is widely regarded as one of the main medical challenges of our time, along with cancer and infectious diseases. In response, large sums have been invested over recent decades in the search for solutions; the UK government alone spends over £75 million per year on the search for improved diagnostics, effective treatments, and cures. Yet dementia remains an enigma, and continues to elude our grasp.


Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI, consisting of two parts. The first is a core deep machine learning algorithm. Such blackbox algorithms may be more accurate than human judgment or interpretable algorithms, but they are notoriously opaque: they tell us nothing about the basis on which a decision was made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information in the image to the features that distinguish the bird, and it is sufficiently fine-grained to account for why the bird in the picture is not a similar bird such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.
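The two-part structure can be made concrete with a toy sketch. Here a simple feature-matching function stands in for the opaque primary classifier, and a separate function builds the post-hoc justification; the class names, feature lists, and wording are illustrative stand-ins, not the actual system described in the visual-classification literature.

```python
# Minimal sketch of a two-part "justifying AI". The feature lists and
# justification wording are hypothetical illustrations.

# Discriminating features per class, consulted only by the secondary algorithm.
CLASS_FEATURES = {
    "Western Grebe": ["long white neck", "pointy yellow beak", "red eyes"],
    "Laysan Albatross": ["hooked pale beak", "dark eye patch"],
}

def blackbox_predict(image_features):
    """Stand-in for the opaque primary algorithm: returns a label only,
    with no account of why. A real deep net would score raw pixels."""
    scores = {
        label: len(set(feats) & set(image_features))
        for label, feats in CLASS_FEATURES.items()
    }
    return max(scores, key=scores.get)

def justify(label, image_features):
    """Secondary algorithm: builds a post-hoc justification linking the
    image's features to those distinguishing the predicted class, and
    notes why similar classes were ruled out."""
    supporting = [f for f in CLASS_FEATURES[label] if f in image_features]
    text = f"This is a {label} because the bird has " + ", ".join(supporting) + "."
    for rival, rival_feats in CLASS_FEATURES.items():
        if rival != label and not set(rival_feats) & set(image_features):
            text += f" It is not a {rival}, which would show " + ", ".join(rival_feats) + "."
    return text

observed = ["long white neck", "pointy yellow beak", "red eyes"]
label = blackbox_predict(observed)
print(justify(label, observed))
```

The point of the separation is that the justification is generated after the fact: nothing guarantees it reflects how the primary algorithm actually reached its decision, which is what drives the worries discussed below.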

Possibly, it might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which an algorithm deemed asthmatics to be at lower risk of dying from pneumonia, and as a result recommended less aggressive treatment for asthmatics who contracted pneumonia. The primary algorithm’s key mistake was failing to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the blackbox gives an implausible recommendation or prediction, we could hope to do better than relying on the blackbox alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad justifications. The thought here is that the secondary algorithm’s generating a bad justification is good evidence that the primary AI has misfired.
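The override policy just described can be sketched as a simple decision rule: defer to the blackbox unless its justification contains a claim clinicians recognise as implausible. The plausibility check below is a hypothetical stand-in for physician review, and the recommendation strings are invented for illustration.

```python
# Hedged sketch of the override policy: follow the AI unless its
# post-hoc justification is flagged as medically implausible, in which
# case fall back on clinician judgment. The flagged-claims set is a
# hypothetical stand-in for physician review.

IMPLAUSIBLE_CLAIMS = {"asthma lowers pneumonia risk"}

def decide(ai_recommendation, ai_justification, clinician_recommendation):
    """Return the AI's recommendation unless its justification contains
    a claim clinicians recognise as medically implausible."""
    if any(claim in ai_justification for claim in IMPLAUSIBLE_CLAIMS):
        return clinician_recommendation
    return ai_recommendation

# A Caruana-style case: the AI under-treats asthmatics because it
# learned a confounded association, and its justification betrays this.
choice = decide(
    ai_recommendation="outpatient care",
    ai_justification="lower risk because asthma lowers pneumonia risk",
    clinician_recommendation="aggressive inpatient treatment",
)
print(choice)  # the implausible justification triggers the override
```

Note that this rule only catches misfires whose justifications a reviewer can recognise as bad, which is exactly the limitation the next paragraph raises.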

The worry here is that our existing medical knowledge is notoriously incomplete in places. We should expect cases in which the optimal decision vis-à-vis patient welfare has no plausible medical justification, at least by the lights of our current medical knowledge. For instance, lithium is used as a mood stabiliser, but why it works is poorly understood. This means that ignoring the blackbox whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to suboptimal decisions. Below are three observations we might make about this type of justifying AI.


Cross Post: Is This the End of the Road for Vaccine Mandates in Healthcare?

Written by Dominic Wilkinson, Alberto Giubilini, and Julian Savulescu

The UK government recently announced a dramatic U-turn on the COVID vaccine mandate for healthcare workers, originally scheduled to take effect on April 1 2022. Health or social care staff will no longer need to provide proof of vaccination to stay employed. The reason, as health secretary Sajid Javid made clear, is that “it is no longer proportionate”.

There are several reasons why it was the right decision at this point to scrap the mandate. Most notably, omicron causes less severe disease than other coronavirus variants; many healthcare workers have already had the virus (potentially giving them immunity equivalent to the vaccine); vaccines are not as effective at preventing re-infection and transmission of omicron; and less restrictive alternatives are available (such as personal protective equipment and lateral flow testing of staff).

Cross Post: Vaccine Mandates For Healthcare Workers Should Be Scrapped – Omicron Has Changed The Game

Written by Dominic Wilkinson, Jonathan Pugh and Julian Savulescu

Time is running out for National Health Service staff in England who have not had a COVID vaccine. Doctors and nurses have until Thursday, February 3, to have their first jab. If they don’t, they will not be fully immunised by the beginning of April and could be dismissed.

But there are reports this week that the UK government is debating whether to postpone the COVID vaccine mandate for healthcare staff. Would that be the right thing to do?

Vaccine requirements are controversial and have led to worldwide protests. Those in favour have argued that it is necessary and proportionate to protect vulnerable patients by making vaccination a condition of employment for healthcare staff. But critics have argued that vaccine mandates amount to a violation of human rights.

Cross Post: Pig’s Heart Transplant: Was David Bennett the Right Person to Receive Groundbreaking Surgery?

Dominic Wilkinson, University of Oxford

The recent world-first heart transplant from a genetically modified pig to a human generated both headlines and ethical questions.

Many of those questions related to the ethics of xenotransplantation. This is the technical term for organ transplants between species. There has been research into this for more than a century, but recent scientific developments involving genetic modifications of animals to stop the organ being rejected appear to make this much more feasible.

Typical questions about xenotransplantation relate to the risks (for example, of transmitting infection), treatment of the animals, and the ethics of genetic modification of animals for this purpose.

Event Summary: Vaccine Policies and Challenge Trials: The Ethics of Relative Risk in Public Health

St Cross Special Ethics Seminar, Presented by Dr Sarah Chan, 18 November 2021

In this St Cross Special Ethics Seminar, Dr Sarah Chan explores three key areas of risk in ‘challenge trials’ – the deliberate infection of human participants with infectious agents as a tool for vaccine development and for improving our knowledge of disease biology. Dr Chan explores a) whether some forms of challenge trials cannot be ethically justified; b) why stratifying populations for vaccine allocation by risk profile can result in unjust risk distribution; and c) how comparing these cases and evaluating relative risk reveals flaws in our approach to pandemic public health.


Compromising On the Right Not to Know?

Written by Ben Davies

Personal autonomy is the guiding light of contemporary clinical and research practice, at least in the UK. Whether someone is a potential participant in a research trial, or a patient being treated by a medical professional, the gold standard, violated only in extremis, is that they should decide for themselves whether to go ahead with a particular intervention, on the basis of as much relevant information as possible.

Roger Crisp recently discussed Professor Gopal Sreenivasan’s New Cross seminar, which argued against a requirement for informational disclosure in consenting to research participation. Sreenivasan’s argument was, at least in its first part, based on a straightforward appeal to autonomy: if autonomy is what matters most, I should have the right to autonomously refuse information.

I have previously outlined a related argument in a clinical context, in which I sought to undermine arguments against a putative ‘Right Not to Know’ that are themselves based in autonomy. In brief, my argument is, firstly, that a decision can itself be autonomous without promoting the agent’s future or overall autonomy and, second, that even if there is an autonomy-based moral duty to hear relevant information (as scholars such as Rosamond Rhodes argue), we can still have a right that people not force us to hear such information.

In a recent paper, Julian Savulescu and I go further into the details of the Right Not to Know, setting out the scope for a degree of compromise between the two central camps.


Special St Cross Seminar Summary of Maureen Kelley’s ‘Fighting Diseases of Poverty Through Research: Deadly Dilemmas, Moral Distress and Misplaced Responsibilities’

Written By Tess Johnson

You can find the video recording of Maureen Kelley’s seminar here, and the podcast here.

Lately, we have heard much in the media about disease transmission in conditions of poverty, given the crisis levels of COVID-19 spread and mortality that India is experiencing. Yet much of the conversation centres on the ‘proximal’ – or more direct – causes of morbidity and mortality, rather than the ‘structural determinants’ – the underlying, systemic conditions that lead to disease vulnerability in a population. As a result, much global health research is focussed on infectious disease treatment and prevention, rather than on responses to the complex political, economic and social needs that underlie disease in vulnerable communities. This can result not only in less efficient and effective research, but also in moral distress for researchers, and a disconnect between research goals and the responsibility researchers feel for addressing a community’s immediate needs.

In her Special St Cross Seminar last week, Maureen Kelley introduced her audience to these problems in global health research. Professor Kelley outlined, first, empirical findings evidencing this problem, a result of research she recently performed with the Ethox Centre’s REACH team, in collaboration with global health research teams around the world. Second, she linked this empirical work to theory on moral distress and researchers’ and institutions’ responsibilities toward participating communities in low and middle-income countries (LMICs).