Mandatory Morality: When Should Moral Enhancement Be Mandatory?

By Julian Savulescu

Together with Tom Douglas and Ingmar Persson, I launched the field of moral bioenhancement. I have often been asked ‘When should moral bioenhancement be mandatory?’ I have often been told that it won’t be effective if it is not mandatory.

I have defended the possibility that it could be mandatory. In that paper with Ingmar Persson, I discussed the conditions under which mandatory moral bioenhancement that removed “the freedom to fall” might be justified: a grave threat to humanity (existential threat) with a very circumscribed limitation of freedom (namely the freedom to kill large numbers of innocent people), but with freedom retained in all other spheres. That is, large benefit for a small cost.

Elsewhere I have described this as an “easy rescue”, and have argued that some level of coercion can be used to enforce a duty of easy rescue in both individual and collective action problems.

The following algorithm captures these features, making explicit the relevant factors:

Algorithm for Moral Bioenhancement
[modified from JME 2020]

Note that this applies to all moral enhancement, not only moral bioenhancement. It applies to any intervention that exacts a cost on individuals for the benefit of others. That is, it answers the question of when the autonomy or well-being of one person can be compromised for the autonomy or well-being of others. This algorithm creates a decision procedure for answering that question.

Indeed, this algorithm was developed to answer a question that arose in the COVID pandemic: when should vaccination be mandatory?

But it applies to any social co-ordination problem in which risks or harms must be imposed to achieve a social goal. Examples include public health measures (e.g. quarantine or vaccination), environmental policies (e.g. carbon taxes), taxation policies more generally, and others.

Can large harms (including, in extreme cases, death) ever be imposed on individuals to secure extremely large collective benefits (such as the continued existence of humanity)? According to utilitarianism and many forms of consequentialism, they can be. But we need not answer this question to consider the moral justification for imposing small risks or harms for large collective benefits. We should all agree, whatever our religious or personal philosophical perspectives, that small risks or harms may be imposed for large social benefits (this is the second account below, the absolute threshold account). After all, that is what justifies mandatory seat belt laws, speed limits and taxation. And it could justify mandatory moral enhancement, such as moral education, and moral bioenhancement, should that ever be possible, if the risks or harms were equivalent and the benefits as great.

Proportionality: Relative or Absolute?

One way to think about “easy rescue” is to ask whether the proportionality of sacrifice to benefit should be relative or absolute. In a previous paper with Alberto Giubilini, Tom Douglas and Hannah Maslen, I discussed relative versus absolute thresholds. Peter Singer holds a relative threshold view, which stipulates that large individual costs are justified when the benefits for others are proportionately larger. On an absolute threshold account, by contrast, there is an upper limit to the magnitude of the cost you can impose on individuals for the collective benefit, even if beyond that threshold the cost would be proportionate to the benefit. For example, on the relative account it would be permissible to impose death on an individual to save significantly more people, because it is proportionate. Or extreme effective altruists might argue that you should give, say, 70% of your income to save people in a poverty-stricken country. On the absolute threshold account, the individual cost is not justified if it exceeds a certain threshold (so, for example, we could set the threshold much lower than the famous “kill one to save many” examples, even if the cost is proportionate to the benefit, because death is too large a cost for an individual).
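The contrast between the two accounts can be made concrete with a small sketch. This is not the published algorithm from the JME paper; the function name, the proportionality ratio and the cap value are all illustrative assumptions, and it simply treats costs and benefits as numbers on a common scale.

```python
def cost_permissible(individual_cost, collective_benefit,
                     proportionality_ratio=10.0, absolute_cap=None):
    """Toy decision procedure contrasting the two threshold accounts.

    Relative account: a cost is justified whenever the benefit to others
    is proportionately larger (here, by an assumed ratio of 10:1).
    Absolute account: even a proportionate cost is ruled out above a
    fixed ceiling (e.g. death, or grave bodily harm).
    """
    proportionate = collective_benefit >= proportionality_ratio * individual_cost
    if absolute_cap is None:
        # Pure relative view: proportionality alone settles the matter.
        return proportionate
    # Absolute view: proportionality is necessary but not sufficient;
    # the individual cost must also fall below the cap.
    return proportionate and individual_cost <= absolute_cap
```

On the relative view, imposing a cost of 100 (say, death) to secure a benefit of 10,000 comes out permissible; adding an absolute cap of 10 rules it out, however large the benefit, which is the intuition the absolute threshold account is meant to capture.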

Thanks to Alberto Giubilini for helpful comments

Video Interview: Jesper Ryberg on Neurointerventions, Crime and Punishment

Should neurotechnologies that affect emotional regulation, empathy and moral judgment be used to prevent offenders from reoffending? Is it morally acceptable to offer more lenient sentences to offenders in return for participation in neuroscientific treatment programs? Or would this amount to coercion? Is it possible to administer neurointerventions as a type of punishment? Is it permissible for physicians to administer neurointerventions to offenders? Is there a risk that the dark history of compulsory brain interventions in offenders will repeat itself? In this interview with Dr Katrien Devolder (Oxford), Professor Jesper Ryberg (Roskilde) argues that there are no good in-principle objections to using neurointerventions to prevent crime, BUT (!) that given the way criminal justice systems currently function, we should not currently use these interventions…

Making Ourselves Better

Written by Stephen Rainey

Human beings are sometimes seen as uniquely capable of enacting life plans and controlling our environment. Take technology, for instance; with it we make the world around us yield to our desires in various ways. Communication technologies, and global transport, for example, have the effect of practically shrinking a vast world, making hitherto impossible coordination possible among a global population. This contributes to a view of human-as-maker, or ‘homo faber’. But taking such a view can risk minimising human interests that ought not to be ignored.

Homo faber is a future-oriented, adaptable, rational animal, whose efforts are aligned with her interests when she creates technology that enables a stable counteraction of natural circumstance. Whereas animals are typically seen to have well-adapted responses to their environment, honed through generations of adaptation, human beings appear to have instead a general and adaptable skill that can emancipate them from material, external circumstances. We are bad at running away from danger, for instance, but good at building barriers to obviate the need to run. The protections this general, adaptable skill offers are inherently future-facing: humans seem to seek not to react to, but to control the environment.

Continue reading

Abolish Medical Ethics

Written by Charles Foster

In a recent blog post on this site Dom Wilkinson, writing about the case of Vincent Lambert, said this:

‘If, as is claimed by Vincent’s wife, Vincent would not have wished to remain alive, then the wishes of his parents, of other doctors or of the Pope, are irrelevant. My views or your views on the matter, likewise, are of no consequence. Only Vincent’s wishes matter. And so life support must stop.’

The post was (as everything Dom writes is) completely coherent and beautifully expressed. I say nothing here about my agreement or otherwise with his view – which is comfortably in accord with the zeitgeist, at least in the academy. My purpose is only to point out that if he is right, there is no conceivable justification for a department of medical ethics. Dom is arguing himself out of a job. Continue reading

Withdrawing Life Support: Only One Person’s View Matters

Dominic Wilkinson, University of Oxford

Shortly before Frenchman Vincent Lambert’s life support was due to be removed, doctors at Sebastopol Hospital in Reims, France, were ordered to stop. An appeal court ruled that life support must continue.

Lambert was seriously injured in a motorcycle accident in 2008 and has been diagnosed as being in a persistent vegetative state. Since 2014, his case has been heard many times in French and European courts.

His wife, who is his legal guardian, wishes artificial nutrition and hydration to be stopped and Vincent to be allowed to die. His parents are opposed to this. On Monday, May 20, the parents succeeded in a last-minute legal appeal to stop Vincent’s doctors from withdrawing feeding, pending a review by a UN Committee on the Rights of Persons with Disabilities.

Lambert’s case is the latest example of disputed treatment for adult patients with profound brain injury. The case has obvious parallels with that of Terri Schiavo in the US, who died in 2005 following seven years of legal battles. And there have been other similar high-profile cases over more than 40 years, including Eluana Englaro (Italy, court cases 1999-2008), Tony Bland (UK 1993), Nancy Cruzan (US 1988-90) and Karen Ann Quinlan (US 1975-76). Continue reading

Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

Increasing use of brain data, either from research contexts, medical device use, or in the growing consumer brain-tech sector raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding to provide better protection of such data.

In medicine, the use of brain-reading devices is increasing, e.g. brain-computer interfaces that afford communication or control of neural or motor prostheses. But there is also a range of non-medical devices in development, for applications from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance to the field and have invested significant sums. Kernel, a three-year-old multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, they have been joined by Facebook, who want to develop a means of controlling devices directly with data derived from the brain (to be developed by their not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it is likely to be based on recording and stimulating the brain.

Continue reading

In Praise Of Dementia

By Charles Foster

Statistically there is a good chance that I will ultimately develop dementia. It is one of the most feared conditions, but bring it on, I say.

It will strip me of some of my precious memories and some of my cognitive function, but it will also strip me of many of the neuroses that make life wretched. It may (but see below) make me anxious because the world takes on an unaccustomed form, but surely there are worse anxieties that are dependent on full function – such as hypochondriacal worries, or the worry that comes from watching the gradual march of a terminal illness. On balance the trade seems a good one. Continue reading

Neurointerventions, Disrespectful Messages, and the Right to be Listened to

Written by Gabriel De Marco

Neurointerventions can be roughly described as treatments or procedures that act directly on the physical properties of the brain in order to affect the subject’s psychological characteristics. The ethics of using neurointerventions can be quite complicated, and much of the discussion has revolved around the use of neurointerventions to improve the moral character of the subjects. Within this debate, there is a sub-debate concerning the use of enhancement techniques on criminal offenders. For instance, some jurisdictions make use of chemical castration, intended to reduce the subjects’ level of testosterone in order to reduce the likelihood of further sexual offenses. One particularly thorny question regards the use of neurointerventions on offenders without their consent. Here, I focus on just one version of one objection to the use of non-consensual neurocorrectives (NNs).

According to one style of objection, NNs are always impermissible because they express a disrespectful message. To be clear, the style of objection I consider does not appeal to the potential consequences of expressing this message; rather, it relies on the claim that there is something intrinsic to the expression of such a message that gives us a reason (or reasons) for not performing an action that would express this message. For the use of non-consensual neurocorrectives, this reason (or set of reasons) is strong enough to make NNs impermissible. The particular version of this objection that I focus on claims that the disrespectful message is that the offender does not have a right to be listened to.

Continue reading

The Ethics of Gently Electrifying Prisoners’ Brains

By Hazem Zohny and Tom Douglas

Scientists who want to study the effects of passing electric currents through prisoners’ brains have a PR problem: it sounds shady. Even if that electric current is so small as to go largely unnoticed by its recipient – as in the case of transcranial direct current stimulation (tDCS) – for some, such experiments evoke historical abuses of neuroscience in criminal justice, not to mention bringing to mind some of the more haunting scenes in films like One Flew Over the Cuckoo’s Nest and A Clockwork Orange.

And so, last week the Spanish Interior Ministry put on hold an impending experiment in two Spanish prisons investigating the impact of brain stimulation on prisoners’ aggression. At the time of writing, it remains unclear what the ministry’s reasoning for the halt is, though the optics of the experiment might be part of the story.

Continue reading

Better Living Through Neurotechnology

Written by Stephen Rainey

If ‘neurotechnology’ isn’t a glamour area for researchers yet, it’s not far off. Technologies centred upon reading the brain are rapidly being developed. Among the claims made of such neurotechnologies are that some can provide special access to normally hidden representations of consciousness. Through recording, processing, and making operational brain signals we are promised greater understanding of our own brain processes. Since every conscious process is thought to be enacted, or subserved, or realised by a neural process, we get greater understanding of our consciousness.

Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our obscure cognitive processes, we can get the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.

This looks like some kind of technological mind-reading, and perhaps too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties? Continue reading
