Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices
Guest Post by Philipp Kellmeyer
Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist tells you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.
There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.
The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.
Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: in one setting, you get no feedback from the device on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.
This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and, in a sense, delegates some of the decision-making autonomy to the intelligent medical device inside your head.
In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and retain your capacity to modify your behaviour accordingly, for example by stepping down from a ladder or getting off a bike when you are “in the red.”
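To make the contrast between the two settings concrete, here is a minimal, hedged sketch in Python of how a device-side risk estimate might be mapped to the “traffic light” levels and to a stimulation decision. The thresholds, function names and output format are illustrative assumptions only, not details of any actual implanted device or of the trial described above.

```python
# A minimal, purely illustrative sketch: the thresholds, function names and
# output format are assumptions for illustration, not taken from any real
# implanted device or from the trial described above.

def traffic_light(seizure_risk: float) -> str:
    """Map an estimated seizure probability (0.0 to 1.0) to a feedback colour."""
    if seizure_risk < 0.3:      # assumed cut-off for "low" risk
        return "green"
    elif seizure_risk < 0.7:    # assumed cut-off for "medium" risk
        return "yellow"
    return "red"


def device_decision(seizure_risk: float, human_in_loop: bool) -> dict:
    """Return what the device does and what the patient is told in each setting."""
    stimulate = seizure_risk >= 0.7        # hypothetical stimulation trigger
    if human_in_loop:
        # Second setting: the patient sees the traffic light and, if the risk
        # is very high, also hears an alarm tone.
        return {"stimulate": stimulate,
                "feedback": traffic_light(seizure_risk),
                "alarm": seizure_risk >= 0.9}
    # First setting: the device decides silently and the patient gets no feedback.
    return {"stimulate": stimulate, "feedback": None, "alarm": False}


# Example: the same risk estimate in the two settings.
print(device_decision(0.75, human_in_loop=False))  # {'stimulate': True, 'feedback': None, 'alarm': False}
print(device_decision(0.75, human_in_loop=True))   # {'stimulate': True, 'feedback': 'red', 'alarm': False}
```

In this sketch the device’s stimulation behaviour is identical in both settings; what changes is only whether the patient sees the risk estimate and can adjust their behaviour accordingly.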
Why is chemical castration being used on offenders in some countries?

Following a horrific act of sexual violence against a 14-year-old girl, the president of Indonesia, Joko Widodo, recently signed a decree into law which, among other things, authorised the death penalty for convicted child sex offenders as well as their chemical castration.
The main justification cited by Widodo was that castration would act as a deterrent. But how do such interventions fit into the criminal justice system? Are they likely to be successful? Continue reading
Guest Post: Abortion, punishment and moral consistency
Written by: Rajiv Shah, PhD Candidate, Faculty of Law, University of Cambridge
Donald Trump suggested that women who have abortions should face punishment. For that he was criticised by both the pro-choice side and the pro-life side. The latter claimed that their view is that women should not face punishment for having abortions, but that only providers should. This raises the interesting question of whether the pro-life position is coherent. It would seem that it is not: if the foetus has the right to life, then having an abortion is like murder, and so those who abort should be treated as such. This post argues that the pro-lifer can coherently reject this implication whilst still holding that the foetus has the right to life. Since it considers the responses a pro-lifer could make, this post will assume for the sake of argument that the foetus does have a right to life. Continue reading
Video Series: Walter Sinnott-Armstrong on Moral Artificial Intelligence
Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take over the world? Professor Sinnott-Armstrong explains…
Reporting on a Recent Event: Conscience And Conscientious Objection In Healthcare Conference
The Uehiro Centre for Practical Ethics (University of Oxford) and the Centre for Applied Philosophy and Public Ethics (Charles Sturt University) hosted a conference on conscientious objection in medicine and the role of conscience in healthcare practitioners’ decision making: the Conscience And Conscientious Objection In Healthcare Conference. It was held at the Oxford Martin School on the 23rd and 24th of November and was organised by Julian Savulescu (University of Oxford), Alberto Giubilini (Charles Sturt University) and Steve Clarke (Charles Sturt University).
For the full program, please follow this link.
The conference was aimed at analysing, from a philosophical, ethical and legal perspective, the meaning and the role of “conscience” in the healthcare profession. Conscientious objection by health professionals has become one of the most pressing problems in healthcare ethics. Health professionals are often required to perform activities that conflict with their own moral or religious beliefs (for example, abortion). Their refusal can make it difficult for patients to access services they have a right to and, more generally, can create conflicts in the doctor-patient relationship. The widening of the medical options available today or in the near future is likely to sharpen these conflicts. Experts in bioethics, philosophy, law and medicine explored possible solutions.
The conference was supported by the Uehiro Centre for Practical Ethics and an Australian Research Council Discovery Grant (DP 150102068). We are grateful to the Oxford Martin School for providing the venue for the conference.
On the Oxford Uehiro Centre for Practical Ethics website you will find both video and audio files of various commentaries and talks from the conference.
If abolishing China’s one child policy led to more children, would it be so bad?
Written by Simon Beard
This is an unedited version of a paper which was originally published on The Conversation:
please see here to read the original article
After 35 years, the Chinese government recently announced that it would replace its controversial one-child policy with one that will allow all Chinese citizens to have up to two children. Whilst this increased respect for personal autonomy is undoubtedly good, it is not clear whether the lifting of the ban will actually lead to a marked increase in China’s birth rate: while the birth rate has fallen dramatically since the policy was introduced, so too have the birth rates of neighbouring countries without such policies.
Whether or not Chinese parents decide to use their new-found rights to procreate, the move does raise questions. Would it be good or bad if more children were now born in China and the population grew? And what value might there be in any changes to China’s population size and structure? Continue reading
Podcast: Justifications for Non-Consensual Medical Intervention: From Infectious Disease Control to Criminal Rehabilitation
Dr Jonathan Pugh’s St Cross Special Ethics Seminar on 12 November 2015 is now available at http://media.philosophy.ox.ac.uk/uehiro/MT15_STX_Pugh.mp3
Speaker: Dr Jonathan Pugh
Although a central tenet of medical ethics holds that it is permissible to perform a medical intervention on a competent individual only if that individual has given informed consent to that intervention, there are some circumstances in which it seems that this moral requirement may be trumped. For instance, in some circumstances, it might be claimed that it is morally permissible to carry out certain sorts of non-consensual interventions on competent individuals for the purpose of infectious disease control (IDC). In this paper, I shall explain how one might defend this practice, and consider the extent to which similar considerations might be invoked in favour of carrying out non-consensual medical interventions for the purposes of facilitating rehabilitation amongst criminal offenders. Having considered examples of non-consensual interventions in IDC that seem to be morally permissible, I shall describe two different moral frameworks that a defender of this practice might invoke in order to justify such interventions. I shall then identify five desiderata that can be used to guide the assessments of the moral permissibility of non-consensual IDC interventions on either kind of fundamental justification. Following this analysis, I shall consider how the justification of non-consensual interventions for the purpose of IDC compares to the justification of non-consensual interventions for the purpose of facilitating criminal rehabilitation, according to these five desiderata. I shall argue that the analysis I provide suggests that a plausible case can be made in favour of carrying out certain sorts of non-consensual interventions for the purpose of facilitating rehabilitation amongst criminal offenders.
Guest Post: “Gambling should be fun, not a problem”: why strategies of self-control may be paradoxical.
Written by Melanie Trouessin
University of Lyon
Faced with issues related to gambling and games of chance, the Responsible Gambling program aims to promote moderate behaviour on the part of the player. It encourages risk avoidance and offers self-limiting strategies, both temporal and financial, in order to counteract the player’s tendency to lose self-control. While this strategy rightly promotes individual autonomy, compared with other, more paternalistic measures, it also implies a particular position on the philosophical question of what is normal and what is pathological: namely, that they lie on a continuum. If we can subscribe to certain measures of self-constraint in order to return to responsible, that is, moderate and controlled, gambling, this implies that there is no great gulf or qualitative difference between normal gaming and pathological gambling. Continue reading
Why It’s OK to Block Ads
Over the past couple of months, the practice of ad blocking has received heightened ethical scrutiny. (1,2,3,4)
If you’re unfamiliar with the term, “ad blocking” refers to software (usually web browser plug-ins, but increasingly mobile apps) that stops most ads from appearing when you use websites or apps that would otherwise show them.
Arguments against ad blocking tend to focus on the potential economic harms. Because advertising is the dominant business model on the internet, if everyone used ad-blocking software then wouldn’t it all collapse? If you don’t see (or, in some cases, click on) ads, aren’t you getting the services you currently think of as “free” actually for free? By using an ad blocker, aren’t you violating an agreement you have with online service providers to let them show you ads in exchange for their services? Isn’t ad blocking, as the industry magazine AdAge has called it, “robbery, plain and simple”? Continue reading
Guest Post: A feminist defence of the nanny state
Written by Anke Snoek
Macquarie University
In Australia, Senator David Leyonhjelm (http://www.theaustralian.com.au/national-affairs/david-leyonhjelm-declares-war-on-nanny-state/story-fn59niix-1227415288323) has won support for a broad-ranging parliamentary inquiry into what he calls the ‘nanny state’. A committee will test the claims of public health experts about bicycle helmets, violent video games, and the sale and use of alcohol, tobacco and pornography. “If we don’t wind back this nanny state, the next thing you know they’ll be introducing rules saying that you’ll need to have a fresh hanky and clean underpants.” Continue reading