
Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Guest Post by Philipp Kellmeyer

Imagine you have epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffer several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Your neurologist therefore tells you that epilepsy surgery, removing the brain tissue identified as the source of the seizures by continuous monitoring with intracranial electroencephalography (iEEG), is not viable in your case because it would permanently damage your language ability.

There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyzes the information from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have reduced the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.
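To make this decision loop concrete, here is a minimal sketch, in Python, of what such a fully autonomous closed-loop controller might look like. It is purely illustrative: estimate_seizure_risk stands in for the device's (unspecified) machine-learning classifier, and the threshold is an arbitrary value, not a clinical one.

    import random

    STIMULATION_THRESHOLD = 0.8  # arbitrary illustration, not a clinical value

    def estimate_seizure_risk(ieeg_window):
        """Stand-in for the device's machine-learning classifier: a real
        system would compute the probability of an impending seizure from
        features of the iEEG window. Here it returns a random number so
        that the sketch runs."""
        return random.random()

    def autonomous_step(ieeg_window):
        """Fully autonomous mode: the device alone decides whether to
        stimulate; the patient receives no feedback."""
        risk = estimate_seizure_risk(ieeg_window)
        stimulate = risk >= STIMULATION_THRESHOLD
        return risk, stimulate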

Now, imagine you decide to participate in the trial and it turns out that the device comes with two settings: In one setting, the device gives you no feedback on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behavior according to your seizure risk and, in a sense, delegates some decision-making autonomy to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current seizure risk: green indicates a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and retain your capacity to modify your behavior accordingly, for example to step off a ladder or stop riding a bike when you are “in the red” (see the sketch below).
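Sketched in the same illustrative spirit, the second setting adds a feedback layer on top of the risk estimate. The band boundaries below are arbitrary stand-ins; the post does not specify how the trial device maps risk to the three colors.

    def traffic_light(risk):
        """Map an estimated seizure probability (0 to 1) to the signal
        described above; the cut-offs are arbitrary illustrations."""
        if risk < 0.3:
            return "green"   # low seizure probability
        if risk < 0.7:
            return "yellow"  # medium seizure probability
        return "red"         # high seizure probability

    def feedback_step(risk, seizure_evolving):
        """Patient-in-the-loop mode: report the current risk band and
        additionally sound an alarm tone if a seizure is evolving."""
        if seizure_evolving:
            print("ALARM: impending seizure")
        return traffic_light(risk)

    # A risk of 0.75 lights up "red", so the patient can, for example,
    # step off a ladder before any stimulation is needed.
    print(feedback_step(0.75, seizure_evolving=False))  # -> red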

I set up this clinical case to illustrate the concept of shared autonomy between humans and medical devices, a form of interaction that the new breed of intelligent systems, often based on machine-learning algorithms, brings to the diagnosis and treatment of medical conditions.

So what are the ethical challenges that arise from such novel forms of human-machine interaction? In a recent paper in the Cambridge Quarterly of Healthcare Ethics we have addressed these and other issues [1]. Here I want to briefly highlight the accountability gap that may result from the delegation of decision-making capacity to intelligent systems in medicine. In case you wondered: medical devices with the capacity to (semi-)autonomously administer an intervention, such as electric shocks for dangerous heart rhythms (arrhythmias) or for epileptic seizure control, intelligent insulin pumps for diabetes control, or brain-computer interfaces (BCIs) for severely paralyzed patients, are being intensively researched and developed.

As our initial case shows, keeping a patient in the loop may help her retain her decision-making capacity and thus assert her autonomy. Nevertheless, for some individuals, the convenience of delegating these decisions to an intelligent device, and perhaps thus not being constantly reminded of their medical condition, may well be more important than their feeling of agency and autonomy. For a severely paralyzed patient who is unable to speak or write, an intelligent BCI that restores her ability to communicate via computerized spelling will also likely have a profound and positive impact on her psychological well-being and her capacity to assert her autonomy. Importantly, we do not yet have sufficient empirical data from structured interviews or focus group discussions to give a representative account of the spectrum of attitudes and feelings of patients wearing such devices.

Now, concerning accountability for one’s actions: to the degree that patients are willing to transfer decision-making capacity to an intelligent system, their accountability may diminish accordingly. But to whom (or what) is this accountability transferred? To the intelligent device, its engineer, the manufacturer, the regulatory body? Or is accountability in such scenarios shared so diffusely that it becomes difficult to hold any one agent responsible in cases of catastrophic system failure? We discuss these and other questions in the paper and go on to make some tentative suggestions for how this accountability gap should be addressed in terms of political regulation and oversight of emerging neurotechnologies.

Furthermore, neurotechnological devices that interact with prostheses such as exoskeletons, artificial limbs, or robotic arms may also produce interesting neurophenomenological effects, concerning embodied perception, the user’s body schemata, and perhaps even overall personal identity, that remain to be explored in systematic studies.

Reference

  1. Kellmeyer, P. et al. The Effects of Closed-Loop Medical Devices on the Autonomy and Accountability of Persons and Systems. Camb. Q. Healthc. Ethics, 25, 623–633 (2016).

Dr. med. Philipp Kellmeyer, M.D., M.Phil. (Cantab), is a board-certified neurologist currently working as a postdoctoral researcher at the University of Freiburg, Germany, developing a wireless brain-computer interface for severely paralyzed patients. In neuroethics, he works on ethical issues of emerging neurotechnologies. He is a member of the Rapid Action Task Force of the International Neuroethics Society and of the Advisory Committee of the Neuroethics Network.
