Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices
Guest Post by Philipp Kellmeyer
Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.
There is however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.
The electrode array connects wirelessly to a small computer that analyses the signals from the electrodes to assess your seizure risk at any given moment and to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.
Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: in one setting, the device gives you no feedback on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.
This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – delegates some decision-making autonomy to the intelligent medical device inside your head.
In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and retain your capacity to modify your behaviour accordingly, for example by stepping off a ladder or stopping riding a bike when you are “in the red.”
Written by Richard Ngo, an undergraduate student in Computer Science and Philosophy at the University of Oxford.
Neil Levy’s Leverhulme Lectures start from the admirable position of integrating psychological results and philosophical arguments, with the goal of answering two questions:
(1) are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?
(2) when those implicit attitudes cause actions which seem appropriately to be characterised as racist (sexist, homophobic…), are we morally responsible for these actions?
Loebel Lectures and Workshop, Michaelmas Term 2015, Lecture 1 of 3: Neurobiological materialism collides with the experience of being human
The 2015 Loebel Lectures in Psychiatry and Philosophy were delivered by Professor Steven E. Hyman, director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard as well as Harvard University Distinguished Service Professor of Stem Cell and Regenerative Biology. Both the lecture series and the one-day workshop proved popular and were well-attended.
Written by Anke Snoek
In the UK, around 500 soldiers each year are dismissed because they fail drug tests. The substances they use are mainly recreational drugs such as cannabis, ecstasy (XTC), and cocaine. Some call this a waste of resources, since new soldiers have to be recruited and trained, and call for a revision of the army's zero-tolerance policy on substance use.
This policy stems from the Vietnam War. During the First and Second World Wars, it was almost considered cruel to deny soldiers alcohol, which was seen as a necessary coping mechanism for facing the horrors of the battlefield. Public opinion on substance use by soldiers changed radically during the Vietnam War. Influenced by the anti-war movement, newspapers of the time were dominated by stories of how stoned soldiers fired on their own people, and of how the Vietnamese sold opioids to soldiers to make them less capable of doing their jobs. Although Robins (1974) provided evidence that soldiers used the opioids in a relatively safe way, and that the drugs enhanced rather than impaired their capacities, public opinion on unregulated drug use in the army was irrevocably changed.
Just out today is a podcast interview for Smart Drug Smarts between host Jesse Lawler and interviewee Brian D. Earp on “The Medicalization of Love” (title taken from a recent paper with Anders Sandberg and Julian Savulescu, available from the Cambridge Quarterly of Healthcare Ethics, here).
Below is the abstract and link to the interview:
What is love? A loaded question with the potential to lead us down multiple rabbit holes (and, if you grew up in the 90s, evoke memories of the Haddaway song). In episode #95, Jesse welcomes Brian D. Earp on board for a thought-provoking conversation about the possibilities and ethics of making biochemical tweaks to this most celebrated of human emotions. With a topic like “manipulating love,” the discussion moves between the realms of neuroscience, psychology and transhumanist philosophy.
Earp, B. D., Sandberg, A., & Savulescu, J. (2015). The medicalization of love. Cambridge Quarterly of Healthcare Ethics, Vol. 24, No. 3, 323–336.
Written by Anke Snoek
When neuroscience entered the debate on addiction and self-control, many hoped to use its insights to cause a paradigm shift in how we judge people struggling with addiction. On this view, people with addictions are not morally despicable or weak-willed; they end up addicted because drugs influence the brain in a certain way. Anyone with a brain can become addicted, regardless of their morals. The hope was that this realisation would reduce the stigma surrounding addiction. Unfortunately, the hoped-for paradigm shift didn't really happen, because most people interpreted the message as: people with addictions have deviant brains – a view that provides a reason to stigmatise them in a different way.
Written by Benjamin Pojer and Daniel D’Hotman
Faculty of Medicine, Nursing and Health Science, Monash University
Oxford Uehiro Centre for Practical Ethics, University of Oxford
A recent review published in the European Journal of Neuropsychopharmacology (1) on the efficacy and safety of modafinil in a population of healthy people has found that the drug “appears to consistently engender enhancement of attention, executive functions, and learning” without “preponderances for side effects or mood changes”. Modafinil, a medication prescribed in the treatment of narcolepsy and other sleep disorders, has gained popularity in recent years as a means of increasing alertness and focus. Informal surveys suggest that up to one in five undergraduate university students in the UK admit to using the drug as a study aid (2). Previously, the unknown safety profile of modafinil has been an obstacle to its more widespread use as a cognitive enhancer. Admittedly, the long-term consequences of modafinil use remain unclear; however, given its growing popularity, this gap in the literature should not preclude a discussion of the ethics of the drug's use for cognitive enhancement.
Written by Dr John Danaher.
Dr Danaher is a Lecturer in Law at NUI Galway. His research interests include neuroscience and law, human enhancement, and the ethics of artificial intelligence.
A version of this post was previously published here.
Somebody recently sent me a link to an article by Jed S. Rakoff entitled “Why Innocent People Plead Guilty”. Rakoff's article is an indictment of the plea-bargaining system currently in operation in the US. Unsurprisingly given its title, it argues that the current system of plea bargaining encourages innocent people to plead guilty, and that something must be done to prevent this from happening.
I recently published a paper addressing the same problem. The gist of its argument is that it may be possible to use a certain type of brain-based lie detection — the P300 Concealed Information Test (P300 CIT) — to rectify some of the problems inherent in systems of plea bargaining. The word “possible” is important here. I don’t believe that the technology is currently ready to be used in this way – I think further field testing needs to take place – but I don’t think the technology is as far away as some people might believe either.
What I find interesting is that, despite this, there is considerable resistance to the use of the P300 CIT in academic and legal circles. Some of that resistance stems from unwarranted fealty to the status quo, and some stems from legitimate concerns about potential abuses of the technology (miscarriages of justice etc.). I try to overcome some of this resistance by suggesting that the P300 CIT might be better than other proposed methods for resolving existing abuses of power within the system. Hence my focus on plea-bargaining and the innocence problem.