Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices
Guest Post by Philipp Kellmeyer
Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.
There is, however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.
The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment in order to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have achieved a reduction in the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.
Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: In one setting, you get no feedback from the device on your current seizure risk, and the decision of when to administer an electric shock to prevent an impending seizure is taken solely by the device.
This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – delegates some decision-making autonomy to the intelligent medical device inside your head.
In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and you retain your capacity to modify your behaviour accordingly, for example to step off a ladder or stop riding a bike when you are “in the red.”
* Note: this article was first published online at Quillette magazine.
Alice Dreger, the historian of science, sex researcher, activist, and author of a much-discussed book of last year, has recently called attention to the loss of ambivalence as an acceptable attitude in contemporary politics and beyond. “Once upon a time,” she writes, “we were allowed to feel ambivalent about people. We were allowed to say, ‘I like what they did here, but that bit over there doesn’t thrill me so much.’ Those days are gone. Today the rule is that if someone—a scientist, a writer, a broadcaster, a politician—does one thing we don’t like, they’re dead to us.”
I’m going to suggest that this development leads to another kind of loss: the loss of our ability to work together, or better, learn from each other, despite intense disagreement over certain issues. Whether it’s because our opponent hails from a different political party, or voted differently on a key referendum, or thinks about economics or gun control or immigration or social values—or whatever—in a way we struggle to comprehend, our collective habit of shouting at each other with fingers stuffed in our ears has reached a breaking point.
It’s time to bring ambivalence back.
Quick announcement: A podcast interview between Brian D. Earp (a.k.a. myself) and J. J. Chipchase for Naturalistic Philosophy has just been released: we talk about the relationship between science and morality, the is/ought distinction, free will, the replication crisis in science and medicine, problems with peer review, bullshit in academia, and Sam Harris’s The Moral Landscape, among other things. Check it out here:
Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.
Scientists are people too
In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.
At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.
And it is with that in mind that I bring up the subject of bullshit.
Should vegans eat meat to be ethically consistent? And other moral puzzles from the latest issue of the Journal of Practical Ethics
By Brian D. Earp (@briandavidearp)
The latest issue of The Journal of Practical Ethics has just been published online, and it includes several fascinating essays (see the abstracts below). In this blog post, I’d like to draw attention to one of them in particular, because it seemed to me to be especially creative and because it was written by an undergraduate student! The essay – “How Should Vegans Live?” – is by Oxford student Xavier Cohen. I had the pleasure of meeting Xavier several months ago when he presented an earlier draft of his essay at a lively competition in Oxford: he and several others were finalists for the Oxford Uehiro Prize in Practical Ethics, for which I was honored to serve as one of the judges.
In a nutshell, Xavier argues that ethical vegans – that is, vegans who refrain from eating animal products specifically because they wish to reduce harm to animals – may actually be undermining their own aims. This is because, he argues, many vegans are so strict about the lifestyle they adopt (and often advocate) that they end up alienating people who might otherwise be willing to make less-drastic changes to their behavior that would promote animal welfare overall. Moreover, by focusing too narrowly on the issue of directly refraining from consuming animal products, vegans may fail to realize how other actions they take may be indirectly harming animals, perhaps even to a greater degree.
Please note: this post was first published at the Journal of Medical Ethics Blog.
The Journal of Medical Ethics is pleased to announce the addition of a new article type – Extended Essays – that will allow authors up to 7,000 words to provide an in-depth analysis of their chosen topic.
In an interview, Associate Editor Tom Douglas said the new category was created “in recognition of the fact that some topics warrant sustained and nuanced analysis of a sort that can’t be laid out in less than 3,500 words.”
He went on to say that at the Journal of Medical Ethics “we don’t want to miss out on the best papers in medical ethics, many of which currently get sent elsewhere simply because of our strict word limits.”
1 in 4 women: How the latest sexual assault statistics were turned into click bait by the New York Times
* Note: this article was originally published at the Huffington Post.
As someone who has worked on college campuses to educate men and women about sexual assault and consent, I have seen the barriers to raising awareness and changing attitudes. Chief among them, in my experience, is a sense of skepticism–especially among college-aged men–that sexual assault is even that dire a problem to begin with.
“1 in 4? 1 in 5? Come on, it can’t be that high. That’s just feminist propaganda!”
A lot of the statistics that get thrown around in this area (they seem to think) have more to do with politics and ideology than with careful, dispassionate science. So they often wave away the issue of sexual assault–and won’t engage on issues like affirmative consent.
In my view, these are the men we really need to reach.
A new statistic
So enter the headline from last week’s New York Times coverage of the latest college campus sexual assault survey:
But that’s not what the survey showed. And you don’t have to read all 288 pages of the published report to figure this out (although I did that today just to be sure). The executive summary is all you need.
Just out today is a podcast interview for Smart Drug Smarts between host Jesse Lawler and interviewee Brian D. Earp on “The Medicalization of Love” (title taken from a recent paper with Anders Sandberg and Julian Savulescu, available from the Cambridge Quarterly of Healthcare Ethics, here).
Below is the abstract and link to the interview:
What is love? A loaded question with the potential to lead us down multiple rabbit holes (and, if you grew up in the 90s, evoke memories of the Haddaway song). In episode #95, Jesse welcomes Brian D. Earp on board for a thought-provoking conversation about the possibilities and ethics of making biochemical tweaks to this most celebrated of human emotions. With a topic like “manipulating love,” the discussion moves between the realms of neuroscience, psychology and transhumanist philosophy.
Earp, B. D., Sandberg, A., & Savulescu, J. (2015). The medicalization of love. Cambridge Quarterly of Healthcare Ethics, Vol. 24, No. 3, 323–336.
* Note: this article was originally published at the Huffington Post.
In the New York Times yesterday, psychologist Lisa Feldman Barrett argues that “Psychology is Not in Crisis.” She is responding to the results of a large-scale initiative called the Reproducibility Project, published in Science magazine, which appeared to show that the findings from over 60 percent of a sample of 100 psychology studies did not hold up when independent labs attempted to replicate them.
She argues that “the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.” To illustrate this point, she gives us the following scenario:
Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.
Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.
She’s making a pretty big assumption here, which is that the studies we’re interested in are “well-designed” and “carefully run.” But a major reason for the so-called “crisis” in psychology — and I’ll come back to the question of just what kind of crisis we’re really talking about (see my title) — is the fact that a very large number of not-well-designed, and not-carefully-run studies have been making it through peer review for decades.
Small sample sizes, sketchy statistical procedures, incomplete reporting of experiments, and so on, have been pretty convincingly shown to be widespread in the field of psychology (and in other fields as well), leading to the publication of a resource-wastingly large percentage of “false positives” (read: statistical noise that happens to look like a real result) in the literature.
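One quick way to see how statistical noise ends up looking like a real result is to simulate a large batch of studies in which there is no true effect at all. The sketch below (my own illustration, not from the original post) runs thousands of null "studies" comparing two groups drawn from the same distribution; with the conventional significance cutoff, roughly 5% of them come out "significant" purely by chance:

```python
import random
import statistics

def null_study(n, rng):
    """Run one 'study' comparing two groups drawn from the SAME distribution
    (i.e., no real effect exists) and return a crude two-sample t statistic."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    pooled_se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_se

rng = random.Random(0)
n_studies = 2000
# |t| > ~2 corresponds roughly to the conventional p < .05 threshold.
false_positives = sum(abs(null_study(20, rng)) > 2.0 for _ in range(n_studies))
print(false_positives / n_studies)  # roughly 0.05
```

And that 5% is the best case: it assumes each study is run and reported honestly, once. Selective reporting, flexible stopping rules, and trying many analyses until one "works" can push the share of published false positives far higher.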
In a recent issue of the Journal of Medical Ethics, Thomas Ploug and Søren Holm point out that scientific communities can sometimes get pretty polarized. This happens when two different groups of researchers consistently argue for (more or less) opposite positions on some hot-button empirical issue.
The examples they give are: debates over the merits of breast cancer screening and the advisability of prescribing statins to people at low risk of heart disease. Other examples come easily to mind. The one that pops into my head is the debate over the health benefits vs. risks of male circumcision—which I’ve covered in some detail here, here, here, here, and here.
When I first started writing about this issue, I was pretty “polarized” myself. But I’ve tried to step back over the years to look for middle ground. Once you realize that your arguments are getting too one-sided, it’s hard to go on producing them without making some adjustments. At least, it is without losing credibility — and no small measure of self-respect.
This point will become important later on.
Nota bene! According to Ploug and Holm, disagreement is not the same as polarization. Instead, polarization only happens when researchers:
(1) Begin to self-identify as proponents of a particular position that needs to be strongly defended beyond what is supported by the data, and
(2) Begin to discount arguments and data that would normally be taken as important in a scientific debate.
But wait a minute. Isn’t there something peculiar about point number (1)?
On the one hand, it’s framed in terms of self-identification, so: “I see myself as a proponent of a particular position that needs to be strongly defended.” Ok, that much makes sense. But then it makes it sound like this position-defending has to go “beyond what is supported by the data.”
But who would self-identify as someone who makes inadequately supported arguments?
We might chalk this up to ambiguous phrasing. Maybe the authors mean that (in order for polarization to be diagnosed) researchers have to self-identify as “proponents of a particular position,” while the part about “beyond the data” is what an objective third-party would say about the researchers (even if that’s not what they would say about themselves). It’s hard to know for sure.
But the issue of self-identification is going to come up again in a minute, because I think it poses a big problem for Ploug and Holm’s ultimate proposal for how to combat polarization. To see why, though, I have to say a little bit more about what their overall suggestion is in the first place.