Functional neo-Aristotelianism as a way to preserve moral agency: A response to Dr William Casebeer’s lecture: The Neuroscience of Moral Agency
Written by Dr Anibal Monasterio Astobiza
Audio File of Dr Casebeer’s talk is available here: http://media.philosophy.ox.ac.uk/uehiro/HT17_Casebeer.mp3
Dr. William Casebeer has an unusual, but nonetheless very interesting, professional career. He retired from active duty as a US Air Force Lieutenant Colonel and intelligence analyst. He obtained his PhD in Cognitive Science and Philosophy from the University of California, San Diego, under the guidance and inspiration of Patricia and Paul Churchland, served as a Program Manager at the Defense Advanced Research Projects Agency from 2010-14 in the Defense Sciences Office, and helped to establish DARPA's neuroethics program. Dr. Casebeer is now a Research Area Manager in Human Systems and Autonomy for Lockheed Martin's Advanced Technology Laboratories. As I said, not the conventional path for a well-known researcher with very prominent contributions in neuroethics and moral evolution. His book Natural Ethical Facts: Evolution, Connectionism, and Moral Cognition (MIT Press) presented a functional and neo-Aristotelian account of morality with a clever argument attempting to dissolve G. E. Moore's naturalistic fallacy: according to Casebeer, it is possible to reduce what is good, or in other words morality, to natural facts.
Event: St Cross Special Ethics Seminar: The role of therapeutic optimism in recruitment to a clinical trial: an empirical study, presented by Dr Nina Hallowell
On Thursday 12 May 2016, Dr Nina Hallowell delivered the first St Cross Special Ethics Seminar of Trinity Term. The talk is available to listen to here http://media.philosophy.ox.ac.uk/uehiro/TT16_STX_Hallowell.mp3
Title: The role of therapeutic optimism in recruitment to a clinical trial: an empirical study
Written by Richard Ngo, an undergraduate student in Computer Science and Philosophy at the University of Oxford.
Neil Levy’s Leverhulme Lectures start from the admirable position of integrating psychological results and philosophical arguments, with the goal of answering two questions:
(1) are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?
(2) when those implicit attitudes cause actions which seem appropriately to be characterised as racist (sexist, homophobic…), are we morally responsible for these actions?
Author: Neil Levy, Leverhulme Visiting Professor
Podcasts of Prof Levy’s Leverhulme Lectures can be found here:
Fergus Peace’s responses to my lectures are interesting and challenging. As he notes, in my lectures I focused on two questions:
(1) are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?
(2) When those attitudes cause actions which seem appropriately to be characterized as racist (sexist, homophobic…), are we morally responsible for these actions (more precisely, for the fact that they can be classified in these morally laden terms)?
He suggests that these questions simply are not important ones to ask. Getting clear on how we ought to respond to implicit biases (what steps we ought to take to mitigate their effects or to eliminate them) matters, but asking whether a certain label attaches to us does not. Nor does it matter whether we are morally responsible for the actions these attitudes cause.
The first challenge seems to me to be a good one. I will discuss it after I have discussed the challenge concerning our moral responsibility, which seems to me very much weaker.
Author: Fergus Peace, BPhil student, University of Oxford
Podcasts of Prof. Levy’s Leverhulme lectures are available here:
It’s only a little more than forty years ago that George Wallace won the contest for Governor of Alabama by running ads with slogans like “Wake up Alabama! Blacks vow to take over Alabama” and “Do you want the black bloc electing your governor?” That year, 1970, 50% of people surveyed in the American South said they would never – under any circumstances – vote for a black President. By 2012, that number was down to 8%, and it’s hard to deny that open, avowed racism has been in steep decline for most of the last forty years. But even as people’s overt commitment to racism declines, experiments still show that black candidates are less likely to be given job interviews than equally qualified white candidates; African-Americans are still disproportionately likely to be imprisoned, or shot by police.
So what’s going on? That is the motivating puzzle of Professor Neil Levy’s Leverhulme Lectures, and his answer centres on an increasingly well-known but still very disturbing psychological phenomenon: implicit bias. There are a range of tests which have uncovered evidence of implicit negative attitudes held – by a majority of white Americans, but a sizeable number of black Americans too – against black people. Harvard University’s ‘Project Implicit’ has a series of Implicit Association Tests (IATs); Keith Payne, among others, has developed tests of what he calls the Affect Misattribution Procedure (AMP). IATs ask us to sort faces and words according to their race and ‘valence’, and we find that task much easier when we have to associate black faces with negative words than we do otherwise. Tests of the AMP ask subjects to rate the pleasantness of an image which is entirely meaningless to them – a Chinese character, for people who don’t speak Chinese – and find that they rate it less pleasant if they’re shown an image of a black face immediately beforehand.
There’s no doubt these results are unsettling. (If you want to do an IAT online, as you should, you have to agree to receive results you might disagree with or be uncomfortable with before you proceed.) And they’re not just subconscious attitudes which are uncomfortable but insignificant; implicit bias as measured by these various tests is correlated with being less likely to vote for Barack Obama, and more likely to blame the black community for violence in protests against police brutality. Tests in virtual shooting ranges also reveal that it correlates with being more likely to shoot unarmed black men when given the task of shooting only those carrying weapons. Implicit biases certainly seem to cause, at least in part, racist actions and patterns of behaviour, like being quicker to shoot at unarmed black people and less likely to invite them for job interviews.
Professor Levy’s lectures grappled with two questions about these attitudes: first, do they make you a racist; and second, are you morally responsible for actions caused by your implicit biases? If you, like me, abhor racism and make that abhorrence at least some part of your political and social identity, but nonetheless come away with a “moderate automatic preference for European … compared to African” on the race IAT, then are you – protestations to the contrary – a racist? His answer to this question in the first lecture, based on the current state of conceptual investigation of what racism is and empirical evidence about the character of implicit biases, was a qualified no: they don’t clearly count as beliefs, or even as feelings, in a way that could let us confidently call people racist just because they possess them.
The second question is similarly complex. When interviewers prefer white applicants over equally qualified black ones, due to their implicit attitudes, are they responsible for the racist character of that action? Levy focused largely on the ‘control theory’ of moral responsibility, which says that you’re responsible for an action only if you exercise sufficient control over it. Levy’s answer to this question is a pretty clear no: implicit attitudes don’t have the right sort of attributes (in particular, reliable responsiveness to reasons and evidence) to count as giving you control over the actions they cause.
I find it very hard to disagree with the core of Professor Levy’s arguments on his two questions. The points I want to make in response come from a different direction, because after listening to the two lectures I’m not convinced that these are the important questions to be asking about implicit bias.
The Uehiro Centre for Practical Ethics (University of Oxford) and the Centre for Applied Philosophy and Public Ethics (Charles Sturt University) hosted a conference on conscientious objection in medicine and the role of conscience in healthcare practitioners’ decision making: the Conscience and Conscientious Objection in Healthcare Conference. It was held at the Oxford Martin School on the 23rd and 24th of November, and organised by Julian Savulescu (University of Oxford), Alberto Giubilini (Charles Sturt University) and Steve Clarke (Charles Sturt University).
For the full program please follow this link.
The conference was aimed at analysing, from a philosophical, ethical and legal perspective, the meaning and the role of “conscience” in the healthcare profession. Conscientious objection by health professionals has become one of the most pressing problems in healthcare ethics. Health professionals are often required to perform activities that conflict with their own moral or religious beliefs (for example, abortion). Their refusal can make it difficult for patients to access services they have a right to and, more generally, can create conflicts in the doctor-patient relationship. The widening of the medical options available today or in the near future is likely to sharpen these conflicts. Experts in bioethics, philosophy, law and medicine explored possible solutions.
The conference was supported by the Uehiro Centre for Practical Ethics and an Australian Research Council Discovery Grant (DP 150102068). We are grateful to the Oxford Martin School for providing the venue for the conference.
On the Oxford Uehiro Centre for Practical Ethics website you will find both video and audio files of various commentaries and talks from the conference.
Podcast: Justifications for Non-Consensual Medical Intervention: From Infectious Disease Control to Criminal Rehabilitation
Dr Jonathan Pugh’s St Cross Special Ethics Seminar on 12 November 2015 is now available at http://media.philosophy.ox.ac.uk/uehiro/MT15_STX_Pugh.mp3
Speaker: Dr Jonathan Pugh
Although a central tenet of medical ethics holds that it is permissible to perform a medical intervention on a competent individual only if that individual has given informed consent to that intervention, there are some circumstances in which it seems that this moral requirement may be trumped. For instance, in some circumstances, it might be claimed that it is morally permissible to carry out certain sorts of non-consensual interventions on competent individuals for the purpose of infectious disease control (IDC). In this paper, I shall explain how one might defend this practice, and consider the extent to which similar considerations might be invoked in favour of carrying out non-consensual medical interventions for the purposes of facilitating rehabilitation amongst criminal offenders. Having considered examples of non-consensual interventions in IDC that seem to be morally permissible, I shall describe two different moral frameworks that a defender of this practice might invoke in order to justify such interventions. I shall then identify five desiderata that can be used to guide the assessments of the moral permissibility of non-consensual IDC interventions on either kind of fundamental justification. Following this analysis, I shall consider how the justification of non-consensual interventions for the purpose of IDC compares to the justification of non-consensual interventions for the purpose of facilitating criminal rehabilitation, according to these five desiderata. I shall argue that this analysis suggests that a plausible case can be made in favour of carrying out certain sorts of non-consensual interventions for the purpose of facilitating rehabilitation amongst criminal offenders.