Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers?
This essay received an Honourable Mention in the undergraduate category of the Oxford Uehiro Prize in Practical Ethics.
Written by University of Oxford student, Mahmoud Ghanem
The Case For Computer Assisted Ethics
In the interest of rigour, I will avoid the phrase “Artificial Intelligence”, though many of the techniques I will discuss, namely statistical inference and automated theorem proving, underpin most of what is described as “AI” today.
Whether we believe that the goal of moral action ought to be to form good habits, to maximise some quality in the world, to follow the example of certain role models, or to adhere to some set of rules or guiding principles, a good case can be made for consulting a well designed computer program when making our moral decisions. After all, carrying out any of the above successfully requires at least:
(1) Access to relevant and accurate data, and
(2) The ability to draw accurate conclusions by analysing such data.
Both are things that computers are very good at.
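To see how little machinery requirement (2) can involve, consider a consequentialist decision aid built on nothing more than elementary statistical inference. The sketch below is purely illustrative; the candidate actions, outcome probabilities, and moral value scores are invented for the example. It simply ranks actions by expected moral value.

```python
# Illustrative sketch of a consequentialist decision aid.
# Each candidate action has possible outcomes, each given as a
# (probability, moral value) pair; the values here are hypothetical.

actions = {
    "donate": [(0.9, 8.0), (0.1, -1.0)],
    "volunteer": [(0.6, 10.0), (0.4, 2.0)],
    "do_nothing": [(1.0, 0.0)],
}

def expected_value(outcomes):
    """Expected moral value of an action: the sum of p * v over its outcomes."""
    return sum(p * v for p, v in outcomes)

# Rank the actions from highest to lowest expected moral value.
ranked = sorted(actions, key=lambda a: expected_value(actions[a]), reverse=True)
print(ranked)
```

Of course, the hard philosophical work is hidden in the inputs: where the probabilities come from (requirement (1)) and what the value scores measure. The program only makes the inference step explicit.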
Author: Neil Levy, Leverhulme Visiting Professor
Podcasts of Prof Levy’s Leverhulme Lectures can be found here:
Fergus Peace’s responses to my lectures are interesting and challenging. As he notes, in my lectures I focused on two questions:
(1) Are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?
(2) When those attitudes cause actions which seem appropriately to be characterized as racist (sexist, homophobic…), are we morally responsible for these actions (more precisely, for the fact that they can be classified in these morally laden terms)?
He suggests that these questions simply are not important ones to ask. Getting clear on how we ought to respond to implicit biases (what steps we ought to take to mitigate their effects or to eliminate them) matters, but asking whether a certain label attaches to us does not. Nor does it matter whether we are morally responsible for the actions these attitudes cause.
The first challenge seems to me to be a good one; I will return to it after discussing the challenge concerning our moral responsibility, which seems to me very much weaker.
Author: Fergus Peace, BPhil student, University of Oxford
Podcasts of Prof. Levy’s Leverhulme lectures are available here:
It’s only a little more than forty years ago that George Wallace won the contest for Governor of Alabama by running ads with slogans like “Wake up Alabama! Blacks vow to take over Alabama” and “Do you want the black bloc electing your governor?” That year, 1970, 50% of people surveyed in the American South said they would never – under any circumstances – vote for a black President. By 2012, that number was down to 8%, and it’s hard to deny that open, avowed racism has been in steep decline for most of the last forty years. But even as people’s overt commitment to racism declines, experiments still show that black candidates are less likely to be given job interviews than equally qualified white candidates; African-Americans are still disproportionately likely to be imprisoned, or shot by police.
So what’s going on? That is the motivating puzzle of Professor Neil Levy’s Leverhulme Lectures, and his answer centres on an increasingly well-known but still very disturbing psychological phenomenon: implicit bias. There are a range of tests which have uncovered evidence of implicit negative attitudes held – by a majority of white Americans, but a sizeable number of black Americans too – against black people. Harvard University’s ‘Project Implicit’ has a series of Implicit Association Tests (IATs); Keith Payne, among others, has developed tests of what he calls the Affect Misattribution Procedure (AMP). IATs ask us to sort faces and words according to their race and ‘valence’, and we find that task much easier when we have to associate black faces with negative words than we do otherwise. Tests of the AMP ask subjects to rate the pleasantness of an image which is entirely meaningless to them – a Chinese character, for people who don’t speak Chinese – and find that they rate it less pleasant if they’re shown an image of a black face immediately beforehand.
There’s no doubt these results are unsettling. (If you want to do an IAT online, as you should, you have to agree, before you proceed, to receiving results you might disagree with or be uncomfortable with.) And they’re not just subconscious attitudes which are uncomfortable but insignificant; implicit bias as measured by these various tests is correlated with being less likely to vote for Barack Obama, and more likely to blame the black community for violence in protests against police brutality. Tests in virtual shooting ranges also reveal that it correlates with being more likely to shoot unarmed black men when given the task of shooting only those carrying weapons. Implicit biases certainly seem to cause, at least partly, racist actions and patterns of behaviour, like being quicker to shoot at unarmed black people and less likely to invite them for job interviews.
Professor Levy’s lectures grappled with two questions about these attitudes: first, do they make you a racist; and second, are you morally responsible for actions caused by your implicit biases? If you, like me, abhor racism and make that abhorrence at least some part of your political and social identity, but nonetheless come away with a “moderate automatic preference for European … compared to African” on the race IAT, then are you – protestations to the contrary – a racist? His answer to this question in the first lecture, based on the current state of conceptual investigation of what racism is and empirical evidence about the character of implicit biases, was a qualified no: they don’t clearly count as beliefs, or even as feelings, in a way that could let us confidently call people racist just because they possess them.
The second question is similarly complex. When interviewers prefer white applicants over equally qualified black ones, due to their implicit attitudes, are they responsible for the racist character of that action? Levy focused largely on the ‘control theory’ of moral responsibility, which says that you’re responsible for an action only if you exercise sufficient control over it. Levy’s answer to this question is a pretty clear no: implicit attitudes don’t have the right sort of attributes (in particular, reliable responsiveness to reasons and evidence) to count as giving you control over the actions they cause.
I find it very hard to disagree with the core of Professor Levy’s arguments on his two questions. The points I want to make in response come from a different direction, because after listening to the two lectures I’m not convinced that these are the important questions to be asking about implicit bias.
Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take over the world? Professor Sinnott-Armstrong explains…
Professor Neil Levy, visiting Leverhulme Lecturer, University of Oxford, has recently published a provocative essay at Aeon online magazine:
Human beings are a punitive species. Perhaps because we are social animals, and require the cooperation of others to achieve our goals, we are strongly disposed to punish those who take advantage of us. Those who ‘free-ride’, taking benefits to which they are not entitled, are subject to exclusion, the imposition of fines or harsher penalties. Wrongdoing arouses strong emotions in us, whether it is done to us, or to others. Our indignation and resentment have fuelled a dizzying variety of punitive practices – ostracism, branding, beheading, quartering, fining, and very many more. The details vary from place to place and from time to time, but punishment has been a human universal, because it has been in our evolutionary interests. However, those evolutionary impulses are crude guides to how we should deal with offenders in contemporary society.
Our moral emotions fuel our impulses toward retribution. Retributivists believe that people should be punished because that’s what they deserve. Retributivism is not the only justification for punishment, of course. We also punish to deter others, to prevent the person offending again, and perhaps to rehabilitate the offender. But these consequentialist grounds alone cannot justify our current system of criminal justice. We want punishments to ‘fit the crime’ – the worse the crime, the worse the punishment – without regard for the evidence of whether it ‘works’, that is, without thinking about punishment in consequentialist terms.
See here for the full article, and to join in the conversation.
Professor Levy has also written on this topic in the Journal of Practical Ethics: “Less Blame, Less Crime? The Practical Implications of Moral Responsibility Skepticism”.
Pedro Jesús Pérez Zafrilla.
Lecturer in Moral Philosophy.
Department of Moral Philosophy.
(University of Valencia)
The development of the neurosciences has had a major impact on the field of philosophy. In this respect, Spanish philosophy is no exception. In particular, the Valencia School led by Adela Cortina has played a leading part in the momentum of neuroethics in Spain. Our research has tackled various areas, such as human enhancement, free will and moral psychology. My intention in this post is to briefly present a critique concerning cognitive psychology. Specifically, I want to argue that moral dilemmas are not an appropriate method of analysing moral judgment. In my opinion, dilemmas misrepresent the way in which people form their moral judgments.
Should vegans eat meat to be ethically consistent? And other moral puzzles from the latest issue of the Journal of Practical Ethics
By Brian D. Earp (@briandavidearp)
The latest issue of The Journal of Practical Ethics has just been published online, and it includes several fascinating essays (see the abstracts below). In this blog post, I’d like to draw attention to one of them in particular, because it seemed to me to be especially creative and because it was written by an undergraduate student! The essay – “How Should Vegans Live?” – is by Oxford student Xavier Cohen. I had the pleasure of meeting Xavier several months ago when he presented an earlier draft of his essay at a lively competition in Oxford: he and several others were finalists for the Oxford Uehiro Prize in Practical Ethics, for which I was honored to serve as one of the judges.
In a nutshell, Xavier argues that ethical vegans – that is, vegans who refrain from eating animal products specifically because they wish to reduce harm to animals – may actually be undermining their own aims. This is because, he argues, many vegans are so strict about the lifestyle they adopt (and often advocate) that they end up alienating people who might otherwise be willing to make less-drastic changes to their behavior that would promote animal welfare overall. Moreover, by focusing too narrowly on the issue of directly refraining from consuming animal products, vegans may fail to realize how other actions they take may be indirectly harming animals, perhaps even to a greater degree.
1 in 4 women: How the latest sexual assault statistics were turned into click bait by the New York Times
* Note: this article was originally published at the Huffington Post.
As someone who has worked on college campuses to educate men and women about sexual assault and consent, I have seen the barriers to raising awareness and changing attitudes. Chief among them, in my experience, is a sense of skepticism–especially among college-aged men–that sexual assault is even all that dire of a problem to begin with.
“1 in 4? 1 in 5? Come on, it can’t be that high. That’s just feminist propaganda!”
A lot of the statistics that get thrown around in this area (they seem to think) have more to do with politics and ideology than with careful, dispassionate science. So they often wave away the issue of sexual assault–and won’t engage on issues like affirmative consent.
In my view, these are the men we really need to reach.
A new statistic
So enter the headline from last week’s New York Times coverage of the latest college campus sexual assault survey:
But that’s not what the survey showed. And you don’t have to read all 288 pages of the published report to figure this out (although I did that today just to be sure). The executive summary is all you need.
Just out today is a podcast interview for Smart Drug Smarts between host Jesse Lawler and interviewee Brian D. Earp on “The Medicalization of Love” (title taken from a recent paper with Anders Sandberg and Julian Savulescu, available from the Cambridge Quarterly of Healthcare Ethics, here).
Below is the abstract and link to the interview:
What is love? A loaded question with the potential to lead us down multiple rabbit holes (and, if you grew up in the 90s, evoke memories of the Haddaway song). In episode #95, Jesse welcomes Brian D. Earp on board for a thought-provoking conversation about the possibilities and ethics of making biochemical tweaks to this most celebrated of human emotions. With a topic like “manipulating love,” the discussion moves between the realms of neuroscience, psychology and transhumanist philosophy.
Earp, B. D., Sandberg, A., & Savulescu, J. (2015). The medicalization of love. Cambridge Quarterly of Healthcare Ethics, Vol. 24, No. 3, 323–336.
On Thursday 4th June the Double St Cross Special Ethics Seminar took place. Presenting were Dr Joshua Shepherd and Dr Mimi Zou. Please see below for abstracts and links to the podcasts of the talks.