
Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers? written by Mahmoud Ghanem


This essay received an Honourable Mention in the undergraduate category of the Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student, Mahmoud Ghanem

The Case For Computer Assisted Ethics

In the interest of rigour, I will avoid use of the phrase “Artificial Intelligence”, though many of the techniques I will discuss, namely statistical inference and automated theorem proving, underpin most of what is described as “AI” today.

Whether we believe that the goal of moral actions ought to be to form good habits, to maximise some quality in the world, to follow the example of certain role models, or to adhere to some set of rules or guiding principles, a good case can be made for consulting a well-designed computer program in the process of making our moral decisions. After all, successfully carrying out any of the above requires at least:

(1) Access to relevant and accurate data, and

(2) The ability to draw accurate conclusions by analysing such data.

Both of which are things that computers are very good at.
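
The essay names statistical inference among the techniques that could do this work. As a purely illustrative sketch, not drawn from the essay and using invented feature names and toy cases, the snippet below shows one way requirements (1) and (2) might be realised in code: a small record of past judgements stands in for the data, and a naive-Bayes style of scoring stands in for the inference that generalises from it.

    # Hypothetical illustration (not from the essay): a toy "moral adviser"
    # that scores a new case against a record of past judgements using
    # naive-Bayes-style inference with add-one smoothing. All feature names
    # and cases are invented for the sake of the example.
    from collections import Counter

    # Requirement (1): access to relevant data, here a tiny set of past cases.
    PAST_CASES = [
        ({"causes_harm": 1, "breaks_promise": 0, "helps_many": 1}, "permissible"),
        ({"causes_harm": 1, "breaks_promise": 1, "helps_many": 0}, "impermissible"),
        ({"causes_harm": 0, "breaks_promise": 0, "helps_many": 1}, "permissible"),
        ({"causes_harm": 0, "breaks_promise": 1, "helps_many": 0}, "impermissible"),
    ]

    # Requirement (2): draw a conclusion by analysing that data.
    def judge(new_case):
        """Return the judgement whose past cases best match the new case."""
        labels = Counter(label for _, label in PAST_CASES)
        scores = {}
        for label, label_count in labels.items():
            # Prior: how common this judgement is among past cases.
            score = label_count / len(PAST_CASES)
            for feature, value in new_case.items():
                # Smoothed likelihood: fraction of this label's cases sharing
                # the feature value, never allowed to reach exactly zero.
                matches = sum(1 for feats, lab in PAST_CASES
                              if lab == label and feats.get(feature) == value)
                score *= (matches + 1) / (label_count + 2)
            scores[label] = score
        return max(scores, key=scores.get)

    print(judge({"causes_harm": 1, "breaks_promise": 0, "helps_many": 1}))

Nothing in this sketch settles which judgements count as accurate; it only illustrates that, given labelled data, the inferential step itself is the kind of task computers handle routinely.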

Announcement: 2nd Annual Oxford Uehiro Prize in Practical Ethics: Finalists and Honourable Mentions


The 2nd Annual Oxford Uehiro Prize in Practical Ethics was announced on this blog on the 11th November 2015. By the 25th January 2016 a large number of high-quality essays had been submitted, and the judges had a difficult time narrowing the field down to 5 finalists and 6 Honourable Mentions, which are now listed here. We are very pleased to announce that over the next few weeks we will be publishing the essays listed below in our Oxford Uehiro Prize in Practical Ethics series.

Response to Fergus Peace


Author: Neil Levy, Leverhulme Visiting Professor

Podcasts of Prof Levy’s Leverhulme Lectures can be found here:

http://media.philosophy.ox.ac.uk/uehiro/HT16_LL_LEVY1.mp3

and http://media.philosophy.ox.ac.uk/uehiro/HT16_LL_LEVY2.mp3

Fergus Peace’s responses to my lectures are interesting and challenging. As he notes, in my lectures I focused on two questions:

(1) Are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?

(2) When those attitudes cause actions which seem appropriately to be characterized as racist (sexist, homophobic…), are we morally responsible for these actions (more precisely, for the fact that they can be classified in these morally laden terms)?

He suggests that these questions simply are not important ones to ask. Getting clear on how we ought to respond to implicit biases (what steps we ought to take to mitigate their effects or to eliminate them) matters, but asking whether a certain label attaches to us does not. Nor does it matter whether we are morally responsible for the actions these attitudes cause.

The first challenge seems to me to be a good one; I will discuss it after I have discussed the challenge concerning our moral responsibility, which seems to me very much weaker.


Why it matters if people are racist: A Response to Neil Levy’s Leverhulme Lectures


Author: Fergus Peace, BPhil student, University of Oxford

Podcasts of Prof. Levy’s Leverhulme lectures are available here:

http://media.philosophy.ox.ac.uk/uehiro/HT16_LL_LEVY1.mp3

and http://media.philosophy.ox.ac.uk/uehiro/HT16_LL_LEVY2.mp3

It’s only a little more than forty years ago that George Wallace won the contest for Governor of Alabama by running ads with slogans like “Wake up Alabama! Blacks vow to take over Alabama” and “Do you want the black bloc electing your governor?” That year, 1970, 50% of people surveyed in the American South said they would never – under any circumstances – vote for a black President. By 2012, that number was down to 8%, and it’s hard to deny that open, avowed racism has been in steep decline for most of the last forty years. But even as people’s overt commitment to racism declines, experiments still show that black candidates are less likely to be given job interviews than equally qualified white candidates; African-Americans are still disproportionately likely to be imprisoned, or shot by police.

So what’s going on? That is the motivating puzzle of Professor Neil Levy’s Leverhulme Lectures, and his answer centres on an increasingly well-known but still very disturbing psychological phenomenon: implicit bias. There are a range of tests which have uncovered evidence of implicit negative attitudes held – by a majority of white Americans, but a sizeable number of black Americans too – against black people. Harvard University’s ‘Project Implicit’ has a series of Implicit Association Tests (IATs); Keith Payne, among others, has developed tests of what he calls the Affect Misattribution Procedure (AMP). IATs ask us to sort faces and words according to their race and ‘valence’, and we find that task much easier when we have to associate black faces with negative words than we do otherwise. Tests of the AMP ask subjects to rate the pleasantness of an image which is entirely meaningless to them – a Chinese character, for people who don’t speak Chinese – and find that they rate it less pleasant if they’re shown an image of a black face immediately beforehand.

There’s no doubt these results are unsettling. (If you want to do an IAT online, as you should, you have to agree to receiving results you might disagree or be uncomfortable with before you proceed.) And they’re not just subconscious attitudes which are uncomfortable but insignificant; implicit bias as measured by these various tests is correlated with being less likely to vote for Barack Obama, and more likely to blame the black community for violence in protests against police brutality. Tests in virtual shooting ranges also reveal that it correlates with being more likely to shoot unarmed black men when given the task of shooting only those carrying weapons. Implicit biases certainly seem to cause, at least partly, racist actions and patterns of behaviour, like being quicker to shoot at unarmed black people and less likely to invite them for job interviews.

Professor Levy’s lectures grappled with two questions about these attitudes: first, do they make you a racist; and second, are you morally responsible for actions caused by your implicit biases? If you, like me, abhor racism and make that abhorrence at least some part of your political and social identity, but nonetheless come away with a “moderate automatic preference for European … compared to African” on the race IAT, then are you – protestations to the contrary – a racist? His answer to this question in the first lecture, based on the current state of conceptual investigation of what racism is and empirical evidence about the character of implicit biases, was a qualified no: they don’t clearly count as beliefs, or even as feelings, in a way that could let us confidently call people racist just because they possess them.

The second question is similarly complex. When interviewers prefer white applicants over equally qualified black ones, due to their implicit attitudes, are they responsible for the racist character of that action? Levy focused largely on the ‘control theory’ of moral responsibility, which says that you’re responsible for an action only if you exercise sufficient control over it. Levy’s answer to this question is a pretty clear no: implicit attitudes don’t have the right sort of attributes (in particular, reliable responsiveness to reasons and evidence) to count as giving you control over the actions they cause.

I find it very hard to disagree with the core of Professor Levy’s arguments on his two questions. The points I want to make in response come from a different direction, because after listening to the two lectures I’m not convinced that these are the important questions to be asking about implicit bias.


Video Series: Walter Sinnott-Armstrong on Moral Artificial Intelligence


Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take…

Does the desire to punish have any place in modern justice?


Professor Neil Levy, visiting Leverhulme Lecturer, University of Oxford, has recently published a provocative essay at Aeon online magazine: Human beings are a punitive species. Perhaps because we are social animals, and require the cooperation of others to achieve our goals, we are strongly disposed to punish those who take advantage of us. Those who…

The Allure of Donald Trump


The primary season is now well underway, and the Trump bandwagon continues to gather pace. Like most observers, I thought it would run out of steam well before this stage. Trump delights in the kinds of vicious attacks and stupidities that would derail any other candidate. His lack of shame and indifference to truth give him a kind of imperviousness to criticism. His candidacy no longer seems funny: it now arouses more horror than humour for many observers. Given that Trump is so awful – so bereft of genuine ideas, of intelligence, and obviously of decency – what explains his poll numbers?

Using birth control to combat Zika virus could affect future generations


Written by Simon Beard
Research Fellow in Philosophy, Future of Humanity Institute, University of Oxford

This is a cross post of an article which originally appeared in The Conversation.

In a recent article, Oxford University’s director of medical ethics, Dominic Wilkinson, argued that birth control was a key way of tackling the Zika virus’s apparently devastating effects on unborn children – a strategy that comes with the extra benefit of meeting the need for reproductive health across much of the affected areas.

However, although this approach might be one solution to a medical issue, it doesn’t consider the demographic implications of delaying pregnancy on such an unprecedented scale – some of which could have a significant impact on people and societies.

A jobless world—dystopia or utopia?

There is no telling what machines might be able to do in the not-too-distant future. It is humbling to realise how wrong we have been in the past when predicting the limits of machine capabilities.

We once thought that it would never be possible for a computer to beat a world champion in chess, a game that was thought to be the expression of the quintessence of human intelligence. We were proven wrong in 1997, when Deep Blue beat Garry Kasparov. Once we came to terms with the idea that computers might be able to beat us at any intellectual game (including Jeopardy!, and more recently, Go), we thought that surely they would be unable to engage in activities where we typically need to use common sense and coordination to physically respond to disordered conditions, as when we drive. Driverless cars are now a reality, with Google trying to commercialise them by 2020.

Machines assist doctors in exploring treatment options; they score tests, plant and pick crops, trade stocks, store and retrieve our documents, process information, and play a crucial role in the manufacturing of almost every product we buy.

As machines become more capable, there are more incentives to replace human workers with computers and robots. Computers do not ask for a decent wage, they do not need rest or sleep, they do not need health benefits, they do not complain about how their superiors treat them, and they do not steal or laze away.


What is the relationship between science and morality?

Quick announcement: A podcast interview between Brian D. Earp (a.k.a. myself) and J. J. Chipchase for Naturalistic Philosophy has just been released: we talk about the relationship between science and morality, the is/ought distinction, free will, the replication crisis in science and medicine, problems with peer review, bullshit in academia, and Sam Harris’s The Moral Landscape, among other things. Check it…