

Why it matters if people are racist: A Response to Neil Levy’s Leverhulme Lectures


Author: Fergus Peace, BPhil student, University of Oxford

Podcasts of Prof. Levy’s Leverhulme lectures are available here:

http://media.philosophy.ox.ac.uk/uehiro/HT16_LL_LEVY1.mp3

and http://media.philosophy.ox.ac.uk/uehiro/HT16_LL_LEVY2.mp3

It was only a little more than forty years ago that George Wallace won the governorship of Alabama by running ads with slogans like “Wake up Alabama! Blacks vow to take over Alabama” and “Do you want the black bloc electing your governor?” That year, 1970, 50% of people surveyed in the American South said they would never – under any circumstances – vote for a black President. By 2012, that number was down to 8%, and it’s hard to deny that open, avowed racism has been in steep decline for most of the last forty years. But even as people’s overt commitment to racism declines, experiments still show that black candidates are less likely to be given job interviews than equally qualified white candidates; African-Americans are still disproportionately likely to be imprisoned, or shot by police.

So what’s going on? That is the motivating puzzle of Professor Neil Levy’s Leverhulme Lectures, and his answer centres on an increasingly well-known but still very disturbing psychological phenomenon: implicit bias. A range of tests has uncovered evidence of implicit negative attitudes against black people, held by a majority of white Americans but also by a sizeable number of black Americans. Harvard University’s ‘Project Implicit’ has a series of Implicit Association Tests (IATs); Keith Payne, among others, has developed what he calls the Affect Misattribution Procedure (AMP). IATs ask us to sort faces and words according to their race and ‘valence’, and we find that task much easier when we have to associate black faces with negative words than we do otherwise. AMP tests ask subjects to rate the pleasantness of an image which is entirely meaningless to them – a Chinese character, for people who don’t speak Chinese – and find that they rate it less pleasant if they’re shown an image of a black face immediately beforehand.

There’s no doubt these results are unsettling. (If you want to do an IAT online, as you should, you have to agree, before you proceed, to receive results you might disagree with or be uncomfortable with.) And they’re not just subconscious attitudes which are uncomfortable but insignificant; implicit bias as measured by these various tests is correlated with being less likely to vote for Barack Obama, and more likely to blame the black community for violence in protests against police brutality. Tests in virtual shooting ranges also reveal that it correlates with being more likely to shoot unarmed black men when given the task of shooting only those carrying weapons. Implicit biases certainly seem to cause, at least in part, racist actions and patterns of behaviour, like being quicker to shoot at unarmed black people and less likely to invite them for job interviews.

Professor Levy’s lectures grappled with two questions about these attitudes: first, do they make you a racist; and second, are you morally responsible for actions caused by your implicit biases? If you, like me, abhor racism and make that abhorrence at least some part of your political and social identity, but nonetheless come away with a “moderate automatic preference for European … compared to African” on the race IAT, then are you – protestations to the contrary – a racist? His answer to this question in the first lecture, based on the current state of conceptual investigation of what racism is and empirical evidence about the character of implicit biases, was a qualified no: they don’t clearly count as beliefs, or even as feelings, in a way that could let us confidently call people racist just because they possess them.

The second question is similarly complex. When interviewers prefer white applicants over equally qualified black ones, due to their implicit attitudes, are they responsible for the racist character of that action? Levy focused largely on the ‘control theory’ of moral responsibility, which says that you’re responsible for an action only if you exercise sufficient control over it. Levy’s answer to this question is a pretty clear no: implicit attitudes don’t have the right sort of attributes (in particular, reliable responsiveness to reasons and evidence) to count as giving you control over the actions they cause.

I find it very hard to disagree with the core of Professor Levy’s arguments on his two questions. The points I want to make in response come from a different direction, because after listening to the two lectures I’m not convinced that these are the important questions to be asking about implicit bias.


Video Series: Walter Sinnott-Armstrong on Moral Artificial Intelligence


Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take…

Does the desire to punish have any place in modern justice?


Professor Neil Levy, visiting Leverhulme Lecturer, University of Oxford, has recently published a provocative essay in the online magazine Aeon: Human beings are a punitive species. Perhaps because we are social animals, and require the cooperation of others to achieve our goals, we are strongly disposed to punish those who take advantage of us. Those who…

Using birth control to combat Zika virus could affect future generations


Written by Simon Beard
Research Fellow in Philosophy, Future of Humanity Institute, University of Oxford

This is a cross-post of an article which originally appeared in The Conversation.

In a recent article, Oxford University’s director of medical ethics, Dominic Wilkinson, argued that birth control was a key way of tackling the Zika virus’s apparently devastating effects on unborn children – a strategy that comes with the extra benefit of meeting the need for reproductive health across much of the affected region.

However, although this approach might be one solution to a medical issue, it doesn’t consider the demographic implications of delaying pregnancy on such an unprecedented scale – some of which could have a significant impact on people and societies.

Guest Post: Does Humanity Want Computers Making Moral Decisions?


Albert Barqué-Duran
Department of Psychology
City University London

A runaway trolley is approaching a fork in the tracks. If the trolley is allowed to run on its current track, a work crew of five will be killed. If the driver steers the trolley down the other branch, a lone worker will be killed. If you were driving this trolley what would you do? What would a computer or robot driving this trolley do? Autonomous systems are coming whether people like it or not. Will they be ethical? Will they be good? And what do we mean by “good”?

Many agree that artificial moral agents are necessary and inevitable. Others say that the idea of artificial moral agents intensifies their distress about cutting-edge technology. There is something paradoxical in the idea that one could relieve the anxiety created by sophisticated technology with even more sophisticated technology. A tension exists between the fascination with technology and the anxiety it provokes. This anxiety could be explained by (1) all the usual futurist fears about technology on a trajectory beyond human control and (2) worries about what this technology might reveal about human beings themselves. The question is not what technology will be like in the future, but rather what we will be like, and what we are becoming, as we forge increasingly intimate relationships with our machines. What will be the human consequences of attempting to mechanize moral decision-making?


2nd Annual Oxford Uehiro Prize in Practical Ethics Finals Announcement


The 2nd Annual Oxford Uehiro Prize in Practical Ethics Final Presentation and Reception

HT16 Week 7, Wednesday 2nd March, 4.00 – 5.50 pm.

The Presentation will be held in Seminar Room 1, Oxford Martin School (corner of Catte St and Broad St), followed by a drinks reception in Seminar Room 2 until 6.45 pm.

We are pleased to announce the five finalists for the Oxford Uehiro Prize in Practical Ethics and to invite you to attend the final, where they will present their entries. Two finalists have been selected from the undergraduate category and three from the graduate category; they will present their ideas to an audience and respond to a short Q&A as the final round of the competition.

Five ways to become a really effective altruist


Written by Professor Julian Savulescu and Professor Walter Sinnott-Armstrong

This is a cross-post of an article which was originally published in The Conversation

Effective altruism is a philosophy and social movement which aims not only to increase charitable donations of time and money (and, more broadly, to encourage leading a lifestyle which does good in the world), but also to encourage the most effective use of these resources, usually by looking for measurable impacts such as lives saved per dollar.

For an effective altruist, the core question is: “Of all the possible ways to make a difference, how can I make the greatest difference?” It might be argued, for example, that charity work isn’t the best use of time; a talented financier may be better off working for a bank and using their earnings to pay for others to work for charities instead.

Video Series: Professor Walter Sinnott-Armstrong on Conscientious Objection in Healthcare

Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin School Visiting Fellow) proposes using market forces to solve problems of conscientious objection in healthcare in the US. (He also has a suggestion for how to deal with conscientious objection in a public healthcare system, and gives a controversial answer to my question regarding discriminatory…