Written by Richard Ngo, an undergraduate student in Computer Science and Philosophy at the University of Oxford.
Neil Levy’s Leverhulme Lectures start from the admirable position of integrating psychological results and philosophical arguments, with the goal of answering two questions:
(1) are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?
(2) when those implicit attitudes cause actions which seem appropriately to be characterised as racist (sexist, homophobic…), are we morally responsible for these actions?
Fergus Peace has already written an excellent essay evaluating the extent to which these questions are important. I won’t rehash his discussion of what Implicit Association Tests are, and what conclusions can be drawn from them, but I would like to extend the exchange which has taken place so far. I agree with him that the first question is to a large extent a matter of terminology. Professor Levy has argued in response that, given further research on what it means to be racist, we might be able to use the term to make reliable generalisations.
I have two objections to this. The first is that racism is a particularly difficult and loaded term to be working with in a philosophical sense, because of the way it is used throughout society, particularly in politically polarised contexts. It seems to invite an excess of intuitive disagreement, as well as popular misunderstanding, to try to co-opt the term ‘racist’ into philosophical discussions. Now, it might still be worth doing further research on what we should mean by ‘racist’, if we still thought that there was some distinct kind of person or behaviour which the term captured. But it seems that the type of research Professor Levy cites – implicit association tests in particular – is strong evidence that the common conception of racism is in fact a rather ad hoc and possibly incoherent notion. Specifically, it tries to capture people who might vary widely along at least four different dimensions – beliefs, deliberate choices, contribution to outcomes, and implicit associations – in a way which only confuses the relationship between these elements. Even if philosophy can come to clear answers about what is and isn’t racist, those answers will likely clash heavily with the usage encountered throughout the rest of society, making the philosophical study of racism even more impractical and abstract. Thus, rather than attempting to fit implicit associations into the traditional binary of racist or not, I think the psychological evidence might best be used to prompt a more nuanced discussion of whether ‘racist’ is a useful label at all.
The second question is more important, and I agree with Professor Levy’s argument that moral responsibility matters at the very least in “how we respond to ourselves and others”. Professor Levy gives two broad criteria which define when one might be held “directly responsible” for an action: the level of control you have (which must be broad, systematic and continuous), and the correspondence of the action with your personal identity. He argues that on neither condition do we have direct responsibility for actions caused by subconscious biases, and makes a number of interesting arguments to this effect. On control, in particular, I think there are several points leading on from his lecture which are well worth discussing – for example, must the relevant form of control be exerted internally, or can we claim to control our implicit biases simply by limiting the environments in which they can adversely affect our behaviour? And if we know which way we’ll be biased only on a statistical basis, at what point is this enough to qualify as “awareness” of a responsibility? Professor Levy’s contributions to these questions are a worthwhile challenge to some of the less nuanced moral discussion which currently surrounds implicit association tests.
However, I think this point stops short of where it needs to go – as particularly highlighted by one question asked after the second lecture. When asked whether drunk drivers have direct responsibility for the harm they cause, Professor Levy said no, and added that the indirect responsibility of people like drunk drivers is not a lesser level of responsibility than direct responsibility. At this point, it seems there has been a sleight of hand: the target of this section of the lecture, the idea of “direct responsibility”, has become neither the most relevant nor necessarily the most morally significant type of responsibility associated with the phenomenon under discussion. It seems highly plausible that, if there is no direct responsibility for actions influenced by implicit biases, there is at least indirect responsibility for not having countered such biases in the process of making the choice. Perhaps I suffer from a lack of familiarity with the subject, but I think that given Professor Levy’s unusually lenient view on where we should assign direct responsibility, it behoves him to explain in greater detail what conclusions this lack of direct responsibility leads us to, and how the direct–indirect responsibility distinction should actually affect our real-life moral judgements. After all, as Professor Levy acknowledges, in many cases there are ways to avert predictable implicit biases – blind auditions, removal of names from CVs, and so on – which seem, at first glance, roughly equivalent to taking precautionary measures before driving. I would be interested to hear arguments for why we should think of those affected by implicit biases differently from those who a) are directly responsible for discrimination on a similar level, or b) have even clearer ways in which they could have avoided indirect responsibility, such as not drinking before driving. Without that, the lectures remain an interesting analysis of subconscious racism, but one which is essentially detached from moral consequences.
It would certainly be ‘sleight of hand’ to start from the question of whether “those of us with egalitarian explicit beliefs” are racist, and then generalise to the wider population. Most people do not have “egalitarian explicit beliefs” at all; in fact, they are committed to structural inequalities which derive from their values and their sense of belonging to one or more specific communities. Immigration is an obvious example: all nation-states discriminate in favour of their own citizens, and most of those citizens never give that a thought, let alone see it as inequality or racism.
So the scenario presented here – a decision-maker with egalitarian explicit beliefs but implicit biases – is unlikely in the real world. What’s more, it ignores the real-world interests of both parties, and assumes a shared commitment to some quasi-neutral goal. The point is: why consider the moral responsibility of a decision-maker in a very small number of cases, when there are so many real-world cases to consider? That is where we have to look for racism, inequalities, discrimination, and so on – the real world, and not some hypothetical world inhabited by fictive test persons with researcher-selected characteristics.
My initial attempt at a response seems to have got lost. I will try to reconstruct it quickly.
Direct moral responsibility is certainly not the only thing that matters but – far from being “essentially detached from moral consequences” – it is the main thing that matters. Here are two reasons.
First, all moral responsibility is direct or traces back to direct responsibility. No one is ever indirectly responsible for something unless there is some previous act or omission for which they are directly morally responsible. So, for instance, someone can be indirectly morally responsible for their racist action in virtue of their failure to take steps such that, had they taken them, their implicit bias would have been reduced or circumvented, only if they are directly morally responsible for failing to take such steps. But it is very likely that this failure is itself one that was influenced by implicit attitudes. So the question concerning indirect moral responsibility can be answered only by probing direct moral responsibility as well.
Second, though there are indeed steps we can take to reduce or circumvent implicit biases, we will often lack the opportunity or the awareness that we ought to take such steps (the lack of awareness may sometimes itself be explained by our implicit biases). We can’t predict with sufficiently high probability when we are going to face choices in which our biases are relevant, so we can’t always circumvent them. And those methods that have been shown to mitigate implicit biases do not have persisting effects (see Lai et al., “Reducing Implicit Racial Preferences: II. Intervention Effectiveness Across Time”, available at SSRN). So even if we are maximally conscientious and well-informed agents, we can expect to find ourselves in situations in which our implicit biases will influence our actions, and we are not indirectly responsible for this fact.