Author: Fergus Peace, BPhil student, University of Oxford
Podcasts of Prof. Levy’s Leverhulme lectures are available here:
It’s only a little more than forty years since George Wallace won the contest for Governor of Alabama by running ads with slogans like “Wake up Alabama! Blacks vow to take over Alabama” and “Do you want the black bloc electing your governor?” That year, 1970, 50% of people surveyed in the American South said they would never – under any circumstances – vote for a black President. By 2012, that number was down to 8%, and it’s hard to deny that open, avowed racism has been in steep decline for most of the last forty years. But even as people’s overt commitment to racism declines, experiments still show that black candidates are less likely to be given job interviews than equally qualified white candidates, and African-Americans are still disproportionately likely to be imprisoned, or shot by police.
So what’s going on? That is the motivating puzzle of Professor Neil Levy’s Leverhulme Lectures, and his answer centres on an increasingly well-known but still very disturbing psychological phenomenon: implicit bias. There are a range of tests which have uncovered evidence of implicit negative attitudes held – by a majority of white Americans, but a sizeable number of black Americans too – against black people. Harvard University’s ‘Project Implicit’ has a series of Implicit Association Tests (IATs); Keith Payne, among others, has developed tests of what he calls the Affect Misattribution Procedure (AMP). IATs ask us to sort faces and words according to their race and ‘valence’, and we find that task much easier when we have to associate black faces with negative words than we do otherwise. Tests of the AMP ask subjects to rate the pleasantness of an image which is entirely meaningless to them – a Chinese character, for people who don’t speak Chinese – and find that they rate it less pleasant if they’re shown an image of a black face immediately beforehand.
There’s no doubt these results are unsettling. (If you want to do an IAT online, as you should, you have to agree to receive results you might disagree or be uncomfortable with before you proceed.) And they’re not just subconscious attitudes which are uncomfortable but insignificant; implicit bias as measured by these various tests is correlated with being less likely to vote for Barack Obama, and more likely to blame the black community for violence in protests against police brutality. Tests in virtual shooting ranges also reveal that it correlates with being more likely to shoot unarmed black men when given the task of shooting only those carrying weapons. Implicit biases certainly seem to cause, at least partly, racist actions and patterns of behaviour, like being quicker to shoot at unarmed black people and less likely to invite them for job interviews.
Professor Levy’s lectures grappled with two questions about these attitudes: first, do they make you a racist; and second, are you morally responsible for actions caused by your implicit biases? If you, like me, abhor racism and make that abhorrence at least some part of your political and social identity, but nonetheless come away with a “moderate automatic preference for European … compared to African” on the race IAT, then are you – protestations to the contrary – a racist? His answer to this question in the first lecture, based on the current state of conceptual investigation of what racism is and empirical evidence about the character of implicit biases, was a qualified no: they don’t clearly count as beliefs, or even as feelings, in a way that could let us confidently call people racist just because they possess them.
The second question is similarly complex. When interviewers prefer white applicants over equally qualified black ones, due to their implicit attitudes, are they responsible for the racist character of that action? Levy focused largely on the ‘control theory’ of moral responsibility, which says that you’re responsible for an action only if you exercise sufficient control over it. Levy’s answer to this question is a pretty clear no: implicit attitudes don’t have the right sort of attributes (in particular, reliable responsiveness to reasons and evidence) to count as giving you control over the actions they cause.
I find it very hard to disagree with the core of Professor Levy’s arguments on his two questions. The points I want to make in response come from a different direction, because after listening to the two lectures I’m not convinced that these are the important questions to be asking about implicit bias.
Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.
Scientists are people too
In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.
At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.
And it is with that in mind that I bring up the subject of bullshit.
The Uehiro Centre for Practical Ethics (University of Oxford) and the Centre for Applied Philosophy and Public Ethics (Charles Sturt University) hosted the Conscience and Conscientious Objection in Healthcare Conference, on conscientious objection in medicine and the role of conscience in healthcare practitioners’ decision making. It was held at the Oxford Martin School on the 23rd and 24th of November, and organised by Julian Savulescu (University of Oxford), Alberto Giubilini (Charles Sturt University) and Steve Clarke (Charles Sturt University).
For the full program please follow this link.
The conference was aimed at analyzing, from philosophical, ethical and legal perspectives, the meaning and role of “conscience” in the healthcare profession. Conscientious objection by health professionals has become one of the most pressing problems in healthcare ethics. Health professionals are often required to perform activities that conflict with their own moral or religious beliefs (for example, abortion). Their refusal can make it difficult for patients to access services they have a right to and, more generally, can create conflicts in the doctor-patient relationship. The widening range of medical options available today or in the near future is likely to sharpen these conflicts. Experts in bioethics, philosophy, law and medicine explored possible solutions.
The conference was supported by the Uehiro Centre for Practical Ethics and an Australian Research Council Discovery Grant (DP 150102068). We are grateful to the Oxford Martin School for providing the venue for the conference.
On the Oxford Uehiro Centre for Practical Ethics website you will find both video and audio files of various commentaries and talks from the conference.
*Note that this article was originally published at the Huffington Post.
In the New York Times yesterday, psychologist Lisa Feldman Barrett argues that “Psychology is Not in Crisis.” She is responding to the results of a large-scale initiative called the Reproducibility Project, published in Science magazine, which appeared to show that the findings from over 60 percent of a sample of 100 psychology studies did not hold up when independent labs attempted to replicate them.
She argues that “the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.” To illustrate this point, she gives us the following scenario:
Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.
Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.
She’s making a pretty big assumption here, which is that the studies we’re interested in are “well-designed” and “carefully run.” But a major reason for the so-called “crisis” in psychology — and I’ll come back to the question of just what kind of crisis we’re really talking about (see my title) — is the fact that a very large number of not-well-designed, and not-carefully-run studies have been making it through peer review for decades.
Small sample sizes, sketchy statistical procedures, incomplete reporting of experiments, and so on, have been pretty convincingly shown to be widespread in the field of psychology (and in other fields as well), leading to the publication of a resource-wastingly large percentage of “false positives” (read: statistical noise that happens to look like a real result) in the literature.
Written by Christopher Chew
Treasurer, do you accept that housing in Sydney is unaffordable and the only way we’re going to make it affordable is if real house prices in real terms actually fall over the near term?
TREASURER JOE HOCKEY:
No. Look, if housing were unaffordable in Sydney, no one would be buying it…it’s expensive.…but, having said that…a lot of people would much rather have their homes go up in value…
You say that housing is affordable…what about for first home buyers…people that don’t have access to equity in other properties?
TREASURER JOE HOCKEY:
…the starting point for a first home buyer is to get a good job that pays good money… you can go to the bank and you can borrow money and that’s readily affordable…
Recent careless comments made by Australian Treasurer Joe Hockey during a radio interview (see above) have provoked a firestorm of media outrage and scorn, with accusations that he is ‘out of touch’ and elitist. In all fairness, more has been made of these comments than is likely warranted – though the Treasurer’s enviable property portfolio, including an AUD$5.4 million primary residence, and a history of previous embarrassing gaffes haven’t helped.
Written By Johanna Ahola-Launonen
University of Helsinki
In bioethical discussion, it is often debated whether or not some studies espouse genetic determinism. A recent study by Tuomas Aivelo and Anna Uitto gives important insight into the matter. They studied the main genetics textbooks used in Finnish upper secondary school curricula and compared the results to similar studies of, for example, Swedish and English textbooks. The authors found that the gene models used in the textbooks rest on old “Mendelian law”-based models that are not compatible with current knowledge of gene-gene-environment interaction. The authors also identified two types of genetic determinism – weak determinism and strong determinism – both of which were present in the textbooks. The somewhat intuitive remark is that genetics education is likely to have a strong trickle-down effect on how people understand genes, and that we should be careful not to perpetuate these flawed conceptions. Furthermore, it would be useful to separate the discussion of genetic determinism into “weak” and “strong” versions, of which the strong version is undoubtedly rarer while the weak is more prevalent.
Written by Constantin Vica
Postdoctoral Fellow, Romanian Academy Iasi Branch
Research Center in Applied Ethics, University of Bucharest
This post is not, as one might expect, about that part of ethics which is not concerned with practical issues, e.g. meta-ethics. Neither is it about moral philosophical endeavors which are incomprehensible, highly conceptual and without any connection to real people’s lives. And, more than that, it is not about how impractical a philosophy/ethics diploma is for finding a job.
One month ago Peter Singer, the leading ethicist and philosopher, was ‘disinvited’ from a philosophy festival in Cologne. It wasn’t the first time such a thing had happened, and perhaps Peter Singer wasn’t too troubled by the incident. Nonetheless, the episode carries a not-so-nice implication: “you, the practical ethicist, are not welcome in our city!” Of course, Peter Singer is not the first philosopher ‘disinvited’ (horribile dictu) by an ‘honorable’ audience; the history of philosophy and free thinking has an extensive collection of undesirable individuals expelled, exiled, and even killed by furious or ignorant citizens and stubborn elites. But, one might wonder, what is different this time?
Written by Prof. Antonio Diéguez
Universidad de Málaga
The public image of science is usually subject to distortions that blur nuances and generate monolithic assessments. The mass media contribute to a large extent to the creation of disproportionate expectations about the imminent, spectacular benefits of scientific research or, on the contrary, to the creation of exaggerated concerns that often lack any rational basis. This is why any professional scientist with the required talent and vocation should now take on the task of offering the public clear and accessible information about ongoing research in their field. In the present circumstances, science communication cannot be a personal hobby of a few scientists or the exclusive task of scientifically educated writers; it must be a central aspect of scientific practice. Science needs a good public image for its survival – at least in the form it has had so far. If scientists do not determinedly and abundantly provide the information society demands, then citizens will look for it in less reliable sources (the Internet has plenty of them), with the consequent proliferation of bad information. Information is like money: the counterfeit ends up circulating better than the genuine.
Written by Johanna Ahola-Launonen
University of Helsinki
What should bioethical discussion be like? The academic debate involves a tension between different parties that are often difficult to compare. To mention some: some draw on the tradition of liberal consequentialism and demand rationalism and the avoidance of lofty moral arguments; others descend from the teleological and communitarian traditions, emphasizing that moral issues ought to be confronted holistically, in all their complexity, accepting that they cannot be analyzed into logical, tidy fragments.
The latest issue of the Journal of Medical Ethics is out, and in it, Professor Nigel Biggar—an Oxford theologian—argues that “religion” should have a place in secular medicine (click here for a link to the article).
Some people will feel a shiver go down their spines—and not only the non-religious. After all, different religions require different things, and sometimes they come to opposite conclusions. So whose religion, exactly, does Professor Biggar have in mind, and what kind of “place” is he trying to make a case for?