Oxford Uehiro Prize in Practical Ethics: Why, If At All, Is It Unethical For Universities To Prioritise Applicants Related To Their Alumni?

This essay was the runner up in the undergraduate category of the 7th Annual Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student Tanae Rao


Most notably in the United States, some prestigious universities[1] consider whether or not a student is closely related to one or more alumni when evaluating her application. In an increasingly competitive university admissions landscape, having legacy status increases an applicant’s probability of being admitted to such a great extent that over a third of Harvard’s undergraduate class of 2022 is composed of legacy students.[2] This has led the New York Times Editorial Board to describe the practice as “anti-meritocratic” and “an engine of inequity”.[3]

Considering the alma mater of a student’s relatives when evaluating their university application seems to be wrong, or unfair, in some way. But what is the central aspect of the legacy admissions policy justifying this reaction? I consider three possible answers to this question. Firstly, I reject the academic qualification view, whereby universities should only consider if applicants will be able to meet academic requirements when making admissions decisions. This view does not reflect the actual state of university admissions today, where the number of qualified applicants often far exceeds the number of available seats. I then reject the popular view whereby universities should minimise their consideration of factors outside of the applicant’s control. Though this criterion appears to meet many of our intuitions regarding university admissions, I argue that it is too restrictive, preventing reasonable factors from being considered by universities. Finally, I propose a consequentialist view, whereby admissions decisions should be based on their expected consequences to admitted students and society as a whole. This view—I contend—is a plausible explanation of why legacy admissions should be discontinued, contingent on some evaluative questions.

Arbitrariness as an Ethical Criticism

Written by Ben Davies

We recently saw a legal challenge to the current UK law that compels fertility clinics to destroy frozen eggs after a decade. According to campaigners, the ten-year limit may have had a rationale when it was instituted, but advances in freezing technology have rendered the limit “arbitrary”. Appeals to arbitrariness often form the basis of moral and political criticisms of policy. Still, we need to be careful in relying on appeals to arbitrariness; it is not clear that arbitrariness is always a moral ‘deal-breaker’.

On the face of it, it seems clear why arbitrary policies are ethically unacceptable. To be arbitrary is to lack basis in good reasons. An appeal against arbitrariness is an appeal to consistency, to the principle that like cases should be treated alike. Arbitrariness may therefore seem to cut against the very root of fairness.


Cross Post: Biased Algorithms: Here’s a More Radical Approach to Creating Fairness

Written by Dr Tom Douglas


Our lives are increasingly affected by algorithms. People may be denied loans, jobs, insurance policies, or even parole on the basis of risk scores that they produce.

Yet algorithms are notoriously prone to biases. For example, algorithms used to assess the risk of criminal recidivism often have higher error rates in minority ethnic groups. As ProPublica found, the COMPAS algorithm – widely used to predict re-offending in the US criminal justice system – had a higher false positive rate in black than in white people; black people were more likely to be wrongly predicted to re-offend.



Can We Trust Research in Science and Medicine?

By Brian D. Earp  (@briandavidearp)

Readers of the Practical Ethics Blog might be interested in this series of short videos in which I discuss some of the major ongoing problems with research ethics and publication integrity in science and medicine. How much of the published literature is trustworthy? Why is peer review such a poor quality control mechanism? How can we judge whether someone is really an expert in a scientific area? What happens when empirical research gets polarized? Most of these are short – just a few minutes. Links below:

Why most published research probably is false

The politicization of science and the problem of expertise

Science’s publication bias problem – why negative results are important

Getting beyond accusations of being either “pro-science” or “anti-science”

Are we all scientific experts now? When to be skeptical about scientific claims, and when to defer to experts

Predatory open access publishers and why peer review is broken

The future of scientific peer review

Sloppy science going on at the CDC and WHO

Dogmas in science – how do they form?

Please note: this post will be cross-published with the Journal of Medical Ethics Blog.

Cross Post: Liberal or conservative? Most of our beliefs shift around

Written by Prof Neil Levy,

Senior Research Fellow, Uehiro Centre for Practical Ethics, University of Oxford

This article was originally published on The Conversation


One common reaction to the election of Donald Trump (and perhaps to a lesser extent, the Brexit vote) among liberals like me is an expression of dismay that some of our fellow citizens are more racist and more sexist than we had dreamed. It seems many were prepared, if not to support openly racist comments and sexist actions, then at least to overlook them. It looks as though battles we thought we had won, having to do with a recognition of a basic kind of equality, need to be fought all over again. Many have concluded that they were never won at all; people were just waiting for a favourable climate to express the racism and sexism they held hidden.

Cross Post: What do sugar and climate change have in common? Misplaced scepticism of the science

Written by Professor Neil Levy, Senior Research Fellow, Uehiro Centre for Practical Ethics, University of Oxford

This article was originally published on The Conversation


Why do we think that climate sceptics are irrational? A major reason is that almost none of them have any genuine expertise in climate science (most have no scientific expertise at all), yet they’re confident that they know better than the scientists. Science is hard. Seeing patterns in noisy data requires statistical expertise, for instance. Climate data is very noisy: we shouldn’t rely on common sense to analyse it. We are instead forced to use the assessment of experts.

Response to Fergus Peace

Author: Neil Levy, Leverhulme Visiting Professor

Podcasts of Prof Levy’s Leverhulme Lectures can be found here:


Fergus Peace’s responses to my lectures are interesting and challenging. As he notes, in my lectures I focused on two questions:

(1) Are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?

(2) When those attitudes cause actions which seem appropriately to be characterized as racist (sexist, homophobic…), are we morally responsible for these actions (more precisely, for the fact that they can be classified in these morally laden terms)?

He suggests that these questions simply are not important ones to ask. Getting clear on how we ought to respond to implicit biases (what steps we ought to take to mitigate their effects or to eliminate them) matters, but asking whether a certain label attaches to us does not. Nor does it matter whether we are morally responsible for the actions these attitudes cause.

The first challenge seems to me to be a good one. I will discuss it after I have discussed the challenge concerning our moral responsibility, which seems to me very much weaker.


Why it matters if people are racist: A Response to Neil Levy’s Leverhulme Lectures

Author: Fergus Peace, BPhil student, University of Oxford

Podcasts of Prof. Levy’s Leverhulme lectures are available here:


It’s only a little more than forty years ago that George Wallace won the contest for Governor of Alabama by running ads with slogans like “Wake up Alabama! Blacks vow to take over Alabama” and “Do you want the black bloc electing your governor?” That year, 1970, 50% of people surveyed in the American South said they would never – under any circumstances – vote for a black President. By 2012, that number was down to 8%, and it’s hard to deny that open, avowed racism has been in steep decline for most of the last forty years. But even as people’s overt commitment to racism declines, experiments still show that black candidates are less likely to be given job interviews than equally qualified white candidates; African-Americans are still disproportionately likely to be imprisoned, or shot by police.

So what’s going on? That is the motivating puzzle of Professor Neil Levy’s Leverhulme Lectures, and his answer centres on an increasingly well-known but still very disturbing psychological phenomenon: implicit bias. There are a range of tests which have uncovered evidence of implicit negative attitudes held – by a majority of white Americans, but a sizeable number of black Americans too – against black people. Harvard University’s ‘Project Implicit’ has a series of Implicit Association Tests (IATs); Keith Payne, among others, has developed tests of what he calls the Affect Misattribution Procedure (AMP). IATs ask us to sort faces and words according to their race and ‘valence’, and we find that task much easier when we have to associate black faces with negative words than we do otherwise. Tests of the AMP ask subjects to rate the pleasantness of an image which is entirely meaningless to them – a Chinese character, for people who don’t speak Chinese – and find that they rate it less pleasant if they’re shown an image of a black face immediately beforehand.

There’s no doubt these results are unsettling. (If you want to do an IAT online, as you should, you have to agree to receiving results you might disagree or be uncomfortable with before you proceed.) And they’re not just subconscious attitudes which are uncomfortable but insignificant; implicit bias as measured by these various tests is correlated with being less likely to vote for Barack Obama, and more likely to blame the black community for violence in protests against police brutality. Tests in virtual shooting ranges also reveal that it correlates with being more likely to shoot unarmed black men when given the task of shooting only those carrying weapons. Implicit biases certainly seem to cause, at least partly, racist actions and patterns of behaviour, like being quicker to shoot at unarmed black people and less likely to invite them for job interviews.

Professor Levy’s lectures grappled with two questions about these attitudes: first, do they make you a racist; and second, are you morally responsible for actions caused by your implicit biases? If you, like me, abhor racism and make that abhorrence at least some part of your political and social identity, but nonetheless come away with a “moderate automatic preference for European … compared to African” on the race IAT, then are you – protestations to the contrary – a racist? His answer to this question in the first lecture, based on the current state of conceptual investigation of what racism is and empirical evidence about the character of implicit biases, was a qualified no: they don’t clearly count as beliefs, or even as feelings, in a way that could let us confidently call people racist just because they possess them.

The second question is similarly complex. When interviewers prefer white applicants over equally qualified black ones, due to their implicit attitudes, are they responsible for the racist character of that action? Levy focused largely on the ‘control theory’ of moral responsibility, which says that you’re responsible for an action only if you exercise sufficient control over it. Levy’s answer to this question is a pretty clear no: implicit attitudes don’t have the right sort of attributes (in particular, reliable responsiveness to reasons and evidence) to count as giving you control over the actions they cause.

I find it very hard to disagree with the core of Professor Levy’s arguments on his two questions. The points I want to make in response come from a different direction, because after listening to the two lectures I’m not convinced that these are the important questions to be asking about implicit bias.


Cultural bias and the evaluation of medical evidence: An update on the AAP

By Brian D. Earp

Follow Brian on Twitter by clicking here.


Since my article on the American Academy of Pediatrics’ recent change in policy regarding infant male circumcision was posted back in August of 2012, some interesting developments have come about. Two major critiques of the AAP documents were published in leading international journals, one in the Journal of Medical Ethics, and a second in the AAP’s very own Pediatrics. In the second of these, 38 distinguished pediatricians, pediatric surgeons, urologists, medical ethicists, and heads of hospital boards and children’s health societies throughout Europe and Canada argued that there is: “Cultural Bias in the AAP’s 2012 Technical Report and Policy Statement on Male Circumcision.”

The AAP took the time to respond to this possibility in a formal reply, also published in Pediatrics earlier this year. Rather than thoughtfully addressing the specific charge of cultural bias, however, the AAP elected to boomerang the criticism, implying that their critics were themselves biased, only against circumcision. To address this interesting allegation, I have updated my original blog post. Interested readers can click here to see my analysis.

Finally, please note that articles from the Journal of Medical Ethics special issue on circumcision are (at long last) beginning to appear online. The print issue will follow shortly. Also be sure to see this recent critique of the AAP in a thoughtful book by JME contributor and medical historian Dr. Robert Darby, entitled: “The Sorcerer’s Apprentice: Why Can’t the US Stop Circumcising Boys?”

— BDE 

Flu researchers impartially decide dangerous flu research is safe

Flu researchers have looked deeply at their own field, and decided that everything they were doing is all fine. Where the potentially hideously dangerous H5N1 bird-flu virus is concerned,

They said that the benefits of the research in preventing and dealing with a future flu pandemic outweigh the risks of an accidental leak of the mutant virus from a laboratory or the deliberate attempt to create deadly strains of flu by terrorists or rogue governments.

Outside scientists, by contrast, were of the opinion that:

[…] if airborne transmission became possible it would lead to a deadly flu pandemic killing millions of people because most of the individuals who are known to have been infected with H5N1 die from the virus.

and even other virologists claim:

The risks are clear for all to see and the benefits are qualitative, and that’s rather weak. Civil scientists are not here to increase the risk from microbes. We are not here to make the microbial world more dangerous.

It’s quite simple here. The flu researchers are not evil people, and they certainly believe they’re doing the right thing. But it is blatantly clear that people inside their own research community are unavoidably biased in assessing the risks of their own research.

When you think you’re doing the right thing but all outsiders are screaming for you to stop, that is the moment to step outside your own self-assessment, stop what you’re doing, and think deeply before continuing.