Invited Guest Posts

Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first is a core deep machine learning algorithm. These black-box algorithms may be more accurate than human judgment or interpretable algorithms, but they are notoriously opaque about the basis on which a decision was made. The second is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms of this kind are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information in the image to the features that distinguish the bird. The justification is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.

Possibly, it might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which an algorithm deemed asthmatics less at risk of dying from pneumonia; as a result, it recommended less aggressive treatment for asthmatics who contracted pneumonia. The key mistake the primary algorithm made was failing to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the black box gives an implausible recommendation or prediction, we could hope to do better than by relying on the black box alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad justifications. The thought here is that the secondary algorithm generating a bad justification is good evidence that the primary AI has misfired.

The worry here is that our existing medical knowledge is notoriously incomplete in places. We should expect cases where the optimal decision vis-à-vis patient welfare lacks a plausible medical justification, at least on our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the black box whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to less optimal decisions. Below are three observations that we might make about this type of justifying AI.
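To make the setup concrete, here is a minimal sketch in Python of the two-part architecture described above. The class names, the toy decision rule and the plausibility check are all illustrative assumptions rather than any real system; the point is only that the secondary model’s rationale acts as a screen on the primary model’s output.

```python
# A minimal sketch of the two-part "justifying AI" described above.
# All names and toy rules are illustrative assumptions, not a real system.

class BlackBoxClassifier:
    """Stands in for the opaque, high-accuracy primary model."""
    def predict(self, features):
        # A toy rule standing in for a deep model's output.
        if features.get("neck") == "long white" and features.get("eyes") == "red":
            return "Western Grebe"
        return "Laysan Albatross"

class PostHocJustifier:
    """Stands in for the secondary model that generates a rationale."""
    def explain(self, features, label):
        cues = ", ".join(f"{k}: {v}" for k, v in features.items())
        return f"{label}, because the bird has {cues}."

def screened_decision(features, primary, justifier, plausible):
    """Accept the primary model's output only if its post-hoc
    justification passes a plausibility check; otherwise flag it."""
    label = primary.predict(features)
    rationale = justifier.explain(features, label)
    if plausible(rationale):
        return label, rationale
    return None, rationale  # defer to human (clinician) judgment

if __name__ == "__main__":
    bird = {"neck": "long white", "beak": "pointy yellow", "eyes": "red"}
    label, why = screened_decision(
        bird, BlackBoxClassifier(), PostHocJustifier(),
        plausible=lambda r: "long white" in r,  # stand-in for expert review
    )
    print(label, "-", why)
```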

Continue reading

Guest Post: Pandemic Ethics. Social Justice Demands Mass Surveillance: Social Distancing, Contact Tracing and COVID-19

Written by: Bryce Goodman

The spread of COVID-19 presents a number of ethical dilemmas. Should ventilators only be used to treat those who are most likely to recover from infection? How should violators of quarantine be punished? What is the right balance between protecting individual privacy and reducing the virus’ spread?

Most of the mitigation strategies pursued today (including in the US and UK) rely primarily on lock-downs or “social distancing” and not enough on contact tracing – the use of location data to identify whom an infected individual may have come into contact with and infected. This balance prioritizes individual privacy above public health. But contact tracing will not only protect our overall welfare. It can also help address the disproportionately negative impact social distancing is having on our least well-off.
Contact tracing “can achieve epidemic control if used by enough people,” says a recent paper published in Science. “By targeting recommendations to only those at risk, epidemics could be contained without need for mass quarantines (‘lock-downs’) that are harmful to society.” Once someone has tested positive for a virus, we can use that person’s location history to deduce whom they may have “contacted” and infected. For example, we might find that 20 people were in close proximity and 15 have now tested positive for the virus. Contact tracing would allow us to identify and test the other 5 before they spread the virus further.
The success of contact tracing will largely depend on an accurate and widespread testing program. Evidence thus far suggests that countries with extensive testing and contact tracing are able to avoid or relax social distancing restrictions in favor of more targeted quarantines.
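As a rough illustration of the contact-identification step, here is a minimal Python sketch. The data layout (timestamped coordinates per person) and the distance and time thresholds are assumptions for the example; deployed systems typically rely on richer signals such as Bluetooth proximity.

```python
# A minimal sketch of contact identification from location histories.
# Data layout ((time_s, x, y) readings) and thresholds are illustrative.
from math import hypot

def contacts(histories, index_case, max_dist=2.0, max_dt=600):
    """Return everyone who came within max_dist metres of the index
    case within max_dt seconds of one of the case's readings."""
    exposed = set()
    case_readings = histories[index_case]
    for person, readings in histories.items():
        if person == index_case:
            continue
        for t1, x1, y1 in case_readings:
            for t2, x2, y2 in readings:
                if abs(t1 - t2) <= max_dt and hypot(x1 - x2, y1 - y2) <= max_dist:
                    exposed.add(person)
    return exposed

histories = {
    "patient_0": [(0, 0.0, 0.0), (300, 1.0, 0.0)],  # tested positive
    "alice":     [(100, 0.5, 0.5)],                  # nearby at an overlapping time
    "bob":       [(100, 50.0, 50.0)],                # far away
}
print(contacts(histories, "patient_0"))  # {'alice'} -> candidates for testing
```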

Continue reading

A Proposal for Addressing Language Inequality in Academia

Written by Anri Asagumo

Oxford Uehiro/St Cross Scholar

Although more and more people see the importance of diversity in academia, language diversity is one type of diversity that seems to be diminishing: English is increasingly dominant in both international conferences and journals. I would like to argue that people who are born and raised in an English-speaking country should be required to acquire a second language to the level at which they can write a rudimentary paper and give a presentation in that language, in order to apply for international conferences and submit papers to international journals. The purpose of this requirement would be to address the significant inequality between native English speakers and others. I focus on academia here, but ideally the same requirement would apply to the business world, too. Continue reading

Cross Post: Re: Nudges in a Post-truth World 

Guest Post: Nathan Hodson

This article originally appeared on the Journal of Medical Ethics Blog 

In a recent article in the Journal of Medical Ethics, Neil Levy has developed a concept of “nudges to reason,” offering a new tool for those trying to reconcile medical ethics with the application of behavioural psychology research – a practice known as nudging. Very roughly, nudging means adjusting the way choices are presented to the public in order to promote certain decisions.

As Levy notes, some people are concerned that nudges present a threat to autonomy. Attempts at reconciling nudges with ethics, then, are important: nudging in healthcare is here to stay, and we need to ensure it is used in ways that respect autonomy (and other moral principles). Continue reading

Guest Post: Crispr Craze and Crispr Cares

Written by Robert Ranisch, Institute for Ethics and History of Medicine, University of Tuebingen

@RobRanisch

Newly discovered tools for the targeted editing of the genome have been generating talk of a revolution in gene technology for the last five years. The CRISPR/Cas9 method draws most of the attention by enabling simpler, more precise, cheaper and quicker modification of genes to a hitherto unknown degree. Since these so-called molecular scissors can be set to work in just about all organisms, hardly a week goes by without headlines regarding the latest scientific research: genome editing could keep vegetables looking fresh, eliminate malaria from disease-carrying mosquitoes, replace antibiotics or bring mammoths back to life.

Naturally, the greatest hopes rest on its potential for various medical applications. Despite the media hype, there are no ready-to-use CRISPR gene therapies. However, the first clinical studies are under way in China and have been approved in the USA. Future therapies might make it possible to eradicate hereditary illnesses, conquer cancer, or even cure HIV/AIDS. Just this May, results from experiments on mice gave reason for this hope. In a similar vein, germline intervention is now being reconsidered as a realistic option, although it had long been considered taboo because its (side) effects are passed down the generations. Continue reading

Invited Guest Post: Healthcare professionals need empathy too!

Written by Angeliki Kerasidou & Ruth Horn, The Ethox Centre, Nuffield Department of Population Health, University of Oxford


Recently, a number of media reports and personal testimonies have drawn attention to the intense physical and emotional stress to which doctors and nurses working in the NHS are exposed on a daily basis. Medical professionals are increasingly reporting feelings of exhaustion, depression, and even suicidal thoughts. Long working hours, decreasing numbers of staff, budget cuts and the lack of time to address patients’ needs are mentioned as some of the contributing factors (Campbell, 2015; The Guardian, 2016). Such factors have been linked with loss of empathy towards patients and, in some cases, with gross failures in their care (Francis, 2013). Continue reading

Cross Post: Women’s-Only Swimming Hours: Accommodation Is Not Discrimination

Written by Miriam Rosenbaum and Sajda Ouachtouki 

This article was originally published in First Things.

Women’s-only hours at swimming pools are nothing new. Many secular institutions have long hosted separate swim hours for women and girls who, for reasons of faith or personal preference, desire to swim without the presence of men. The list includes Barnard College, Harvard University, Yale University, and swim clubs, JCCs, and YMCAs across the country. Recently, women’s-only swimming hours have become a topic of debate, especially in New York, where promoters of liberal secularist ideology (including the editorial page of the New York Times) are campaigning against women’s-only hours at a public swimming pool on Bedford Avenue in Brooklyn. They claim that women’s-only swimming hours, even for a small portion of the day, must be abolished in the interest of “general fairness and equal access” and to avoid “discrimination” in favor of certain religions. Continue reading

Guest Post: Scientists aren’t always the best people to evaluate the risks of scientific research

Written by Simon Beard, Research Associate at the Centre for the Study of Existential Risk, University of Cambridge

How can we study the pathogens that will be responsible for future global pandemics before they have happened? One way is to find likely candidates currently in the wild and genetically engineer them so that they gain the traits that will be necessary for them to cause a global pandemic.

Such ‘Gain of Function’ research that produces ‘Potential Pandemic Pathogens’ (GOF-PPP for short) is highly controversial. Following some initial trials looking at what kinds of mutations were needed to make avian influenza transmissible in ferrets, a moratorium has been imposed on further research whilst the risks and benefits associated with it are investigated. Continue reading

A Second Response to Professor Neil Levy’s Leverhulme Lectures.

Written by Richard Ngo, an undergraduate student in Computer Science and Philosophy at the University of Oxford.

Neil Levy’s Leverhulme Lectures start from the admirable position of integrating psychological results and philosophical arguments, with the goal of answering two questions:

(1) are we (those of us with egalitarian explicit beliefs but conflicting implicit attitudes) racist?

(2) when those implicit attitudes cause actions which seem appropriately to be characterised as racist (sexist, homophobic…), are we morally responsible for these actions? Continue reading

Why it matters if people are racist: A Response to Neil Levy’s Leverhulme Lectures

Author: Fergus Peace, BPhil student, University of Oxford

Podcasts of Prof. Levy’s Leverhulme lectures are available here:

http://media.philosophy.ox.ac.uk/uehiro/HT16_LL_LEVY1.mp3

and http://media.philosophy.ox.ac.uk/uehiro/HT16_LL_LEVY2.mp3

It was only a little more than forty years ago that George Wallace won the contest for Governor of Alabama by running ads with slogans like “Wake up Alabama! Blacks vow to take over Alabama” and “Do you want the black bloc electing your governor?” That year, 1970, 50% of people surveyed in the American South said they would never – under any circumstances – vote for a black President. By 2012, that number was down to 8%, and it’s hard to deny that open, avowed racism has been in steep decline for most of the last forty years. But even as people’s overt commitment to racism declines, experiments still show that black candidates are less likely to be given job interviews than equally qualified white candidates; African-Americans are still disproportionately likely to be imprisoned, or shot by police.

So what’s going on? That is the motivating puzzle of Professor Neil Levy’s Leverhulme Lectures, and his answer centres on an increasingly well-known but still very disturbing psychological phenomenon: implicit bias. There are a range of tests which have uncovered evidence of implicit negative attitudes held – by a majority of white Americans, but a sizeable number of black Americans too – against black people. Harvard University’s ‘Project Implicit’ has a series of Implicit Association Tests (IATs); Keith Payne, among others, has developed tests of what he calls the Affect Misattribution Procedure (AMP). IATs ask us to sort faces and words according to their race and ‘valence’, and we find that task much easier when we have to associate black faces with negative words than we do otherwise. Tests of the AMP ask subjects to rate the pleasantness of an image which is entirely meaningless to them – a Chinese character, for people who don’t speak Chinese – and find that they rate it less pleasant if they’re shown an image of a black face immediately beforehand.
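For readers curious about how such tests are summarised, here is a toy Python illustration of the underlying logic: a systematic latency gap between ‘incongruent’ and ‘congruent’ pairings is condensed into a single bias score. The numbers and the simplified formula are assumptions for illustration, not Project Implicit’s actual scoring procedure, which involves additional steps such as trial filtering and error penalties.

```python
# Toy illustration of IAT-style scoring: slower responses on "incongruent"
# pairings than on "congruent" ones yield a positive score. This simplified
# D-like measure is an assumption for illustration, not Project Implicit's
# actual algorithm (which adds trial filtering, error handling, block structure).
from statistics import mean, stdev

def d_like_score(congruent_ms, incongruent_ms):
    """Mean latency difference divided by the pooled standard deviation."""
    pooled = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled

# Hypothetical reaction times in milliseconds.
congruent = [620, 580, 640, 605, 590]    # e.g. black face sorted with negative word
incongruent = [760, 720, 810, 745, 700]  # e.g. black face sorted with positive word
print(round(d_like_score(congruent, incongruent), 2))  # positive => implicit bias
```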

There’s no doubt these results are unsettling. (If you want to do an IAT online, as you should, you have to agree to receiving results you might disagree with or be uncomfortable with before you proceed.) And they’re not just subconscious attitudes which are uncomfortable but insignificant; implicit bias as measured by these various tests is correlated with being less likely to vote for Barack Obama, and more likely to blame the black community for violence in protests against police brutality. Tests in virtual shooting ranges also reveal that it correlates with being more likely to shoot unarmed black men when given the task of shooting only those carrying weapons. Implicit biases certainly seem to cause, at least partly, racist actions and patterns of behaviour, like being quicker to shoot at unarmed black people and less likely to invite them for job interviews.

Professor Levy’s lectures grappled with two questions about these attitudes: first, do they make you a racist; and second, are you morally responsible for actions caused by your implicit biases? If you, like me, abhor racism and make that abhorrence at least some part of your political and social identity, but nonetheless come away with a “moderate automatic preference for European … compared to African” on the race IAT, then are you – protestations to the contrary – a racist? His answer to this question in the first lecture, based on the current state of conceptual investigation of what racism is and empirical evidence about the character of implicit biases, was a qualified no: they don’t clearly count as beliefs, or even as feelings, in a way that could let us confidently call people racist just because they possess them.

The second question is similarly complex. When interviewers prefer white applicants over equally qualified black ones, due to their implicit attitudes, are they responsible for the racist character of that action? Levy focused largely on the ‘control theory’ of moral responsibility, which says that you’re responsible for an action only if you exercise sufficient control over it. Levy’s answer to this question is a pretty clear no: implicit attitudes don’t have the right sort of attributes (in particular, reliable responsiveness to reasons and evidence) to count as giving you control over the actions they cause.

I find it very hard to disagree with the core of Professor Levy’s arguments on his two questions. The points I want to make in response come from a different direction, because after listening to the two lectures I’m not convinced that these are the important questions to be asking about implicit bias.

Continue reading
