Owen Schaefer’s posts

Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first is a core deep machine learning algorithm. Such blackbox algorithms may be more accurate than human judgment or interpretable algorithms, but they are notoriously opaque about the basis on which their decisions are made. The second is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information in the image to the features that distinguish the bird. The justification is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird like the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.
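To make the two-part structure concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption on our part (the class names, methods and wiring are hypothetical, not any published system): a primary blackbox model produces a prediction, and a separate secondary model produces a natural-language rationale for that prediction after the fact.

```python
# Minimal sketch of a two-part "justifying AI" (hypothetical API, not a real system).
from dataclasses import dataclass


@dataclass
class Diagnosis:
    label: str         # the predicted class, e.g. a species or a medical condition
    confidence: float  # the blackbox model's confidence in that prediction


class PrimaryClassifier:
    """Stand-in for the opaque core model (e.g. a deep neural network)."""

    def predict(self, image) -> Diagnosis:
        # In reality: a forward pass through a deep network whose internal
        # basis for the decision is not human-readable.
        ...


class JustificationGenerator:
    """Stand-in for the secondary model that generates post-hoc rationales."""

    def justify(self, image, diagnosis: Diagnosis) -> str:
        # Links features detected in the input to the predicted label,
        # e.g. "because the bird has a long white neck, pointy yellow beak
        # and red eyes" -- fine-grained enough to rule out similar
        # alternatives such as the Laysan Albatross.
        ...


def justifying_ai(image, primary: PrimaryClassifier,
                  secondary: JustificationGenerator):
    """Run the blackbox, then attach a post-hoc justification to its output."""
    diagnosis = primary.predict(image)
    rationale = secondary.justify(image, diagnosis)
    return diagnosis, rationale
```

The key design point is that the rationale is generated *after* and *about* the primary model’s decision; it is not a transcript of how that decision was actually reached.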

It might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which a pneumonia-risk algorithm deemed asthmatics to be at lower risk of dying from pneumonia. As a result, it recommended less aggressive treatment for asthmatics who contracted pneumonia. The primary algorithm’s key mistake was failing to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the blackbox gives an implausible recommendation or prediction, we could hope to do better than by relying on the blackbox alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise a bad one. The thought is that the secondary algorithm generating a bad justification is good evidence that the primary AI has misfired.
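The override strategy described above can be stated as a one-rule sketch. This is our own schematic rendering of the idea, not part of any actual system; in particular, `physician_finds_plausible` is a placeholder we introduce for expert human review of the generated justification.

```python
def review_recommendation(diagnosis, rationale, physician_finds_plausible):
    """Sketch of the override logic: a bad justification from the
    secondary algorithm is treated as evidence that the primary
    blackbox has misfired, so its recommendation is set aside."""
    if physician_finds_plausible(rationale):
        return diagnosis  # defer to the blackbox's recommendation
    return None           # override: escalate to the clinician instead
```

Note that this rule discards the blackbox’s output precisely when its justification fails expert scrutiny, which is where the worry below gets its grip.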

The worry is that our existing medical knowledge is notoriously incomplete in places. It is to be expected that there will be cases where the optimal decision vis-à-vis patient welfare lacks a plausible medical justification, at least by the lights of our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the blackbox whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to less optimal decisions. Below are three observations that we might make about this type of justifying AI.

Vaccine Nationalism: Striking the balance

Written by Owen Schaefer and Julian Savulescu

This is an updated cross-post of an article published in MediCine.

On 2 February 2021, the Director-General of the World Health Organization, Dr Tedros Adhanom Ghebreyesus, issued a broadside against COVID-19 vaccine nationalism, calling it “morally indefensible” and “tantamount to medical malpractice at a global scale.” Rich countries representing 16% of the global population have snapped up 60% of the global supply of COVID-19 vaccines.[1] Meanwhile, India, which has vaccinated only 10% of its population, is facing a catastrophic COVID-19 surge.[2] And the COVAX facility – an international effort to get COVID-19 vaccines equitably distributed around the world – currently projects capacity to offer vaccines covering only about 3% of participating countries’ populations by mid-year.[3]

COVID-19 vaccine nationalism is not an exception to normal practice. In almost all matters, countries spend the vast majority of their budgets on local needs, and only a small fraction on foreign aid, even when the latter addresses much greater need. But the fact that this is normal or expected does not amount to a moral defense.

Here, we explore a question of practical ethics: to what extent may a country legitimately prioritize its own people over those in other countries when securing COVID-19 vaccines?

Plausibility and Same-Sex Marriage

In philosophical discussions, we bring up the notion of plausibility a lot. “That’s implausible” is a common form of objection, while the converse, “That’s plausible”, is a common way of offering cautious sympathy with an argument or claim. But what exactly do we mean when we claim something is plausible or implausible, and what implications do such claims have? This question was most recently prompted, for me, by a pair of blog posts by Justin Weinberg over at Daily Nous on same-sex marriage. In the posts and discussion, Weinberg appears sympathetic to an interesting pedagogical principle: instructors may legitimately exclude, discount or dismiss from discussion positions they take to be implausible.* Further, opposition to same-sex marriage is taken to be such an implausible position, and thus excludable/discountable/dismissable from classroom debate. Is this a legitimate line of thought? I’m inclined against it, and will try to explain why in this post.**