Ethics

2022 Uehiro Lectures: Ethics and AI, Peter Railton. In Person and Hybrid

Ethics and Artificial Intelligence. Professor Peter Railton, University of Michigan. May 9, 16, and 23 (in person and hybrid; booking links below). Abstract: Recent, dramatic advancements in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as…

Read More »2022 Uehiro Lectures: Ethics and AI, Peter Railton. In Person and Hybrid

AI and the Transition Paradox

When Will AI Exceed Human Performance? Evidence from AI Experts. https://arxiv.org/abs/1705.08807

by Aksel Braanen Sterri

The most important development in human history will take place not too far in the future. Artificial intelligence, or AI for short, will become better (and cheaper) than humans at most tasks. This will generate enormous wealth that can be used to meet human needs.

However, since most humans will not be able to compete with AI, there will be little demand for ordinary people’s labour-power. The immediate effect of a world without work is that people will lose their primary source of income and whatever meaning, mastery, sense of belonging and status they get from their work. Our collective challenge is to find meaning, and other reliable ways of getting what we need, in this new world.

Read More »AI and the Transition Paradox

Rethinking ‘Higher’ and ‘Lower’ Pleasures

by Ben Davies

One of John Stuart Mill’s best-known claims concerns the distinction between higher and lower pleasures. Higher pleasures—which are, roughly, ‘mental’ pleasures—are, says Mill, always preferable to lower pleasures—the pleasures of the body.

In Mill’s rendering, competent judges—those who have experience of both higher and lower pleasures—will choose a higher pleasure over a lower pleasure “even though knowing it to be attended with a greater amount of discontent” and “would not resign it for any quantity of the other [lower] pleasure which their nature is capable of”.

There are two ways we might interpret this claim:

Read More »Rethinking ‘Higher’ and ‘Lower’ Pleasures

New Publication: ‘Overriding Adolescent Refusals of Treatment’

Written by Anthony Skelton, Lisa Forsberg, and Isra Black

Consider the following two cases:

Cynthia’s blood transfusion. Cynthia is 16 years of age. She is hit by a car on her way to school. She is rushed to hospital. She sustains serious, life-threatening injuries and loses a lot of blood. Her physicians conclude that she needs a blood transfusion in order to survive. Physicians ask for her consent to this course of treatment. Cynthia is intelligent and thoughtful. She considers, understands and appreciates her medical options. She is deemed to possess the capacity to decide on her medical treatment. She consents to the blood transfusion.

Nathan’s blood transfusion. Nathan is 16 years of age. He has Crohn’s disease. He is admitted to hospital with lower gastrointestinal bleeding. According to the physicians in charge of his care, the bleeding poses a significant threat to his health and to his life. His physicians conclude that a blood transfusion is his best medical option. Nathan is intelligent and thoughtful. He considers, understands and appreciates his medical options. He is deemed to possess the capacity to decide on his medical treatment. He refuses the blood transfusion.

Under English law, Cynthia’s consent has the power to permit the blood transfusion offered by her physicians. Her consent is considered to be normatively (and legally) determinative. However, Nathan’s refusal is not normatively (or legally) determinative. Nathan’s refusal can be overridden by the consent of either a parent or a court to the blood transfusion. These parties share (with Nathan) the power to consent to his treatment and thereby make it lawful for his physicians to provide it.

Read More »New Publication: ‘Overriding Adolescent Refusals of Treatment’

Robert Audi on Moral Creditworthiness and Moral Obligation

by Roger Crisp

On Tuesday 8 March, Professor Robert Audi, John A. O’Brien Professor of Philosophy at the University of Notre Dame, gave a Public Lecture for the Oxford Uehiro Centre for Practical Ethics. The event was held in the Lecture Room at the Faculty of Philosophy, University of Oxford and was hybrid, the audience numbering around 60 overall.

Read More »Robert Audi on Moral Creditworthiness and Moral Obligation

The Aliens Are Coming

By Charles Foster

It’s said that 2022 is going to be a bumper year for UFO revelations. Secret archives are going to be opened and the skies are going to be probed as never before for signs of extraterrestrial life.

This afternoon we might be presented with irrefutable evidence not just of life beyond the Earth, but of intelligences comparable in power and subtlety to our own. What then? Would it change our view of ourselves and the universe we inhabit? If so, how? Would it change our behaviour? If so, how?

Much would depend, no doubt, on what we knew or supposed about the nature and intentions of the alien intelligences. If they seemed hostile, intent on colonising Planet Earth and enslaving us, our reactions would be fairly predictable. But what if the reports simply disclosed the existence of other intelligences, together with the fact that those intelligences knew about and were interested in us?

Read More »The Aliens Are Coming

Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first part is a core deep machine learning algorithm. Such blackbox algorithms may be more accurate than human judgment or interpretable algorithms, but they are notoriously opaque about the basis on which their decisions are made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s decision. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information provided in the image to the features that distinguish the bird. The justification is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird, such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.
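
To make the division of labour concrete, here is a toy sketch of the two-part design, using the bird example. Everything in it (the feature lists, the names black_box_predict and post_hoc_justify) is invented for illustration and drawn from no real system; the “black box” here is, of course, transparent toy code standing in for a trained neural network.

```python
# A toy, self-contained sketch of the two-part design described above:
# an opaque primary classifier plus a post-hoc justifier. All names,
# features and data are invented for illustration.

# Distinguishing features per class; used only by the justifier.
CLASS_FEATURES = {
    "Western Grebe": {"long white neck", "pointy yellow beak", "red eyes"},
    "Laysan Albatross": {"hooked grey beak", "dark eye patch", "white head"},
}

def black_box_predict(observed):
    """Primary model: callers see only the label, not the basis of the decision."""
    scores = {cls: len(observed & feats) for cls, feats in CLASS_FEATURES.items()}
    return max(scores, key=scores.get)

def post_hoc_justify(observed, label):
    """Secondary model: links features seen in the input to the features
    that distinguish the predicted class from similar classes."""
    supporting = sorted(observed & CLASS_FEATURES[label])
    rivals = [cls for cls in CLASS_FEATURES if cls != label]
    contrast = f" (and not a {rivals[0]})" if rivals else ""
    return f"This is a {label} because the bird has {', '.join(supporting)}{contrast}."

observed = {"long white neck", "pointy yellow beak", "red eyes"}
label = black_box_predict(observed)
print(post_hoc_justify(observed, label))
# -> This is a Western Grebe because the bird has long white neck,
#    pointy yellow beak, red eyes (and not a Laysan Albatross).
```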

It might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which an algorithm deemed asthmatics less at risk of dying from pneumonia. As a result, it recommended less aggressive treatment for asthmatics who contracted pneumonia. The key mistake the primary algorithm made was that it failed to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the blackbox gives an implausible recommendation or prediction, we could hope to do better than by relying on the blackbox alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad justifications. The thought is that a bad justification from the secondary algorithm is good evidence that the primary AI has misfired.
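
As a sketch of what this intervention policy might look like, consider the following. It assumes, hypothetically, that a clinician or a checking routine can score a generated justification as plausible or not, and that a standard-of-care fallback exists; every name and parameter below is a placeholder rather than any real clinical system.

```python
# A minimal sketch of the intervention policy described above: defer to
# the primary AI when its justification looks good, intervene when it
# does not. All parameters are hypothetical placeholders.

def recommend(patient, black_box, justifier, is_plausible, fallback):
    decision = black_box(patient)
    justification = justifier(patient, decision)
    if is_plausible(justification):
        return decision, justification
    # A bad justification is treated as evidence that the primary AI
    # has misfired (as in the asthma/pneumonia confound): intervene.
    return fallback(patient), "overridden: implausible justification"

# Toy usage: a checker that rejects justifications treating asthma
# as a protective factor.
decision, why = recommend(
    patient={"asthma": True},
    black_box=lambda p: "less aggressive treatment",
    justifier=lambda p, d: "low risk because the patient has asthma",
    is_plausible=lambda j: "asthma" not in j,
    fallback=lambda p: "standard aggressive treatment",
)
print(decision, "|", why)
```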

The worry here is that our existing medical knowledge is notoriously incomplete in places. We should expect cases where the optimal decision vis-à-vis patient welfare lacks a plausible medical justification, at least by the lights of our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the blackbox whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to worse decisions. Below are three observations that we might make about this type of justifying AI.

Read More »Three Observations about Justifying AI

Your eyes will be discontinued: what are the long-term responsibilities for implants?

by Anders Sandberg

What do you do when your bionic eyes suddenly become unsupported and you go blind again? Eliza Strickland and Mark Harris have an excellent article in IEEE Spectrum about the problems caused when the bionics company Second Sight got into economic trouble. Patients with the company’s Argus II eyes found that upgrades could not be made and broken devices could not be replaced. What kind of responsibility does a company have for the continued function of devices that become part of people?

Read More »Your eyes will be discontinued: what are the long-term responsibilities for implants?

Should Vaccination Status Affect ICU Admission?

By Ben Davies and Joshua Parker

Intensive care units around the country are full, with a disproportionate number of patients who have not had a single COVID-19 vaccination. Doctors have been vocal in describing the emotional cost of caring for critically unwell patients suffering from the effects of a virus for which there is an effective vaccine. Indeed, one doctor has gone so far as to argue that the unvaccinated should contribute financially to their care. It is easy to understand doctors’ frustrations given the relentless pressures and difficult decisions they’ve had to face. In the face of very real dilemmas about how to allocate scarce ICU beds, some might wonder whether the NHS should adopt a policy of ‘no vaccine, no ICU bed’.

Read More »Should Vaccination Status Affect ICU Admission?

Impersonality and Non-identity: A Response to Bramble

by Roger Crisp

Consider the following case, from David Boonin:

Wilma. Wilma has decided to have a baby. She goes to her doctor for a checkup and the doctor tells her that…as things now stand, if she conceives, her child will have a disability. . . that clearly has a substantially negative impact on a person’s quality of life. . . [but is not] so serious as to render the child’s life worse than no life at all. . . .[But] Wilma can prevent this from happening. If she takes a tiny pill once a day for two months before conceiving, her child will be perfectly healthy. The pill is easy to take, has no side effects, and will be paid for by her health insurance. . . .Wilma decides that having to take a pill once a day for two months before conceiving is a bit too inconvenient and so chooses to throw the pills away and conceive at once. As a result of this choice, her child is born [with the disability].

Read More »Impersonality and Non-identity: A Response to Bramble