
Bioethics

The Homeric Power of Advance Directives

By Charles Foster

[Image: Ulysses and the Sirens: John William Waterhouse, 1891: National Gallery of Victoria, Melbourne]

We shouldn’t underestimate Homer’s hold on us. Whether or not we’ve ever read him, he created many of our ruling memes.

I don’t think it’s fanciful (though it might be ambitious) to suggest that he, and the whole heroic ethos, are partly responsible for our uncritical adoption of a model of autonomy which doesn’t do justice to the sort of creatures we really are. That’s a big claim. I can’t justify it here. But one manifestation of that adoption is our exaggerated respect for advance directives – declarations, made while one has capacity, about how one would like to be treated should one become incapacitous, which are binding if incapacity supervenes and (in English law) the declaration is ‘valid and applicable’.[1]

I suspect that some of this respect comes from the earliest and most colourful advance directive story ever: Odysseus and the Sirens.

Cross Post: Tech firms are making computer chips with human cells – is it ethical?

Written by Julian Savulescu, Chris Gyngell, Tsutomu Sawai
Cross-posted with The Conversation

[Image: Shutterstock]

Julian Savulescu, University of Oxford; Christopher Gyngell, The University of Melbourne, and Tsutomu Sawai, Hiroshima University

The year is 2030 and we are at the world’s largest tech conference, CES in Las Vegas. A crowd is gathered to watch a big tech company unveil its new smartphone. The CEO comes to the stage and announces the Nyooro, containing the most powerful processor ever seen in a phone. The Nyooro can perform an astonishing quintillion operations per second, which is a thousand times faster than smartphone models in 2020. It is also ten times more energy-efficient with a battery that lasts for ten days.

A journalist asks: “What technological advance allowed such huge performance gains?” The chief executive replies: “We created a new biological chip using lab-grown human neurons. These biological chips are better than silicon chips because they can change their internal structure, adapting to a user’s usage pattern and leading to huge gains in efficiency.”

Another journalist asks: “Aren’t there ethical concerns about computers that use human brain matter?”

Although the name and scenario are fictional, this is a question we have to confront now. In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because both brains and computer chips share a common language: electricity.


Exercise, Population Health and Paternalism

Written by Rebecca Brown

 

The NHS is emphatic in its confidence that exercise is highly beneficial for health. From their page on the “Benefits of exercise” come statements like:

“Step right up! It’s the miracle cure we’ve all been waiting for”

“This is no snake oil. Whatever your age, there’s strong scientific evidence that being physically active can help you lead a healthier and happier life”

“Given the overwhelming evidence, it seems obvious that we should all be physically active. It’s essential if you want to live a healthy and fulfilling life into old age”.

Setting aside any queries about the causal direction of the relationship between exercise and good health, or the precise effect size of the benefits exercise offers, it at least seems that the NHS is convinced that it is a remarkably potent health promotion tool.

The Aliens Are Coming

[Image: UFO against the sky. Free public domain CC0 photo]

By Charles Foster

It’s said that 2022 is going to be a bumper year for UFO revelations. Secret archives are going to be opened and the skies are going to be probed as never before for signs of extraterrestrial life.

This afternoon we might be presented with irrefutable evidence not just of life beyond the Earth, but of intelligences comparable in power and subtlety to our own. What then? Would it change our view of ourselves and the universe we inhabit? If so, how? Would it change our behaviour? If so, how?

Much would depend, no doubt, on what we knew or supposed about the nature and intentions of the alien intelligences. If they seemed hostile, intent on colonising Planet Earth and enslaving us, our reactions would be fairly predictable. But what if the reports simply disclosed the existence of other intelligences, together with the fact that those intelligences knew about and were interested in us?

Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first is a core deep machine learning algorithm. These black-box algorithms may be more accurate than human judgement or than interpretable algorithms, but they are notoriously opaque: they cannot tell us on what basis a decision was made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the image or a definition of the bird in question: it links the information in the image to the features that distinguish the bird. It is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.
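To make the two-part structure concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the function names, the hand-written rules and the bird features are stand-ins for a trained deep network and a trained justification generator, used only to show how the two parts fit together.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def primary_blackbox(features: dict) -> Prediction:
    """Stand-in for the opaque core model: maps input features to a label.
    A real system would run a deep neural network here."""
    if features.get("neck") == "long white" and features.get("beak") == "pointy yellow":
        return Prediction("Western Grebe", 0.97)
    return Prediction("Laysan Albatross", 0.60)

def secondary_justifier(features: dict, prediction: Prediction) -> str:
    """Stand-in for the post-hoc justifier: links the input's features
    to the predicted label, contrasting it with a confusable class."""
    evidence = ", ".join(f"{k}: {v}" for k, v in features.items())
    return (f"Classified as {prediction.label} because {evidence}; "
            f"these features distinguish it from similar birds "
            f"such as the Laysan Albatross.")

bird = {"neck": "long white", "beak": "pointy yellow", "eyes": "red"}
pred = primary_blackbox(bird)
print(f"{pred.label} ({pred.confidence:.2f}): {secondary_justifier(bird, pred)}")
```

Note that the justifier sees only inputs and outputs; it never opens the black box, which is why its justifications are post hoc rather than a report of the model’s actual reasoning.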

Possibly, it might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which a model deemed asthmatics to be at lower risk of dying from pneumonia. As a result, it recommended less aggressive treatments for asthmatics who contracted pneumonia. The key mistake the primary algorithm made was that it failed to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the black box gives an implausible recommendation or prediction, we could hope to do better than relying on the black box alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise a bad one. The thought is that the secondary algorithm generating a bad justification is good evidence that the primary AI has misfired.
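The decision rule this suggests can be sketched as follows. Again, every name here is hypothetical, and the plausibility check is a toy stand-in for what would, in practice, be a clinician’s judgement of the generated justification.

```python
def clinician_finds_plausible(justification: str) -> bool:
    """Toy stand-in for expert review of the generated justification.
    In practice this is a physician reading the justification."""
    # A Caruana-style misfire: a justification claiming asthma is
    # protective against pneumonia should be rejected as implausible.
    return "asthma lowers pneumonia risk" not in justification

def final_recommendation(blackbox_advice: str, justification: str,
                         standard_protocol: str) -> str:
    """Follow the black box only when its justification passes review;
    otherwise fall back to the standard clinical protocol."""
    if clinician_finds_plausible(justification):
        return blackbox_advice
    return standard_protocol

print(final_recommendation(
    blackbox_advice="less aggressive treatment",
    justification="asthma lowers pneumonia risk, so treat less aggressively",
    standard_protocol="aggressive treatment per pneumonia guidelines",
))  # -> "aggressive treatment per pneumonia guidelines"
```

The next paragraph explains why this rule is not costless: sometimes the black box is right even when no plausible justification is available.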

The worry here is that our existing medical knowledge is notoriously incomplete in places. It is to be expected that there will be cases where the optimal decision vis-à-vis patient welfare lacks a plausible medical justification, at least on our current medical knowledge. For instance, lithium is used as a mood stabilizer, but the reason why it works is poorly understood. This means that ignoring the black box whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to less optimal decisions. Below are three observations that we might make about this type of justifying AI.


Your eyes will be discontinued: what are the long-term responsibilities for implants?

by Anders Sandberg

What do you do when your bionic eyes suddenly become unsupported and you go blind again? Eliza Strickland and Mark Harris have an excellent article in IEEE Spectrum about the problems caused when the bionics company Second Sight got into economic trouble. Patients with its Argus II eyes found that upgrades could not be made and broken devices could not be replaced. What kind of responsibility does a company have for the continued function of devices that become part of people?


Impersonality and Non-identity: A Response to Bramble

by Roger Crisp

Consider the following case, from David Boonin:

Wilma. Wilma has decided to have a baby. She goes to her doctor for a checkup and the doctor tells her that… as things now stand, if she conceives, her child will have a disability… that clearly has a substantially negative impact on a person’s quality of life… [but is not] so serious as to render the child’s life worse than no life at all… [But] Wilma can prevent this from happening. If she takes a tiny pill once a day for two months before conceiving, her child will be perfectly healthy. The pill is easy to take, has no side effects, and will be paid for by her health insurance… Wilma decides that having to take a pill once a day for two months before conceiving is a bit too inconvenient and so chooses to throw the pills away and conceive at once. As a result of this choice, her child is born [with the disability].


Cognitive snobbery: The Unacceptable Bias in Favour of the Conscious

There are many corrosive forms of discrimination. But one of the most dangerous is the bias in favour of consciousness, and the consequent denigration of the unconscious.

We see it everywhere. It’s not surprising. For when we’re unreflective – which is most of the time – we tend to suppose that we are our conscious selves, and that the unconscious is a lower, cruder part of us; a seething atavistic sea full of monsters, from which we have mercifully crawled, making our way ultimately to the sunlit uplands of the neocortex, there to gaze gratefully and dismissively back at what we once were. It’s a picture encoded in our self-congratulatory language: ‘higher cognitive function’; ‘She’s not to be blamed: she wasn’t fully conscious of the consequences’; ‘In the Enlightenment we struck off the shackles of superstition and freed our minds to roam.’

How we got into this mess, and the way out

By Charles Foster

This week I went to the launch of the latest book by Iain McGilchrist, currently best known for his account of the cultural effects of brain lateralisation, The Master and His Emissary: The Divided Brain and the Making of the Western World. The new book, The Matter with Things: Our Brains, Our Delusions, and the Unmaking of the World, is, whatever you think of the argument, an extraordinary phenomenon. It is enormously long – over 600,000 words packed into two substantial volumes. To publish such a thing denotes colossal confidence: to write it denotes great ambition.

It was commissioned by mainstream publishers, who took fright when they saw its size. There is eloquent irony in a book whose main thesis is that reductionism is killing us being rejected on the grounds of its length and depth. It was picked up by Perspectiva press. That was brave. But I’m predicting that Perspectiva’s nerve will be vindicated. It was suggested at the launch that the book might rival or outshine Kant or Hegel. That sounds hysterical. It is a huge claim, but this is a huge book, and the claim might just be right.

Nobody can doubt that we’re in a terrible mess. The planet is on fire; we’re racked with neuroses and governed by charlatans, and we have no idea what sort of creatures we are. We tend to intuit that we are significant animals, but have no language in which to articulate that significance, and the main output of the Academy is to scoff at the intuition.

Philosophical Fiddling While the World Burns

By Charles Foster

An unprecedented editorial has just appeared in many health journals across the world. It relates to climate change.

The authors say that they are ‘united in recognising that only fundamental and equitable changes to societies will reverse our current trajectory.’

Climate change, they agree, is the major threat to public health. Here is an excerpt (there will be nothing surprising here):

‘The risks to health of increases above 1.5°C are now well established. Indeed, no temperature rise is “safe.” In the past 20 years, heat related mortality among people aged over 65 has increased by more than 50%. Higher temperatures have brought increased dehydration and renal function loss, dermatological malignancies, tropical infections, adverse mental health outcomes, pregnancy complications, allergies, and cardiovascular and pulmonary morbidity and mortality. Harms disproportionately affect the most vulnerable, including children, older populations, ethnic minorities, poorer communities, and those with underlying health problems.’