The Aliens Are Coming

By Charles Foster

It’s said that 2022 is going to be a bumper year for UFO revelations. Secret archives are going to be opened and the skies are going to be probed as never before for signs of extraterrestrial life.

This afternoon we might be presented with irrefutable evidence not just of life beyond the Earth, but of intelligences comparable in power and subtlety to our own. What then? Would it change our view of ourselves and the universe we inhabit? If so, how? Would it change our behaviour? If so, how?

Much would depend, no doubt, on what we knew or supposed about the nature and intentions of the alien intelligences. If they seemed hostile, intent on colonising Planet Earth and enslaving us, our reactions would be fairly predictable. But what if the reports simply disclosed the existence of other intelligences, together with the fact that those intelligences knew about and were interested in us?

Continue reading

Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first is a core deep machine learning algorithm. Such black-box algorithms may be more accurate than human judgment or than interpretable algorithms, but they are notoriously opaque about the basis on which their decisions are made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s output. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information provided in the image to the features that distinguish the bird. It is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird, such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.

Possibly, it might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al present a case in which an algorithm deemed asthmatics less at risk of dying from pneumonia. As a result, it recommended less aggressive treatment for asthmatics who contracted pneumonia. The primary algorithm’s key mistake was failing to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that by intervening in cases where the black box gives an implausible recommendation or prediction, we could hope to do better than by relying on the black box alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad justifications. The thought is that the secondary algorithm’s generating a bad justification is good evidence that the primary AI has misfired.
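The two-part setup and the override strategy can be sketched schematically. Everything below is hypothetical stand-in logic, not a real clinical model: the primary function caricatures the systematic error in the Caruana case, and the plausibility check stands in for a physician’s judgment.

```python
def primary_predict(patient):
    """Stand-in for an opaque risk model (hypothetical rule, not a real model).

    Reproduces the systematic error discussed above: asthmatics look
    low-risk only because they historically received aggressive treatment."""
    return "low risk" if patient.get("asthma") else "high risk"


def justify(patient, prediction):
    """Stand-in post-hoc justifier: links input features to the prediction."""
    if prediction == "low risk" and patient.get("asthma"):
        return "low risk because the patient has asthma"
    return prediction + " based on presenting features"


def review(patient):
    """Physician-in-the-loop check: flag a prediction whose justification
    fails a plausibility test against current medical knowledge."""
    prediction = primary_predict(patient)
    reason = justify(patient, prediction)
    # Asthma lowering pneumonia risk is medically implausible, so flag it.
    implausible = prediction == "low risk" and "asthma" in reason
    return prediction, reason, implausible


prediction, reason, flagged = review({"asthma": True, "pneumonia": True})
# A flagged (implausible) justification signals that the primary model
# may be misfiring and its recommendation should be overridden.
```

The design choice this illustrates is that the justification, not the black-box prediction itself, is what the human reviewer audits.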

The worry here is that our existing medical knowledge is notoriously incomplete in places. We should expect cases where the optimal decision vis-à-vis patient welfare lacks a plausible medical justification, at least on our current medical knowledge. For instance, lithium is used as a mood stabilizer, but why it works is poorly understood. This means that ignoring the black box whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to suboptimal decisions. Below are three observations that we might make about this type of justifying AI.

Continue reading

Your eyes will be discontinued: what are the long-term responsibilities for implants?

by Anders Sandberg

What do you do when your bionic eyes suddenly become unsupported and you go blind again? Eliza Strickland and Mark Harris have an excellent article in IEEE Spectrum about the problems caused when the bionics company Second Sight got into economic trouble. Patients with its Argus II eyes found that upgrades could not be made and broken devices could not be replaced. What kind of responsibility does a company have for the continued function of devices that become part of people?

Continue reading

Impersonality and Non-identity: A Response to Bramble

by Roger Crisp

Consider the following case, from David Boonin:

Wilma. Wilma has decided to have a baby. She goes to her doctor for a checkup and the doctor tells her that…as things now stand, if she conceives, her child will have a disability. . . that clearly has a substantially negative impact on a person’s quality of life. . . [but is not] so serious as to render the child’s life worse than no life at all. . . .[But] Wilma can prevent this from happening. If she takes a tiny pill once a day for two months before conceiving, her child will be perfectly healthy. The pill is easy to take, has no side effects, and will be paid for by her health insurance. . . .Wilma decides that having to take a pill once a day for two months before conceiving is a bit too inconvenient and so chooses to throw the pills away and conceive at once. As a result of this choice, her child is born [with the disability].

Continue reading

Cognitive snobbery: The Unacceptable Bias in Favour of the Conscious

There are many corrosive forms of discrimination. But one of the most dangerous is the bias in favour of consciousness, and the consequent denigration of the unconscious.

We see it everywhere. It’s not surprising. For when we’re unreflective – which is most of the time – we tend to suppose that we are our conscious selves, and that the unconscious is a lower, cruder part of us; a seething atavistic sea full of monsters, from which we have mercifully crawled, making our way ultimately to the sunlit uplands of the neocortex, there to gaze gratefully and dismissively back at what we once were. It’s a picture encoded in our self-congratulatory language: ‘Higher cognitive function’; ‘She’s not to be blamed: she wasn’t fully conscious of the consequences’; ‘In the Enlightenment we struck off the shackles of superstition and freed our minds to roam.’

Continue reading

How we got into this mess, and the way out

By Charles Foster

This week I went to the launch of the latest book by Iain McGilchrist, currently best known for his account of the cultural effects of brain lateralisation, The Master and His Emissary: The Divided Brain and the Making of the Western World. The new book, The Matter with Things: Our brains, our delusions, and the unmaking of the world, is, whatever you think of the argument, an extraordinary phenomenon. It is enormously long – over 600,000 words packed into two substantial volumes. To publish such a thing denotes colossal confidence: to write it denotes great ambition.

It was commissioned by mainstream publishers who took fright when they saw its size. There is eloquent irony in the rejection, on the grounds of its length and depth, of a book whose main thesis is that reductionism is killing us. It was picked up by Perspectiva press. That was brave. But I’m predicting that Perspectiva’s nerve will be vindicated. It was suggested at the launch that the book might rival or outshine Kant or Hegel. That sounds hysterical. It is a huge claim, but this is a huge book, and the claim might just be right.

Nobody can doubt that we’re in a terrible mess. The planet is on fire; we’re racked with neuroses and governed by charlatans, and we have no idea what sort of creatures we are. We tend to intuit that we are significant animals, but have no language in which to articulate that significance, and the main output of the Academy is to scoff at the intuition.

Continue reading

Philosophical Fiddling While the World Burns

By Charles Foster

An unprecedented editorial has just appeared in many health journals across the world. It relates to climate change.

The authors say that they are ‘united in recognising that only fundamental and equitable changes to societies will reverse our current trajectory.’

Climate change, they agree, is the major threat to public health. Here is an excerpt; there will be nothing surprising in it:

‘The risks to health of increases above 1.5°C are now well established. Indeed, no temperature rise is “safe.” In the past 20 years, heat related mortality among people aged over 65 has increased by more than 50%. Higher temperatures have brought increased dehydration and renal function loss, dermatological malignancies, tropical infections, adverse mental health outcomes, pregnancy complications, allergies, and cardiovascular and pulmonary morbidity and mortality. Harms disproportionately affect the most vulnerable, including children, older populations, ethnic minorities, poorer communities, and those with underlying health problems.’

Continue reading

What If Stones Have Souls?

By Charles Foster

Over the 40,000 years or so of the history of behaviourally modern humans, the overwhelming majority of generations have been, so far as we can see, animist. They have, that is, believed that all or most things, human and otherwise, have some sort of soul.

We can argue about the meaning of ‘soul’, and about the relationship of ‘soul’ to consciousness, but most would agree that whatever ‘soul’ and ‘consciousness’ mean, and however they are related, there is some intimate and necessary connection between them – even if they are not identical.

Consciousness is plainly not a characteristic unique to humans. Indeed, the better we get at looking for consciousness, the more we find it. The universe seems to be a garden in which consciousness springs up very readily.

Continue reading

COVID: Why We Should Stop Testing in Schools

Dominic Wilkinson, University of Oxford; Jonathan Pugh, University of Oxford, and Julian Savulescu, University of Oxford

Education Secretary Gavin Williamson has announced the end of school “bubbles” in England from July 19, following the news that 375,000 children did not attend school for COVID-related reasons in June.

Under the current system, if a schoolchild becomes infected with the coronavirus, pupils who have been in close contact with them have to self-isolate for ten days. In some cases, whole year groups may have to self-isolate.

Such mass self-isolation is hugely disruptive. Yet despite the clamour to switch to other protective measures, such as rapid testing of pupils who have been in close contact with an infected pupil, the public service union Unison has supported self-isolation as “one of the proven ways to keep cases under control”.

Continue reading

Compromising On the Right Not to Know?

Written by Ben Davies

Personal autonomy is the guiding light of contemporary clinical and research practice, at least in the UK. Whether someone is a potential participant in a research trial, or a patient being treated by a medical professional, the gold standard, violated only in extremis, is that they should decide for themselves whether to go ahead with a particular intervention, on the basis of as much relevant information as possible.

Roger Crisp recently discussed Professor Gopal Sreenivasan’s New Cross seminar, which argued against a requirement for informational disclosure in consenting to research participation. Sreenivasan’s argument was, at least in its first part, based on a straightforward appeal to autonomy: if autonomy is what matters most, I should have the right to autonomously refuse information.

I have previously outlined a related argument in a clinical context, in which I sought to undermine arguments against a putative ‘Right Not to Know’ that are themselves based in autonomy. In brief, my argument is, firstly, that a decision can itself be autonomous without promoting the agent’s future or overall autonomy and, second, that even if there is an autonomy-based moral duty to hear relevant information (as scholars such as Rosamond Rhodes argue), we can still have a right that people not force us to hear such information.

In a recent paper, Julian Savulescu and I go further into the details of the Right Not to Know, setting out the scope for a degree of compromise between the two central camps.

Continue reading