
Biomedical Science

Brain Cells, Slime Mould, and Sentience Semantics

Recent media reports have highlighted a study suggesting that so-called “lab-grown brain cells” can “play the video game ‘Pong’”. Whilst the researchers have described the system as ‘sentient’, others have maintained that we should instead use the term “thinking system” to describe what the researchers created.

Does it matter whether we describe this as a thinking system, or a sentient one?


Three Observations about Justifying AI

Written by: Anantharaman Muralidharan, G Owen Schaefer, Julian Savulescu
Cross-posted with the Journal of Medical Ethics blog

Consider the following kind of medical AI. It consists of two parts. The first part is a core deep machine learning algorithm. These black-box algorithms may be more accurate than human judgment or interpretable algorithms, but they are notoriously opaque about the basis on which a given decision was made. The second part is an algorithm that generates a post-hoc medical justification for the core algorithm’s decision. Algorithms like this are already available for visual classification. When the primary algorithm identifies a given bird as a Western Grebe, the secondary algorithm provides a justification for this decision: “because the bird has a long white neck, pointy yellow beak and red eyes”. The justification goes beyond a mere description of the provided image or a definition of the bird in question: it links the information provided in the image to the features that distinguish the bird. The justification is also sufficiently fine-grained to account for why the bird in the picture is not a similar bird such as the Laysan Albatross. It is not hard to imagine that such an algorithm will soon be available for medical decisions, if it is not already. Let us call this type of AI “justifying AI”, to distinguish it from algorithms which try, to some degree or other, to wear their inner workings on their sleeves.
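To make the two-part structure concrete, here is a minimal sketch of such a pipeline in Python. It is purely illustrative: the function names and the canned outputs are hypothetical stand-ins, not the actual systems discussed above.

```python
# Illustrative sketch of a two-part "justifying AI" (hypothetical stand-ins).

def primary_blackbox_classifier(image_path: str) -> str:
    """Opaque deep-learning model: returns a label but exposes no reasons."""
    # Stand-in for a trained network whose internal basis for the decision
    # is not available for inspection.
    return "Western Grebe"

def secondary_justifier(image_path: str, predicted_label: str) -> str:
    """Post-hoc justification generator for the primary model's output."""
    # Stand-in for a model trained to link features visible in the image
    # to the predicted class, and to contrast it with similar classes.
    return ("because the bird has a long white neck, "
            "pointy yellow beak and red eyes")

def justifying_ai(image_path: str) -> tuple[str, str]:
    label = primary_blackbox_classifier(image_path)
    justification = secondary_justifier(image_path, label)
    # A human expert reviews the justification; an implausible justification
    # is treated as evidence that the primary model may have misfired.
    return label, justification

if __name__ == "__main__":
    print(justifying_ai("example_bird.jpg"))
```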

It might turn out that the medical justification given by the justifying AI sounds like pure nonsense. Rich Caruana et al. present a case in which asthmatics were deemed to be at lower risk of dying from pneumonia. As a result, the algorithm prescribed less aggressive treatments for asthmatics who contracted pneumonia. The key mistake the primary algorithm made was failing to account for the fact that asthmatics who contracted pneumonia had better outcomes only because they tended to receive more aggressive treatment in the first place. Even though the algorithm was more accurate on average, it was systematically mistaken about one subgroup. When incidents like these occur, one option is to disregard the primary AI’s recommendation. The rationale is that, by intervening in cases where the black-box gives an implausible recommendation or prediction, we could hope to do better than by relying on the black-box alone. The aim of having justifying AI is to make it easier to identify when the primary AI is misfiring. After all, we can expect trained physicians to recognise a good medical justification when they see one, and likewise to recognise bad ones. The thought here is that a bad justification from the secondary algorithm is good evidence that the primary AI has misfired.

The worry here is that our existing medical knowledge is notoriously incomplete in places. It is to be expected that there will be cases where the optimal decision vis-à-vis patient welfare does not have a plausible medical justification, at least based on our current medical knowledge. For instance, lithium is used as a mood stabiliser, but the reason why it works is poorly understood. This means that ignoring the black-box whenever a plausible justification in terms of our current medical knowledge is unavailable will tend to lead to less optimal decisions. Below are three observations that we might make about this type of justifying AI.


Cognitive snobbery: The Unacceptable Bias in Favour of the Conscious

There are many corrosive forms of discrimination. But one of the most dangerous is the bias in favour of consciousness, and the consequent denigration of the unconscious.

We see it everywhere. It’s not surprising. For when we’re unreflective – which is most of the time – we tend to suppose that we are our conscious selves, and that the unconscious is a lower, cruder part of us; a seething atavistic sea full of monsters, from which we have mercifully crawled, making our way ultimately to the sunlit uplands of the neocortex, there to gaze gratefully and dismissively back at what we once were. It’s a picture encoded in our self-congratulatory language: ‘Higher cognitive function’; ‘She’s not to be blamed: she wasn’t fully conscious of the consequences’; ‘In the Enlightenment we struck off the shackles of superstition and freed our minds to roam.’

Philosophical Fiddling While the World Burns

By Charles Foster

An unprecedented editorial has just appeared in many health journals across the world. It relates to climate change.

The authors say that they are ‘united in recognising that only fundamental and equitable changes to societies will reverse our current trajectory.’

Climate change, they agree, is the major threat to public health. Here is an excerpt (there is nothing surprising in it):

‘The risks to health of increases above 1.5°C are now well established. Indeed, no temperature rise is “safe.” In the past 20 years, heat related mortality among people aged over 65 has increased by more than 50%. Higher temperatures have brought increased dehydration and renal function loss, dermatological malignancies, tropical infections, adverse mental health outcomes, pregnancy complications, allergies, and cardiovascular and pulmonary morbidity and mortality. Harms disproportionately affect the most vulnerable, including children, older populations, ethnic minorities, poorer communities, and those with underlying health problems.’

Is a Publication Boycott of Chinese Science a Justifiable Response to Human Rights Violations Perpetrated by Chinese Doctors and Scientists?

By Doug McConnell

Recently, the editor-in-chief of the Annals of Human Genetics, Prof David Curtis, resigned from his position, in part because the journal’s publisher, Wiley, refused to publish a letter he co-authored with Thomas Schulze, Yves Moreau, and Thomas Wenzel. In that letter, they argue in favour of a boycott on Chinese medical and scientific publications as a response to the serious human rights violations happening in China. Several other leading journals – the Lancet, the BMJ, and JAMA – have also refused to publish the letter, claiming that a boycott against China would be unfair and counterproductive.

This raises two separate ethical issues: 1. Should journals refuse to publish a letter arguing in favour of a boycott on Chinese medical and scientific publications? 2. Should journals actually establish a boycott on Chinese medical and scientific publications?

Cross-Post: The Moral Status of Human-Monkey Chimeras

Written by Julian Savulescu and Julian Koplin 

This article was first published on Pursuit. Read the original article.

The 1968 classic Planet of the Apes tells the story of the Earth after a nuclear war destroys human civilisation. When three astronauts return to our planet after a long space voyage, they discover that humans have lost the power of verbal communication and live much like apes currently do.

Meanwhile, non-human primates have evolved speech and other human-like abilities, and are now running the Earth with little regard for human life.

The astronaut George Taylor, played by Charlton Heston, is rendered temporarily mute when he is shot in the throat and captured. In one scene he is brought before the Apes, as he appears more intelligent than other humans.

He regains the power of speech, and his first words are: “Take your stinking paws off me, you damned dirty ape.”

Planet of the Apes may be fiction, but this month the world’s first human-monkey lifeforms were created by Juan Carlos Belmonte at the Salk Institute for Biological Studies in the US, using private funding. Professor Belmonte and his group injected stem cells from the skin of a human foetus into a monkey embryo.

This part-human lifeform is called a chimera.

If implanted into a monkey uterus, the chimera could theoretically develop into a live-born animal that has cells from both a monkey and a human.

While it has been possible to make chimeras for more than 20 years using a different technique that involves fusing the embryos of two animals together, this technique has not been used in humans. It has been used to create novel animals like the geep – a fusion of a sheep and goat embryo.

Professor Belmonte used a different technique – called “blastocyst complementation” – which is more refined. It enables greater control over the number of human cells in the chimera.

But why is this research being done?


Cross-Post: Self-experimentation with vaccines

By Jonathan Pugh, Dominic Wilkinson and Julian Savulescu.

This is a crosspost from the Journal of Medical Ethics Blog.

This is an output of the UKRI Pandemic Ethics Accelerator project.


A group of citizen scientists has launched a non-profit, non-commercial organisation named ‘RaDVaC’, which aims to rapidly develop, produce, and self-administer an intranasally delivered COVID-19 vaccine. As an open source project, a white paper detailing RaDVaC’s vaccine rationale, design, materials, protocols, and testing is freely available online. This information can be used by others to manufacture and self-administer their own vaccines, using commercially available materials and equipment.

Self-experimentation in science is not new; indeed, the initial development of some vaccines depended on self-experimentation, and historically it has led to valuable discoveries. Barry Marshall famously shared the Nobel Prize in 2005 for his work on the bacterium Helicobacter pylori and its role in gastritis; in a 1984 self-experiment, Marshall drank a prepared mixture containing the bacteria, causing him to develop acute gastritis. This research, which shocked his colleagues at the time, eventually led to a fundamental change in the understanding of gastric ulcers, which are now routinely treated with antibiotics. Today, self-experimentation is having something of a renaissance in the so-called bio-hacking community. But is self-experimentation to develop and test vaccinations ethical in the present pandemic? In this post we outline two arguments that might be invoked to defend such self-experimentation, and suggest that they are each subject to significant limitations.

The Duty To Ignore Covid-19

By Charles Foster

This is a plea for a self-denying ordinance on the part of philosophers. Ignore Covid-19. It was important that you said what you have said about it, but the job is done. There is nothing more to say. And there are great dangers in continuing to comment. It gives the impression that there is only one issue in the world. But there are many others, and they need your attention. Just as cancer patients were left untreated because Covid closed hospitals, so important philosophical problems are left unaddressed, or viewed only through the distorting lens of Covid.

We’re All Vitalists Now

By Charles Foster

It has been a terrible few months for moral philosophers – and for utilitarians in particular. Their relevance to public discourse has never been greater, but never have their analyses been so humiliatingly sidelined by policy makers across the world. The world’s governments are all, it seems, ruled by a rather crude vitalism. Livelihoods and freedoms give way easily to a statistically small risk of individual death.

That might or might not be the morally right result. I’m not considering here the appropriateness of any government measures, and simply note that whatever one says about the UK Government’s response, it has been supremely successful in generating fear. Presumably that was its intention. The fear in the eyes above the masks is mainly an atavistic terror of personal extinction – a fear unmitigated by rational risk assessment. There is also a genuine fear for others (and the crisis has shown humans at their most splendidly altruistic and communitarian as well). But we really don’t have much ballast.

The fear is likely to endure long after the virus itself has receded. Even if we eventually pluck up the courage to hug our friends or go to the theatre, the fear has shown us what we’re really like, and the unflattering picture will be hard to forget.

I wonder what this new view of ourselves will mean for some of the big debates in ethics and law? The obvious examples are euthanasia and assisted suicide.

Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

Increasing use of brain data, whether from research contexts, medical device use, or the growing consumer brain-tech sector, raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding to provide better protection of such data.

In medicine, the use of brain-reading devices is increasing, e.g. brain-computer interfaces that afford communication or control of neural or motor prostheses. But there is also a range of non-medical devices in development, for applications from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance to the field and have invested significant sums. Kernel, a three-year-old, multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, it has been joined by Facebook, which wants to develop a means of controlling devices directly with data derived from the brain (to be developed by its not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it is likely to be based on recording and stimulating the brain.
