
Brain Cells, Slime Mould, and Sentience Semantics

Recent media reports have highlighted a study suggesting that so-called “lab-grown brain cells” can “play the video game ‘Pong’”. Whilst the researchers have described the system as ‘sentient’, others have maintained that we should instead use the term ‘thinking system’ to describe what the researchers created.

Does it matter whether we describe this as a thinking system, or a sentient one?


A Brief Overview of the Study

According to the study, researchers developed in vitro neural networks of human or rodent origin and integrated them with a computing system via a high-density array of electrodes. Through electrical stimulation and recording, the team was able to embed this system in the ‘game-world’ of something like the classic arcade game Pong (a very basic representation of table tennis). The study results suggest that the system showed evidence of apparent learning in the game.

This is a remarkable finding, but is this system thereby sentient? According to media reports, the lead author, Brett Kagan, suggests that

“We could find no better term to describe the device… It is able to take in information from an external source, process it and then respond to it in real time.”

Of course, this description could also be readily applied to many artificial intelligence systems – witness recent debates about whether it is appropriate to describe Google LaMDA as sentient. However, it is also illuminating to consider that it is an apt description of some very basic non-synthetic life-forms.


Phenomenal Consciousness and Moral Status

Consider slime mould. Slime mould is a single-celled amoeba that looks like a mass of yellow sponge; if you found it in your garden, you might be inclined to fetch a spade and get rid of it.

Yet scientists have discovered that slime moulds are capable of remarkable feats. They can solve complex spatial problems despite lacking a brain. When researchers tasked slime mould with finding ‘food’ in spaces that replicated large real-world urban areas at a miniature scale, they found that it did not search in a random manner. Instead, it essentially recreated the transport networks that actually exist in those real-world places.

Slime moulds challenge the assumption that intelligence requires a brain, but should this research change our thinking about ethics? Do these findings suggest that we have a moral reason to stop slime mould from being dug up by fastidious gardeners?

Answering this question requires us to have a view about the sorts of capacities that warrant moral protection, or, in other words, a view about what grounds ‘moral status’.

One reason that a gardener may be unmoved by the plight of the slime mould is that, for all of its impressive navigational abilities, it is highly doubtful that it has phenomenal consciousness. This is the ability to ‘feel’ things, to subjectively experience what it is like to be in a particular mental state. The same is true of the Pong-playing neural network; indeed, researchers on the study are actively working with bioethicists to ensure they do not accidentally create a conscious brain.

Phenomenal consciousness is a crucial ability, because only beings with phenomenal consciousness are able to experience suffering. This is important because all beings capable of experiencing suffering have a very strong interest in not doing so, an interest that plausibly warrants strong moral consideration. This is an idea Jeremy Bentham captured when he wrote:

The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?

Phenomenal consciousness is therefore plausibly sufficient for some degree of moral status. Some philosophers would advance the stronger claim that it may even be necessary; perhaps, if something lacks phenomenal consciousness, it cannot have the sort of interests that warrant moral protection.


What’s in a Name?

Phenomenal consciousness and the ability to process information whilst interacting with an environment are quite different abilities. One can occur in the absence of the other.

Importantly, there is a considerable philosophical tradition suggesting a close relationship between moral status and phenomenal consciousness. It is far less clear that the abilities evidenced by slime moulds and Pong-playing neural networks can alone ground moral status.

The problem is that ‘sentience’ is sometimes employed as an umbrella term to cover all of these different kinds of ability. This is unfortunate because it serves to obscure whether and where significant moral issues arise. When sentience is used to connote phenomenal consciousness of the sort that is sufficient for the experience of suffering, then establishing that something is sentient raises important moral questions about the strength of our reasons to prevent that being’s suffering. When sentience is used as a short-hand for other abilities, these particular moral questions do not arise.

Ultimately, it is the concepts rather than the labels that matter; however, it is hard to deny that, for many, the label ‘sentience’ connotes a substantial degree of phenomenal consciousness. Consider, for example, the appellation of the Animal Welfare (Sentience) Act 2022 passed in the UK this year. This Act affords legal protection to vertebrates, cephalopod molluscs, and decapod crustaceans with a view to preventing suffering; but it does not extend to other invertebrates (capable though some are of certain forms of information processing), or indeed to slime moulds.

The upshot here is that the embedded neural network played Pong, but we have no reason to believe that it enjoyed doing so. That says more about the limits of the system than it does about the merits of the computer game. But it is something that matters morally – and there are good reasons to make sure that the language we use in this area reflects this.

