Pain for Ethicists #2: Is the Cerebral Cortex Required for Pain? (Video)

Here’s my presentation from the UQAM 2018 Summer School in Animal Cognition organised by Stevan Harnad:

I also highly recommend Jonathan Birch’s talk on Animal Sentience and the Precautionary Principle and Lars Chittka’s amazing presentation about the minds of bees.

Thanks again to EA Grants for supporting this research as well as my home institutions Uehiro & WEH. And thanks to Mélissa Desrochers for the video.

You can find the first Pain for Ethicists post here.

Adam Shriver is a Research Fellow at the Oxford Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities.

Follow him on Twitter.

1 Comment on this post

  1. I have viewed Adam Shriver’s presentation, but I must confess I am still somewhat puzzled as to why he or we should believe that human lesion studies should tell us anything about nonhuman pain, particularly when the concept of “pain” seems still to be understood in terms of pain as we human beings experience it, and still seems to carry a strong linkage with human-centered notions of “sentience,” “awareness,” and the like. This approach, I believe, would correspond to what Ben Mylius describes as descriptive anthropocentrism by extrapolation, “present in paradigms that purport to study phenomena in the world in general on the basis of a version of a concept developed via the study of human beings in particular—like Saussure’s semiotics or Aristotle’s studies of rationality” (Environmental Philosophy 2018, doi: 10.5840/envirophil20184564). However, I do applaud what seems to be a sincere desire to escape from the anthropocentric paradigm (which can indeed be quite constraining) and grapple with the moral issue of the extent to which nonhuman organisms experience something that is, for them, the equivalent of what we call pain.

    Since nonhumans are unable to provide us with a verbal report of their subjective experiences, Shriver expresses an interest in discovering behavioral markers for pain, but it seems he already ticked off the key marker when, early on in his talk, he linked the measure of unpleasantness to “how motivated [one] is to escape.” Do we not already witness attempted escape behavior aplenty from nonhuman animals in situations in which we would already intuitively expect them to be experiencing pain? Of course, going “down” the phylogenetic tree, withdrawal and escape behavior is likely to be written off as merely an indication of “nociception”–but why make this move? I see Wikipedia holds that “nociception triggers a variety of physiological and behavioral responses and usually results in a subjective experience of pain _in sentient beings_”–and of course there’s that word “sentient” again, urging us to draw some line between beings that can have “subjective” experiences and those that cannot. How could we humans possibly determine something like this, should such a threshold exist at all? But more importantly, why should we assume that there is some threshold cut-off for awareness, when awareness of both one’s external environment and one’s internal state would seem to be crucial to an organism’s survival, just as perception of pain has been important for human survival, at least up until the time those with pain asymbolia became socially supportable?

    What would seem far more plausible, to me, is to take a life-centered view and realize that all living organisms, to have survived their long evolutionary journey thus far, must have ways of sensing both inside and outside, and that, since they do “struggle” to survive (their teleology is, of course, for each one to try to stay alive for as long as possible, and pass life on, if possible), things that potentially pose threats to the ongoing life of the organism–things which will be generally disruptive of its normal internal state–will be met with a motivation to escape from those things, whatever they might be. Just what that internal state may be “like” may have to do, among other things, with how many synapses there are in the neuronal circuitry (if they have neurons at all–much intercellular and interorganismic communication goes on without neurons being involved) between the sensory input and the motor output (if they are animals), but the diversity of configurations is immense, and to assume that there is “something it is like” only for configurations bearing a certain resemblance to ours seems astonishingly narcissistic and blinds us to the richness of the full spectrum of nonhuman life. Why not go with an evolutionarily enlightened understanding of what it means to be a form of life, each individual defending its life in its own way, and recognize each life as morally significant, taking responsibility for how we will interact with that life, instead of pretending that some lives simply aren’t significant at all? Causing pain is one way of interacting, of course, as is ending that life altogether. But there are many more human interventions that will induce the motivation to escape–limitation of movement, deprivation of the company of conspecifics in the case of a social animal, removal from ecological surroundings to which one is adapted–and all carry moral responsibility, should we choose to see that our human role in such interventions carries responsibility instead of denying that it does. Surely searching for some arbitrary cut-off regarding either kinds of organisms or kinds of interventions is a way of making us feel more comfortable about current norms of human behavior, not a good-faith attempt to assess the rightness of our collective actions. For a better understanding of various kinds of cognition, in humans and other animals, I’d recommend looking into what Frans de Waal calls _evolutionary cognition_ (2016).

    Human lesion studies can, however, tell us quite a bit about our own perception of pain, as well as about many other kinds of human cognition, as can, even less invasively, fMRI studies that light up the neural networks connecting certain brain regions. I do find it interesting that pain perception in humans appears to be dissociable, as pointed out, into somatosensory and affective pathways, but I see no reason to project this discovery onto nonhuman organisms, which might have very different sorts of pain circuitry–just as, where we once thought that “bird brains” couldn’t be very smart because they lacked the mammalian-type layered cerebral cortex, we now know that birds have similar types of cortical neurons arranged in a different type of cortical organization, with the neurons concentrated into nuclei (see http://www.pnas.org/content/109/42/16974)–and certainly no reason to try to deny the experience of “pain” to them if they happen to lack one or another of the structures that go into our human pain circuitry. There are also “moral neural networks” being explored via brain imaging studies in humans, which interestingly enough also seem to be dissociable, with the ventromedial prefrontal cortex (vmPFC) seemingly more active when attending to “easy” moral dilemmas with culturally accepted choices, and the temporoparietal junction (TPJ) playing a larger role in more difficult decisions that require the recruitment of empathy and “theory of mind” pathways (see https://www.ncbi.nlm.nih.gov/pubmed/23322890). What this might mean for our own moral theorizing remains to be explored by philosophers.

Comments are closed.