The Neuroscience of a Life Well-lived: New St Cross Ethics Seminar

Professor Morten Kringelbach (Aarhus and Oxford) recently gave a fascinating New St Cross Ethics Seminar on ‘The Neuroscience of a Life Well-Lived’ (YouTube; mp3). Continue reading

Seminar Recordings: The Neuroscience of a Life Well-Lived

Audio and video recordings of Professor Morten L. Kringelbach's (Aarhus University, Denmark; University of Oxford) online St Cross Seminar (21 January 2021) are now available.

Continue reading

Seminar Recordings: Towards a Plasticity of the Mind – New-ish Ethical Conundrums in Dementia Care, Treatment, and Research

Audio and video recordings of David Lyreskog’s online St Cross Seminar (25 February 2021) are now available.

Continue reading

Love Drugs: The Chemical Future of Relationships

Announcement: Brian Earp and Julian Savulescu's new book ‘Love Drugs: The Chemical Future of Relationships’, published by Stanford University Press, is now available.

Is there a pill for love? What about an “anti-love drug”, to help us get over an ex? This book argues that certain psychoactive substances, including MDMA—the active ingredient in Ecstasy—may help ordinary couples work through relationship difficulties and strengthen their connection. Others may help sever an emotional connection during a breakup. These substances already exist, and they have transformative implications for how we think about love. This book builds a case for conducting research into “love drugs” and “anti-love drugs” and explores their ethical implications for individuals and society. Scandalously, Western medicine tends to ignore the interpersonal effects of drug-based interventions. Why are we still in the dark about the effects of these drugs on romantic partnerships? And how can we overhaul scientific research norms to take relationships more fully into account?

Continue reading

Video Interview: Jesper Ryberg on Neurointerventions, Crime and Punishment

Should neurotechnologies that affect emotional regulation, empathy and moral judgment be used to prevent offenders from reoffending? Is it morally acceptable to offer more lenient sentences to offenders in return for participation in neuroscientific treatment programs? Or would this amount to coercion? Is it possible to administer neurointerventions as a type of punishment? Is it permissible for physicians to administer neurointerventions to offenders? Is there a risk that the dark history of compulsory brain interventions in offenders will repeat itself? In this interview with Dr Katrien Devolder (Oxford), Professor Jesper Ryberg (Roskilde) argues that there are no good in-principle objections to using neurointerventions to prevent crime, BUT (!) that given the way criminal justice systems currently function, we should not, at present, use these interventions…

Making Ourselves Better

Written by Stephen Rainey

Human beings are sometimes seen as uniquely capable of enacting life plans and controlling our environment. Take technology, for instance; with it we make the world around us yield to our desires in various ways. Communication technologies and global transport, for example, have the effect of practically shrinking a vast world, making hitherto impossible coordination possible among a global population. This contributes to a view of human-as-maker, or ‘homo faber’. But taking such a view can risk minimising human interests that ought not to be ignored.

Homo faber is a future-oriented, adaptable, rational animal, whose efforts are aligned with her interests when she creates technology that enables a stable counteraction of natural circumstance. Whereas animals are typically seen to have well-adapted responses to their environment, honed through generations of adaptation, human beings appear to have instead a general and adaptable skill that can emancipate them from material, external circumstances. We are bad at running away from danger, for instance, but good at building barriers to obviate the need to run. The protections this general, adaptable skill offers are inherently future-facing: humans seem to seek not to react to, but to control, the environment.

Continue reading

Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

Increasing use of brain data, whether from research contexts, medical device use, or the growing consumer brain-tech sector, raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we wish to look at the regulation of brain data under the GDPR and suggest a modified understanding to provide better protection of such data.

In medicine, the use of brain-reading devices is increasing, e.g. Brain-Computer Interfaces that afford communication or control of neural and motor prostheses. But there is also a range of non-medical devices in development, for applications ranging from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps or issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance to the field, and have invested significant sums. Kernel, a three-year-old multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, they are joined by Facebook, who want to develop a means of controlling devices directly with data derived from the brain (to be developed by their not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it’s likely to be based on recording and stimulating the brain.

Continue reading

Better Living Through Neurotechnology

Written by Stephen Rainey

If ‘neurotechnology’ isn’t a glamour area for researchers yet, it’s not far off. Technologies centred upon reading the brain are rapidly being developed. Among the claims made of such neurotechnologies are that some can provide special access to normally hidden representations of consciousness. Through recording, processing, and operationalising brain signals, we are promised greater understanding of our own brain processes. Since every conscious process is thought to be enacted, or subserved, or realised by a neural process, we thereby gain greater understanding of our consciousness.

Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our obscure cognitive processes, we can get the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.

This looks like some kind of technological mind-reading, and perhaps sounds too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties? Continue reading

Pain for Ethicists: What is the Affective Dimension of Pain?

This is my first post in a series highlighting current pain science that is relevant to philosophers writing about well-being and ethics. My work on this topic has been supported by the W. Maurice Young Centre for Applied Ethics, the Oxford Uehiro Centre for Practical Ethics, and the Wellcome Centre for Ethics and Humanities, as well as a generous grant from Effective Altruism Grants.

There have been numerous published cases in the scientific literature of patients who, for various reasons, report feeling pain but not finding the pain unpleasant. As Daniel Dennett noted in his seminal paper “Why You Can’t Make A Computer That Feels Pain,” these reports seem to be at odds with some of our most basic intuitions about pain, in particular the conjunction of our intuitions that ‘‘a pain is something we mind’’ and ‘‘we know when we are having a pain.’’ Dennett was discussing the effects of morphine, but similar dissociations have been reported in patients who undergo cingulotomies to treat terminal cancer pain and in extremely rare cases called “pain asymbolia” involving damage to the insula cortex. Continue reading


Written by Stephen Rainey

Brain-machine interfaces (BMIs), or brain-computer interfaces (BCIs), are technologies controlled directly by the brain. They are increasingly well known in therapeutic contexts. We have probably all seen the remarkable advances in prosthetic limbs that can be controlled directly by the brain. Brain-controlled legs, arms, and hands allow natural-like mobility to be restored where limbs have been lost. Neuroprosthetic devices connected directly to the brain allow communication to be restored in cases where linguistic ability is impaired or missing.

It is often said that such devices are controlled ‘by thoughts’. This isn’t strictly true, as it is the brain that the devices read, not the mind. In a sense, unnatural patterns of neural activity must be realised in order to trigger and control devices. Producing the patterns is a learned behaviour – the brain is put to use by the device owner in order to operate it. This distinction between thought-reading and brain-reading might have important consequences for some conceivable scenarios. To think these through, we’ll indulge in a little bit of ‘science fiction prototyping’.

Continue reading