
neuroscience

Regulating The Untapped Trove Of Brain Data

Written by Stephen Rainey and Christoph Bublitz

The increasing use of brain data, whether from research contexts, medical device use, or the growing consumer brain-tech sector, raises privacy concerns. Some already call for international regulation, especially as consumer neurotech is about to enter the market more widely. In this post, we look at the regulation of brain data under the GDPR and suggest a modified understanding that would provide better protection for such data.

In medicine, the use of brain-reading devices is increasing, e.g. brain-computer interfaces that afford communication or control of neural and motor prostheses. But there is also a range of non-medical devices in development, for applications ranging from gaming to the workplace.

Currently marketed devices, e.g. by Emotiv and Neurosky, are not yet widespread, which might be owing to a lack of apps, issues with ease of use, or perhaps just a lack of perceived need. However, various tech companies have announced their entrance to the field and have invested significant sums. Kernel, a three-year-old, multi-million-dollar company based in Los Angeles, wants to ‘hack the human brain’. More recently, they have been joined by Facebook, who want to develop a means of controlling devices directly with data derived from the brain (to be developed by their not-at-all-sinister-sounding ‘Building 8’ group). Meanwhile, Elon Musk’s ‘Neuralink’ is a venture which aims to ‘merge the brain with AI’ by means of a ‘wizard hat for the brain’. Whatever that means, it is likely to be based on recording and stimulating the brain.

Read More » Regulating The Untapped Trove Of Brain Data

Better Living Through Neurotechnology

Written by Stephen Rainey

If ‘neurotechnology’ isn’t yet a glamour area for researchers, it’s not far off. Technologies centred upon reading the brain are being developed rapidly. Among the claims made for such neurotechnologies is that some can provide special access to normally hidden representations of consciousness. Through recording, processing, and making operational brain signals, we are promised greater understanding of our own brain processes. Since every conscious process is thought to be enacted, subserved, or realised by a neural process, that promises greater understanding of our consciousness too.

Besides understanding, these technologies provide opportunities for cognitive optimisation and enhancement too. By getting a handle on our otherwise obscure cognitive processes, we get the chance to manipulate them. By representing our own consciousness to ourselves, through a neurofeedback device for instance, we can try to monitor and alter the processes we witness, changing our minds in a very literal sense.
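To make the neurofeedback loop concrete, here is a minimal sketch in Python of the measure-and-report cycle such a device runs. Everything here is an assumption for illustration: the alpha-band target, the sampling rate, and the feedback messages are invented, and real systems rely on calibrated hardware and validated signal processing.

```python
import numpy as np

SAMPLE_RATE = 256  # Hz; a plausible consumer-EEG sampling rate (assumed)

def alpha_power(window: np.ndarray) -> float:
    """Estimate power in the 8-12 Hz alpha band of one EEG window via FFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE)
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(spectrum[band].mean())

def neurofeedback_step(window: np.ndarray, target: float) -> str:
    """One loop iteration: measure the signal and show it back to the user.

    The user sees the feedback and tries to nudge their own brain activity
    toward the target: monitoring and altering the processes they witness.
    """
    power = alpha_power(window)
    return "on target: keep it up" if power >= target else "below target: try to relax"
```

The point of the sketch is only the loop structure: signal in, summary statistic out, feedback displayed, repeat.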

This looks like some kind of technological mind-reading, and perhaps too good to be true. Is neurotechnology overclaiming its prospects? Maybe more pressingly, is it understating its difficulties?

Read More » Better Living Through Neurotechnology

Pain for Ethicists: What is the Affective Dimension of Pain?

This is my first post in a series highlighting current pain science that is relevant to philosophers writing about well-being and ethics. My work on this topic has been supported by the W. Maurice Young Centre for Applied Ethics, the Oxford Uehiro Centre for Practical Ethics, and the Wellcome Centre for Ethics and Humanities, as well as by a generous grant from Effective Altruism Grants.

There have been numerous published cases in the scientific literature of patients who, for various reasons, report feeling pain but not finding the pain unpleasant. As Daniel Dennett noted in his seminal paper “Why You Can’t Make A Computer That Feels Pain,” these reports seem to be at odds with some of our most basic intuitions about pain, in particular the conjunction of our intuitions that “a pain is something we mind” and “we know when we are having a pain.” Dennett was discussing the effects of morphine, but similar dissociations have been reported in patients who undergo cingulotomies to treat terminal cancer pain and in extremely rare cases called “pain asymbolia” involving damage to the insula cortex.

Read More » Pain for Ethicists: What is the Affective Dimension of Pain?

Neuroblame?

Written by Stephen Rainey

Brain-machine interfaces (BMIs), or brain-computer interfaces (BCIs), are technologies controlled directly by the brain. They are increasingly well known in therapeutic contexts. We have probably all seen the remarkable advances in prosthetic limbs that can be controlled directly by the brain. Brain-controlled legs, arms, and hands allow natural-like mobility to be restored where limbs have been lost. Neuroprosthetic devices connected directly to the brain allow communication to be restored in cases where linguistic ability is impaired or missing.

It is often said that such devices are controlled ‘by thoughts’. This isn’t strictly true, as it is the brain that the devices read, not the mind. In a sense, unnatural patterns of neural activity must be realised to trigger and control devices. Producing the patterns is a learned behaviour: the brain is put to use by the device owner in order to operate the device, as sketched below. This distinction between thought-reading and brain-reading might have important consequences for some conceivable scenarios. To think these through, we’ll indulge in a little bit of ‘science fiction prototyping’.
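As a rough illustration of that distinction, here is a hypothetical Python sketch of the device side of the loop. The pattern labels, confidence threshold, and command mapping are all invented for the example; the point is that the controller only ever sees whether a learned neural pattern has been produced, never the thought ‘behind’ it.

```python
from typing import Callable, Optional

# Hypothetical mapping from learned neural patterns to device commands.
COMMANDS = {
    "motor_imagery_left": "rotate_wrist_left",
    "motor_imagery_right": "rotate_wrist_right",
}

def detect_pattern(scores: dict, threshold: float = 0.8) -> Optional[str]:
    """Return the best-scoring learned pattern, or None if nothing is confident.

    The scores come from upstream signal processing; the device reads brain
    activity, not intentions, so it fires whenever the pattern appears.
    """
    label = max(scores, key=scores.get)
    return label if scores[label] >= threshold else None

def control_step(scores: dict, actuate: Callable[[str], None]) -> None:
    """One control-loop tick: map a detected pattern to an actuator command."""
    pattern = detect_pattern(scores)
    if pattern in COMMANDS:
        actuate(COMMANDS[pattern])

# Example tick: a confident 'left' pattern triggers the wrist command.
control_step({"motor_imagery_left": 0.91, "motor_imagery_right": 0.34}, print)
```

Nothing in the loop cares what the user was thinking about while producing the pattern, which is exactly why brain-reading is not thought-reading.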

Read More » Neuroblame?

Functional neo-Aristotelianism as a way to preserve moral agency: A response to Dr William Casebeer’s lecture: The Neuroscience of Moral Agency

Written by Dr Anibal Monasterio Astobiza

Audio File of Dr Casebeer’s talk is available here: http://media.philosophy.ox.ac.uk/uehiro/HT17_Casebeer.mp3


Dr William Casebeer has an unusual, but nonetheless very interesting, professional career. He retired from active duty as a US Air Force Lieutenant Colonel and intelligence analyst. He obtained his PhD in Cognitive Science and Philosophy from the University of California, San Diego, under the guidance and inspiration of Patricia and Paul Churchland, served as a Program Manager at the Defense Advanced Research Projects Agency from 2010 to 2014 in the Defense Sciences Office, and helped to establish DARPA’s neuroethics program. Nowadays, Dr Casebeer is a Research Area Manager in Human Systems and Autonomy at Lockheed Martin’s Advanced Technology Laboratories. As I said, not the conventional path for a well-known researcher with very prominent contributions in neuroethics and moral evolution. His book Natural Ethical Facts: Evolution, Connectionism, and Moral Cognition (MIT Press) presented a functional and neo-Aristotelian account of morality, with a clever argument attempting to answer G. E. Moore’s naturalistic fallacy: according to Casebeer, it is possible to reduce what is good, in other words morality, to natural facts.

Read More » Functional neo-Aristotelianism as a way to preserve moral agency: A response to Dr William Casebeer’s lecture: The Neuroscience of Moral Agency

“The medicalization of love” – podcast interview

Just out today is a podcast interview for Smart Drug Smarts between host Jesse Lawler and interviewee Brian D. Earp on “The Medicalization of Love” (title taken from a recent paper with Anders Sandberg and Julian Savulescu, available from the Cambridge Quarterly of Healthcare Ethics, here). Below is the abstract and a link to the interview:

Abstract: What is love? A…

Read More » “The medicalization of love” – podcast interview

Guest Post: Must we throw out the brain with the bathwater? Marc Lewis on addiction


Written by Anke Snoek

Macquarie University

When neuroscience started to mingle in the debate on addiction and self-control, people aimed to use its insights to cause a paradigm shift in how we judge people struggling with addictions. People with addictions are not morally despicable or weak-willed; they end up addicted because drugs influence the brain in a certain way. Anyone with a brain can become addicted, regardless of their morals. The hope was that this realisation would reduce the stigma that surrounds addiction. Unfortunately, the hoped-for paradigm shift didn’t really happen, because most people interpreted this message as: people with addictions have deviant brains, and this view provides a reason to stigmatise them in a different way.

Read More » Guest Post: Must we throw out the brain with the bathwater? Marc Lewis on addiction

Guest Post: What’s wrong with obesity (and addiction)?

Written by Anke Snoek

Macquarie University

Many of us experience failures of self-control once in a while. These failures are often harmless, and may involve alcohol or food. Because we have experience with these failures of self-control, we think that something similar is going on in cases of addiction, or in people who can’t control their eating on a regular basis. Because we fail to exercise willpower over food or alcohol once in a while, we think that people who regularly fail to control their eating or substance use must simply be weak-willed. Just control yourself.

Read More » Guest Post: What’s wrong with obesity (and addiction)?

Stopping the innocent from pleading guilty


Written by Dr John Danaher.

Dr Danaher is a Lecturer in Law at NUI Galway. His research interests include neuroscience and law, human enhancement, and the ethics of artificial intelligence.

A version of this post was previously published here.

Somebody recently sent me a link to an article by Jed Rakoff entitled “Why Innocent People Plead Guilty”. Rakoff’s article is an indictment of the plea-bargaining system currently in operation in the US. Unsurprisingly, given its title, it argues that the current system of plea bargaining encourages innocent people to plead guilty, and that something must be done to prevent this from happening.

I recently published a paper addressing the same problem. The gist of my argument is that it may be possible to use a certain type of brain-based lie detection, the P300 Concealed Information Test (P300 CIT), to rectify some of the problems inherent in systems of plea bargaining. The word “possible” is important here. I don’t believe that the technology is currently ready to be used in this way, as further field testing needs to take place, but I don’t think the technology is as far away as some people might believe either.
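For readers unfamiliar with the test, its logic is that a crime-relevant ‘probe’ item tends to evoke a larger P300 deflection in subjects who recognise it than irrelevant items do. Below is a minimal, purely illustrative Python sketch of that contrast; the time window, the raw difference score, and the array layout are my assumptions, and real protocols use per-subject bootstrapped statistics rather than a simple subtraction.

```python
import numpy as np

def p300_amplitude(epoch: np.ndarray, sample_rate: int = 500) -> float:
    """Peak amplitude in the 300-600 ms post-stimulus window of one epoch."""
    start, stop = int(0.3 * sample_rate), int(0.6 * sample_rate)
    return float(epoch[start:stop].max())

def cit_contrast(probe_epochs: np.ndarray, irrelevant_epochs: np.ndarray) -> float:
    """Mean P300 to probe items minus mean P300 to irrelevant items.

    Each input is shaped (n_trials, n_samples). A markedly positive value
    suggests the subject recognises the crime-relevant detail.
    """
    probe = np.mean([p300_amplitude(e) for e in probe_epochs])
    irrelevant = np.mean([p300_amplitude(e) for e in irrelevant_epochs])
    return probe - irrelevant
```

The sketch shows only why the test targets recognition rather than lying: the measured quantity is a differential brain response to known versus unknown details, not a statement’s truth value.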

What I find interesting is that, despite this, there is considerable resistance to the use of the P300 CIT in academic and legal circles. Some of that resistance stems from unwarranted fealty to the status quo, and some stems from legitimate concerns about potential abuses of the technology (miscarriages of justice etc.). I try to overcome some of this resistance by suggesting that the P300 CIT might be better than other proposed methods for resolving existing abuses of power within the system. Hence my focus on plea-bargaining and the innocence problem.

Anyway, in what follows I’ll try to give a basic outline of my argument. As ever, for the detail, you’ll have to read the original paper.

Read More » Stopping the innocent from pleading guilty