In recent studies, neuroscientists have been able to use brain imaging to reliably predict inner states such as lying or intention. In a groundbreaking study published in a recent issue of Nature (and briefly summarised here, here and here), Kay and his colleagues used functional magnetic resonance imaging (fMRI) to make predictions about what subjects were seeing. Drawing on a complex mathematical model based on decades of research into the human visual cortex, they used measured brain activity to estimate which greyscale natural image the subject was seeing at a given point in time. This goes beyond prior attempts at ‘brain reading’ in that the analysis did not merely apply generic statistical signal-processing methods to neural responses to simple artificial stimuli, but employed data about the early stages of visual processing to develop a model that was then able to accurately predict which of a large number of novel and complex natural images was being seen by the subject.
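For readers curious about the mechanics, here is a toy sketch of the identification idea in Python. To be clear, this is not the authors’ code, and the random numbers below merely stand in for the Gabor-wavelet image features and voxel responses the study actually used. The idea: fit an encoding model that predicts each voxel’s response from image features, then identify a novel image by asking which candidate image’s predicted activity pattern best matches the measured one.

```python
# Illustrative sketch of identification-style fMRI decoding (not the
# authors' pipeline). Random data stands in for real image features
# and voxel responses.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features, n_voxels = 200, 50, 64, 30

# Stand-ins for image features (e.g. Gabor-wavelet outputs) and the
# fMRI voxel responses they evoke, with measurement noise added.
X_train = rng.standard_normal((n_train, n_features))
W_true = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_voxels))

# Fit a linear encoding model per voxel via regularised least squares.
ridge = 1.0
W = np.linalg.solve(X_train.T @ X_train + ridge * np.eye(n_features),
                    X_train.T @ Y_train)

# Novel images the model has never seen, and the activity they evoke.
X_test = rng.standard_normal((n_test, n_features))
Y_test = X_test @ W_true + 0.5 * rng.standard_normal((n_test, n_voxels))
Y_pred = X_test @ W  # predicted activity pattern for every candidate image

def identify(measured, predictions):
    """Return the index of the candidate image whose predicted voxel
    pattern correlates best with the measured pattern."""
    corrs = [np.corrcoef(measured, p)[0, 1] for p in predictions]
    return int(np.argmax(corrs))

correct = sum(identify(Y_test[i], Y_pred) == i for i in range(n_test))
print(f"identified {correct}/{n_test} novel images correctly")
```

The point of the sketch is simply that nothing mysterious is going on: given a good enough model of how images drive voxel responses, ‘reading’ which image was seen reduces to pattern matching against the model’s predictions.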
Discussions of such advances invariably include remarks about ‘new threats to privacy’, or, as one author of the study puts it, "down the road, ethical and privacy issues must be dealt with." Of course this major scientific advance does not, in itself, raise any special ethical issues. We are still far from the day when ‘brain reading’ technology will allow others to find out what we are thinking or feeling, even in rudimentary ways. But although this day is still far ahead, the speed at which the field is advancing suggests it is not as far as many think. So it is perhaps worth our while to start thinking about these ‘threats to privacy’. A world in which brain reading was possible would be a very different world from ours, but as with other examples of radical technological change, it is not easy to articulate the sense in which such technology would introduce entirely new issues about privacy. Is peering into your mind really different from reading your diary, copying your emails or tapping your phone calls? Aren’t there already plenty of familiar, humdrum ways in which we can find out what people think or feel? (We certainly don’t need fMRI to have a pretty good idea of what another person is seeing!)
These are the kinds of questions that still need to be answered. But thinking in this way is sometimes the wrong way to approach the ethical significance of new technology. Often, the significance of such technology lies not in the way that it introduces genuinely new moral problems, but in the way that, over time, it changes our form of life and, consequently, what we take to matter–it may lie, we might say, not in morality but in ethics. So the important question might not be ‘Should we forbid using brain scanners to peer into another’s mind?’ but a question that is far harder to answer using the tools of traditional ethics: ‘What would the use of such scanners mean to human life?’
You stated:
"what we take to matter–it may lie, we might say, not in morality but in ethics. So the important question might not be ‘Should we forbid using brain scanners to peer into another’s mind?’ but a question that is far harder to answer using the tools of traditional ethics: ‘What would the use of such scanners mean to human life?’"
Is your practical distinction between ethics and morality the distinction between "what should I do, given the possible consequences of this action?" and "what should the social system forbid, given the possible consequences?"? How does this practical difference work, and what are the (practical) consequences of calling the question one of morality rather than ethics?
Guy, thinking for a moment about criminal legal proceedings: no man should be forced to incriminate himself out of his own mouth. For this reason criminal suspects in the U.K. used to have a right to silence: we used to permit suspects to refuse to answer questions without such refusal influencing their defence (in the U.S. this right is still enshrined in law). We already have some neuroscientific ways of detecting whether a scene is familiar to a person, and so could use them to determine whether a crime scene is familiar to a suspect. I expect that governments will be strongly tempted to force suspects to undergo brain scans to reveal this kind of evidence. In this case I think there is a clear ethical distinction between a mandated search of your house to collect evidence against you and a mandated search of your brain for the same purpose. It would be a further step in an oppressive direction, similar to the step taken when the U.K. gave up the right to silence.
One might think that the right to silence is grounded in a certain respect for the integrity of a person. How that integrity is to be respected may vary from circumstance to circumstance, so it may be that the transgression of brain reading is not essentially different from the transgression of reading private letters. Nevertheless, when it comes to letters, if we want to eliminate all risk of revelation we need not write. When it comes to our brains we have no choice about whether our experience is written there, and it may be this that motivates our inclination to think brain reading a greater invasion of privacy. For that reason brains may deserve greater privacy protections than letters.
Dennis (and apologies for the delay), the distinction you mention is certainly relevant, but I had a different one in mind. One way in which ‘morality’ and ‘ethics’ are sometimes distinguished (a distinction made in the writings of the late British philosopher Bernard Williams) is (roughly) between questions that have to do with what it is morally right or forbidden to do — both your suggestions would fall under this — and questions about how we ought to live, about what would make our lives flourish or go better. I was suggesting that radical technological change may, over time, change the way we think about the latter kind of question, and that this might ultimately be more important than the more immediate moral questions (though as our ‘form of life’ changes, new and unexpected moral issues will also arise).
Nick, thanks for these remarks. I was certainly not suggesting that brain imaging won’t raise new moral questions. You give some suggestions, and I’ve written things along similar lines elsewhere. The extensive use of the internet also raises new issues of privacy, issues that need to be dealt with. But I think that the way the internet is transforming the way we live — changes difficult to pinpoint or articulate very precisely — might be ethically more important than these moral questions.
The question of the right to silence is an interesting one. I suppose that, strictly speaking, finding things out using brain imaging doesn’t violate this right, though of course this wouldn’t make anyone happy. I suspect our intuitions here are driven not so much by a supposed right to silence as by the distinction between our right to our property and what some call ‘self-ownership’, something we take to be far stronger.
To what extent are our intuitions here influenced by the physical contact that scanning a brain requires? Would they be as strong if we found out a suspect’s thoughts through, say, telepathy?
I think you’re right that our intuitions probably *are* influenced by the physicality of brain scanning. But insofar as I was thinking that the ground of concern was the integrity of a person, which is also bound up with self-ownership, telepathy would be no different.
On the ethical side, brain reading would offer new ways of proving commitment and new ways of proving sincerity, and for those reasons would perhaps create new ways of relating. A crude example: it might allow the formation of mutually beneficial contracts not currently worth making because of uncertainty between the parties (a risk of default combined with little ability to enforce).
Guy, thanks for the interesting post.
The German Supreme Court (Bundesgerichtshof) ruled in 1954 that the use of a polygraph violates procedural law and the constitution because "the unconscious of the person interviewed answers without any chance of voluntary control". The court saw a violation of the statutory right to silence as well as of human dignity (Article 1(1) of the German constitution) and personality rights. Admittedly, the judges were a little naive about the prospects of polygraphy at the time (there is a much better, empirically informed decision from 1998), yet it is interesting to ponder whether this reasoning may apply to some forms of "brain reading".
Can you give an example of a way in which such technology (brain scans or the internet) changes the human way of life that may be ethically more important than fears of "brain reading"?
Thanks Stephan,
This is an extremely intriguing court decision. The bit about the unconscious would be relevant only for some uses of ‘brain reading’, whereas the weight placed on consent needs further explanation; consent is, after all, not given such weight with respect to other important information about oneself. There is the interesting question, raised earlier here by Nick Shackel, of the relation between such privacy protection and the right to remain silent. I am not yet sure what to say about this.
My (admittedly vague) point in the post wasn’t that brain reading wouldn’t have ethical implications, but that in asking about such implications we would do better not to focus exclusively on the (in itself important) straightforward question of moral wrongness (or legal permissibility). The German court decision would leave space for people to consent to ‘brain reading’. And the possibility of bypassing behaviour as a guide to people’s inner states (assuming the use of such devices is not as enormously cumbersome and costly as it is at the moment) could have numerous ramifications for the way people live — most clearly with respect to issues of trust. It might be that, even if brain reading were morally permissible (at least in certain circumstances), these ramifications would ultimately make it a very bad thing. (As an analogy, the most interesting questions about the internet are not whether users should enjoy this or that privacy protection.)