
The ethics of mind-reading

Recent developments in neuroimaging have created concerns about the ethics of 'mind-reading'. A technology called functional magnetic resonance imaging (fMRI) has led to significant advances in the ability to determine what someone is thinking by monitoring their brain activity. Early research focused on determining very simple features of a person’s mental state, such as whether or not they were currently looking at a picture of a face. However, new research by John-Dylan Haynes of the Max Planck Institute has gone beyond this, allowing scientists to determine which action the subjects in their trial were intending to perform before they performed it (see a summary, or the paper itself). The task in question was to decide whether to add or subtract the two numbers which would later be shown. After being trained on a number of examples, the system could predict which of the two operations the subject would later perform. Furthermore, a study at Carnegie Mellon University showed that it was possible to determine which word from a given list a subject was thinking of, even if the system had not scanned that person’s brain before.

We must be careful to keep these results in perspective: at the moment they are only achievable because the systems constrain the possibilities to a very small space (two possible intentions or sixty possible words). However, it is quite likely that these or related techniques will scale to more possibilities, and a lot can potentially be achieved by looking for things that require only two possibilities. For example, if it can work out whether or not someone is lying, then we could potentially determine a secret number by asking a series of ‘higher or lower’ questions.
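The 'higher or lower' idea above is just binary search: each reliable yes/no answer halves the remaining possibilities, so even a two-outcome detector can extract a secret number in logarithmically few questions. The sketch below is purely illustrative (the function name and the range of 1024 possibilities are my own assumptions, not anything from the studies discussed):

```python
# A minimal sketch: if lie detection yields one reliable yes/no answer per
# question, a secret number among N possibilities can be recovered with at
# most ceil(log2(N)) 'is it lower than m?' questions.

def find_secret(secret, lo=0, hi=1024):
    """Recover `secret` in [lo, hi) via binary search on yes/no questions."""
    questions = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        questions += 1
        if secret < mid:   # stands in for a truthful 'yes, it is lower'
            hi = mid
        else:
            lo = mid
    return lo, questions

value, asked = find_secret(777)
# 1024 possibilities are resolved in log2(1024) = 10 questions
assert value == 777 and asked == 10
```

The point is only that the apparent limitation to 'two possibilities' is weaker than it sounds: repeated binary answers compose into arbitrarily specific disclosures.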

In an article for Forbes magazine, Paul Wolpe has suggested that due to fears of government oppression we draw a bright line around the use of these mind-reading technologies: 

‘The skull should be designated as a domain of absolute privacy. No one should be able to probe an individual’s mind against their will. … We should forgo the use of the technology under coercive circumstances even though using it may serve the public good.’

This is certainly an important question and, since there are already two companies striving to turn this into a commercial lie detection technology, it seems that now is the time to have the public discussion about creating such a legal right to mental privacy. 

However, at this point I would reluctantly disagree with Dr Wolpe. While there is a lot to be said for protecting personal privacy in this way (both for peace of mind, and as a barrier against potential totalitarian government abuse), guaranteed mental privacy might not be a luxury we can afford. For one thing, if the technology exists, then it will be easy enough for totalitarian governments to introduce it whenever they want, so it is not clear that forgoing its use now will help prevent future abuse at all. Also, while the consideration of a new technology often makes us focus on the rights of the individual on whom it will be used, we must also consider the wellbeing of others. For example, due to the imperfections of evidence, many people in prison did not commit the crimes for which they were convicted and many who are guilty walk free. Reliable lie detection (if possible) would help to rectify this injustice. Perhaps we can resolve this particular issue through voluntary lie detection (the innocent would presumably submit and the guilty would be more obvious by their refusal). However, if mere refusal is a strong sign of guilt, then the right to refuse might be a fairly useless right.

Given the tremendous social benefits of lie detection, I think we would need stronger arguments against it to warrant a ban.


3 Comments

  1. Toby: The last paragraph of your post is interesting. First, it suggests that theorizing about the permissibility of mind-reading is unnecessary, because the state can do whatever it wants within the limits of its power. Ethical arguments work weakly at best against claims under the rubric “national security”. On the other hand, you propose that there are benefits to persons other than the one whose mind is being read, and present the example of coerced lie detection. But don’t we all accept lie detection efforts in the context of law enforcement? The interesting problem would come up, in this connection, in the context of a trial, in which each witness (not merely the plaintiff, or defendant or complaining witness in criminal matters) is compelled to accept mind-reading lie detection. Here, privacy is weighed against the interests of the state in the legitimacy of trial outcomes. Witnesses have no hand in such a trial; they are compelled to attend to help the court decide. Yet, we already accept subjection of such witnesses to cross-examination which can reveal secrets and cause embarrassment and emotional pain.

    The unspoken confusion here, by the way, is the distinction between truthfulness and accuracy: the former is based on one’s belief in the accuracy of one’s own statement, while the latter depends on the accuracy of the recollections that underlie that statement, the clarity of the statement and the statement’s completeness. Mind-reading cannot test accuracy. How does that cut with respect to the competing interests in privacy and a true report of what occurred?

    I suspect that the balance will always shift in favor of mind-reading and disclosure when we need information about events in order to justify official (e.g. judicial) action. The coercion involved would be held irrelevant unless it was physical and painful. That’s because most people, who aren’t threatened at the moment with mind-reading, favor public safety over privacy.

  2. Dennis,

    I wouldn’t say that theorizing about the ethics of mind-reading is unnecessary, since we still want to know whether or not to embrace these technologies within a democratic state. It is just that the argument opening the door to mind-reading in totalitarian states is weak. Note that such an argument is stronger in the case of surveillance, as it takes much longer to get that infrastructure in place and it is easier to utilise the pre-existing infrastructure in devious ways without people noticing.

    I’m not at all sure that everyone does agree with all lie detection efforts in the context of the law (for example, think of a continuum of interrogation strengths), which is why I wanted to at least point out the loss in terms of injustice that we have by not embracing it. Your points about its use on third parties and about accuracy versus truthfulness are good ones.

  3. A very nice question and post. Incidentally, as a legal position, I think that the use of mind-scanning would be entirely consistent with the sorts of legal principles and practices we have now. Consequently, if there is something wrong with mind-scanning in general in legal proceedings, then there is probably something wrong with other practices we have presently (such as compelled provision of DNA).

    Riffing shamelessly off of your post, I launch into these legal aspects of mind-scanning on my blog: http://michaelvyoung.blogspot.com/2009/10/privacy-and-future-law-of-mind-scanning.html.
