Transparent brains: detecting preferences with infrared light

Researchers at the University of Toronto have demonstrated that they can decode which of two drinks a test subject prefers by scanning their brains with infrared light. (Original paper here.) The intention is to develop better brain-computer interfaces for severely disabled people, but there are other obvious applications for non-invasive methods of detecting what people want. No doubt neuromarketers are drooling over the possibilities. But the threat to mental privacy might be a smaller problem than the threat of mistaken preferences.

Transparent skulls

We are surprisingly translucent to near-infrared light (try putting a finger over a small red light like an LED; the red glow shines through quite well): this is why it is actually possible to scan through the skull. There is also a slight difference in light absorption between blood with bound oxygen and blood without. This is how pulse oximeters work as medical monitors: they shine red and infrared light through a finger and detect how the absorption changes with the pulse. If there is too little oxygen or no pulse, the alarm goes off – the patient is in trouble (or has lost the device). Similarly, light can be shone through the skull to detect the oxygen differences in the brain as we think. The method is somewhat limited to the outermost parts of the brain, but luckily some preference-sensitive cortex is not hidden away in a fissure.
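The pulse-oximeter principle above can be sketched in a few lines: the pulsatile (AC) and steady (DC) components of the absorbed red and infrared light are combined into the classic "ratio of ratios", which is then mapped to an oxygen saturation. The calibration constants below are illustrative assumptions, not values from any real device.

```python
# Sketch of pulse oximetry: oxygenated and deoxygenated haemoglobin absorb
# red vs. infrared light differently, and the pulse modulates the signal.

def ratio_of_ratios(red_ac, red_dc, ir_ac, ir_dc):
    """The 'ratio of ratios' R computed by pulse oximeters."""
    return (red_ac / red_dc) / (ir_ac / ir_dc)

def spo2_estimate(r, a=110.0, b=25.0):
    """Illustrative linear calibration SpO2 ~ a - b*R (assumed constants)."""
    return max(0.0, min(100.0, a - b * r))

# A healthy finger typically gives R around 0.5, i.e. a high saturation.
r = ratio_of_ratios(red_ac=0.02, red_dc=1.0, ir_ac=0.04, ir_dc=1.0)
print(round(spo2_estimate(r), 1))  # → 97.5
```

Real devices use empirically fitted (often nonlinear) calibration curves; the linear one here just shows the shape of the computation.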

During the test, signals were recorded as test subjects evaluated different drinks shown as images. Afterwards, software was trained on this data to predict their evaluations from their brain signals, reaching about 80% accuracy. While this sounds impressive, it means that given a pair of drinks the software would be right 8 times out of 10 rather than 5 times out of 10. Getting the wrong drink one fifth of the time would be annoying in the real world.
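The overall shape of such a decoding pipeline – record trials, train on some, measure accuracy on the held-out rest – can be shown on synthetic data. The fake "NIRS" features and the nearest-centroid classifier below are illustrative assumptions on my part, not the method used in the actual study.

```python
import random

random.seed(0)

def make_trial(preferred):
    # Two features whose means differ slightly between the two preference
    # classes, buried in Gaussian noise (a stand-in for real NIRS signals).
    base = (1.0, -1.0) if preferred else (-1.0, 1.0)
    return [b + random.gauss(0, 1.5) for b in base], preferred

data = [make_trial(i % 2 == 0) for i in range(200)]
train, test = data[:150], data[150:]

def centroid(trials, label):
    xs = [f for f, y in trials if y == label]
    return [sum(col) / len(xs) for col in zip(*xs)]

c_pos, c_neg = centroid(train, True), centroid(train, False)

def classify(features):
    # Assign the trial to whichever class centroid is closer.
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
    return dist(c_pos) < dist(c_neg)

accuracy = sum(classify(f) == y for f, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.0%}")  # well above the 50% chance level
```

Even this crude classifier lands far above chance on cleanly separable synthetic data; the study's ~80% on real brain signals is good but, as noted, still wrong one time in five.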

Opaque preferences

Different subjects also evaluated the drinks differently: some imagined the taste, others situations where they enjoyed or did not enjoy the drink. Since the software interpreting the signals was individually tuned, this did not matter, according to the researchers. However, it brings up the real problem: exactly what is being measured here?

The problem is that we do not really know what these preferences are. The subjects were saying how highly they ranked the drinks, but that is very different from what they truly wanted. I like pistachio ice cream more than strawberry ice cream… some of the time. It depends on whether I'm full or cold, it might depend on the colour, it might be partially because it is socially more "stylish" than strawberry and slightly more expensive (we have many such biases). Tell me that it is organic and I might decline because that separate concept triggers negative ideological associations, or just a general sense of obstinacy. So what are my preferences for the ice cream? Are they encoded in the current state of my medial frontal cortex? In the relation of that area to various other areas, like the orbitofrontal cortex (which we are not going to see)? Are they the stable average of these things, how often I actually buy the ice cream, how much I think I enjoy it, or how much I actually enjoy it when I get it?

This is everyday fare for philosophers of mind and many cognitive neuroscientists. Most concepts such as "rationality", "intention" or "preference" are much, much harder to pin down than most people think. Sometimes these nuances matter not just to philosophers: saying one likes brand X is different from buying brand X – and from actually liking it.

Now, these subtleties are not going to matter to the neuromarketers (since they are mostly in the business of selling neuromarketing). Or to the people playing with NIR in a few years as a parlour game. Or the politician who wants a paedophile test when hiring teachers. Or the bigot who wants to check whether the people he forces scanning onto have the "right" or "wrong" ideas. They will likely use some folk psychology idea of preferences, mixed with a bit of neurohype, and then base their decisions on that. They might be making somewhat more accurate decisions than if they had been using a dowsing rod, but they will likely have systematic biases due to the test setup and a misunderstanding of what preference "is". And once something is apparently objectively measured (and has neuroscience in it!) it tends to be taken far more seriously than it should. Witness current law enforcement use of "brain fingerprinting" or voice stress analysis, methods of unproven or even disproved reliability.

So I'm not worried about the end of mental privacy. I'm worried about mistaking measurements of something for the reality of something else. It is usually better to know that one is underinformed than to think that one is well informed.

Transparent Minds

In the long run, assuming technology indistinguishable from magic, it is an interesting question whether it would be a good or bad thing if we all could truthfully know each other's preferences (for whatever definition of preferences we choose). I think it would by and large be useful, in the same sense that more information is always good for Bayesian decision making and economic efficiency. But in real life extra information often imposes computational costs and uncertainties, and sometimes shifts power balances.
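The Bayesian point – that an ideal decision maker is never made worse off by extra information – can be checked on a toy preference-reading scenario. All numbers below (the prior, the scanner's accuracy, the 1/0 utilities for serving the right or wrong drink) are illustrative assumptions.

```python
# Compare expected utility of serving a drink with and without a noisy
# preference scanner. Utilities are 1 for the right drink, 0 for the wrong
# one, so expected utility equals the probability of choosing correctly.

prior_a = 0.6    # prior probability the person prefers drink A
accuracy = 0.8   # probability the scanner reports the true preference

# Without the scanner: serve the a-priori more likely drink.
eu_no_info = max(prior_a, 1 - prior_a)

# With the scanner: for each possible reading, update the belief by
# Bayes' rule and serve whichever drink is then more likely preferred.
p_read_a = prior_a * accuracy + (1 - prior_a) * (1 - accuracy)
post_a_given_a = prior_a * accuracy / p_read_a
post_a_given_b = prior_a * (1 - accuracy) / (1 - p_read_a)
eu_info = (p_read_a * max(post_a_given_a, 1 - post_a_given_a)
           + (1 - p_read_a) * max(post_a_given_b, 1 - post_a_given_b))

print(eu_no_info, round(eu_info, 2))  # → 0.6 0.8
assert eu_info >= eu_no_info  # information never lowers expected utility
```

The caveats in the paragraph above still apply: the guarantee holds for an idealized Bayesian agent, and says nothing about computation costs or what the information does to power balances.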

Also, we need to figure out a humane and just way of dealing with people with "bad" preferences – if someone is a fine person in practice but happens to prefer something truly despicable that they under normal circumstances will never ever do (e.g. they might be a horrific sadist when they think they could get away with it with 100% certainty), how should society treat this information? The person might be partially or wholly responsible for acquiring the preference (say, by reading too much de Sade in their literature studies), or just happen to have it by accident. As a colleague said, we had better figure out how to avoid putting too many people into the "monster" category, where they lack the rights we still give to sick or criminal people. There is a problem when the tools for detection and diagnosis outrun the tools for correction, not to mention social tolerance and coping mechanisms. In the transparent society we will need enormous tolerance to avoid imposing extreme conformism.

This concern is even more important when the detection system is likely biased or unreliable. It is much easier to start witch-hunts than to stop them.
