Written by Stephen Rainey
Brain-machine interfaces (BMIs), or brain-computer interfaces (BCIs), are technologies controlled directly by the brain. They are increasingly well known in therapeutic contexts. We have probably all seen the remarkable advances in prosthetic limbs that can be controlled directly by the brain. Brain-controlled legs, arms, and hands allow natural-seeming mobility to be restored where limbs have been lost. Neuroprosthetic devices connected directly to the brain allow communication to be restored in cases where linguistic ability is impaired or missing.
It is often said that such devices are controlled ‘by thoughts’. This isn’t strictly true, as it is the brain that the devices read, not the mind. In a sense, unnatural patterns of neural activity must be realised to trigger and control devices. Producing the patterns is a learned behaviour – the brain is put to use by the device owner in order to operate it. This distinction between thought-reading and brain-reading might have important consequences for some conceivable scenarios. To think these through, we’ll indulge in a little bit of ‘science fiction prototyping’.
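The brain-reading point above can be made concrete with a toy sketch. A BCI never sees an intention; it only classifies measured neural signals. The following is purely illustrative, under assumed names: the feature keys, threshold, and commands are invented, and real decoders are trained classifiers rather than hand-set thresholds.

```python
def decode_command(features, threshold=0.6):
    """Map a neural feature vector to a device command, or None if no
    recognisable pattern is present. Illustrative only: the feature
    names and threshold are hypothetical."""
    # The user learns to produce distinguishable activity patterns;
    # the device only ever sees these numbers, never a 'thought'.
    if features.get("motor_imagery_left", 0.0) > threshold:
        return "TURN_LEFT"
    if features.get("motor_imagery_right", 0.0) > threshold:
        return "TURN_RIGHT"
    return None  # pattern too weak or ambiguous: no command issued

print(decode_command({"motor_imagery_left": 0.8}))  # TURN_LEFT
print(decode_command({"motor_imagery_left": 0.3}))  # None
```

The point of the sketch is that producing a pattern strong and clean enough to cross the threshold is the learned behaviour; a weak or ambiguous pattern simply fails to control the device.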
Science fiction prototyping is a way to project reasonable expectations for technologies into a near future so that probable consequences of those technologies can be anticipated and, if necessary, addressed in the present. In terms of BMIs and BCIs, we can easily imagine that, in a near-future scenario, they will migrate from therapeutic settings into more everyday ones. This is already the case, to some extent, with some technologies. But we can imagine this being extended further.
Rather than bio-mimetic prostheses, replacement limbs, and so on, we can predict that technologies superior to the human body will be developed. Controlled by the brains of users, these enhancements will amount to extensions of the human body, and allow greater projection of human will and intentions in the world. We might imagine a cohort of brain-controlled robots carrying out mundane tasks around the home, or buying groceries and so forth, all while the user gets on with something altogether more edifying (or does nothing at all but trigger and control their bots). Maybe a highly skilled, and well-practised, user could control legions of such bots, each carrying out separate tasks.
Before getting too carried away with this line of thought, it’s probably worth getting to the point. The issue worth looking at concerns what happens when things go wrong. It’s one thing to imagine someone sending out a neuro-controlled assassin-bot to kill a rival. Regardless of the unusual route taken, this would be a pretty simple case of causing harm. It would be akin to someone simply assassinating their rival with their own hands. However, it’s another thing to consider how sloppily framing the goal for a bot, such that it ends up causing harm, ought to be parsed.
Let’s imagine I send my neuro-controlled grocery-bot out to get food for the evening. Through insufficient foresight, I fail to control the bot well enough and it mows down 40 people at the deli counter as it pursues my goals. If we think about this in terms of legal culpability, I, as the neuro-controller, have neither the mens rea nor the actus reus: I do not intend to harm the people, nor do I physically carry out the harm. Yet the cause of the harm was indeed under my control. My neural activity was, or at least ought to have been, steering the bot’s activity. My neural patterns were insufficient to avoid the harm occurring.
The user of a neuro-controlled bot is blameworthy for the production of non-optimally instrumental neural patterns of activity. This is an unusual way to think of blaming people, but it seems appropriate in cases of BMIs gone awry. It might be the case that BMIs, and such technologies, ought to be thought of in similar ways to driving. If I am inattentive at the wheel, I’ll get points on my licence, I might be required to attend extra courses on road safety, I may even have my licence revoked and serve time in prison. In the sci-fi scenario, the sloppy brain-work of the bot-controller leads to harm. The user is due some negative consequences for that.
Back in the present, it might be useful to think about BMIs and the like very explicitly as instruments driven by the use of the brain. In terms of driving, if I physically move my hands such that they steer the car in a way that violates the rules of the road, or harms somebody, that physical act is what constitutes the wrong. This is what ought to be transplanted into neuro-control cases – the physical manifestation of neural activity, in being learned and practised, is an act just like steering the car with my hands. We can’t easily observe the neural case, as it’s under the skull, but it is nonetheless a physical act.
While it probably does seem unusual to think of someone as using their brain in this instrumental sense, it will likely serve us better in conceiving of these kinds of technologies. Talk of ‘thought controlled’ devices, on the other hand, misleads. If we talk in that way, it seems harder to make sense of blaming me for the harm my bot causes at the deli. My thought was ‘get some prosciutto’, but the act, the blameworthy part, was a realisation of neural activity that didn’t provide enough robot control to avoid the other customers.
Thank you, Stephen, for this. I don’t share your mind-body dualism. But I agree that you are responsible for the results of your robot’s actions, in the case described.
Doesn’t this imply that dualism is not necessary? (I could argue that it’s false, but as Gilbert Ryle has already done a pretty good job, I won’t try – I’ll just leave the weaker case.)
Hello Anthony, and thank you for your thoughts.
I think the point that dualism is unnecessary is one worth making. In fact (and maybe despite some phrasing in the blog post) I think the thrust of my piece can be made while being agnostic about dualism, materialism, eliminativism, or any shade between.
The interesting thing I’m pointing to is the similarity between things like bodily movements – which we know how to deal with in terms of responsibility-ascriptions – and neural activity, where this controls devices. We’re maybe not so sure how to think about ascribing responsibility in the latter case, as the brain seems an obscure object to think of as an instrument. Nevertheless, technologies will challenge this view.
I think it is interesting that we would be unlikely to think of discussing dualism were we to talk about me using my hand to do something, but it seems dualism is bound to come up when we talk about me using my brain to do something. If we (collectively, not just you and I) can get clearer on why this is, prompted by technological advance, then that would be a good thing.
Thank you, Stephen, for taking the time to reply.
When you claim that « dualism is bound to come up when we talk about me using my brain to do something » isn’t that tautological ? That is, by using the expression « me using my brain » you are already in a dualistic frame of mind in which « me » and « my brain » are two distinct things.
I’m not sure, either, that talking of « unnatural patterns of neural activity » helps your case : what does unnatural mean here ? Doesn’t riding a bicycle or playing a violin equally entail unnatural patterns ?
I agree that we need to think about the ethics of BMIs, and the responsibilities for their actions, but still doubt that discussing the subject in terms of me using my brain will clarify the issues. Besides, isn’t it the case that we are responsible for actions committed when we do not « use our brains » ?
Hello again,
I’m not sure that a linguistic distinction implies an ontological one, rather than merely the use of different vocabularies. We can talk in different ways about the same thing without it thereby meaning there are two things. Think ‘semantic ascent’. The ‘using my brain’ idiom is, here, supposed to highlight that talking about the brain as an instrument seems peculiar, but may be necessary in neuro-controlled devices.
On ‘unnaturalness’: yes, all of your examples are unnatural in the same sense as I require. They don’t merely arise, but are learned, and can be done well or badly, and can be criticised or praised accordingly. Again, the point is to highlight the seeming peculiarity of praising or blaming someone for their neural activity. Nevertheless, that’s what the realities of neuro-controlled devices may require.
Thanks for your reply, Stephen !
Perhaps the difference between us, if one really exists, is that you find that it seems peculiar to praise or blame someone for their neural activity, and that I materialistically believe that all our actions are the result of neural activity. So I don’t find it at all peculiar.
Interesting that you cite a concept of Quine, though – I would have guessed that he would use his particular version of Occam’s razor to question the need to talk of two categories to discuss the ethics of BMIs if one alone would do.