Professor Henry Markram, Director of the Blue Brain Project in Switzerland, has told a conference in Oxford that an artificial human brain is achievable within a decade.
What would count as a ‘human brain’ is debatable, of course, but the prospect of an artificial grounding for cognitive and other mental functions raises many fundamental philosophical questions. Some of these are ethical, but the ethical ones themselves tend to rest on issues of metaphysics, in particular those concerning the nature of persons and their identity over time.
Consider the following scenario, adapted from Derek Parfit’s seminal discussion of personal identity in his 1984 book Reasons and Persons. Imagine that at some point in the future you begin to suffer from a debilitating form of mental illness. Because of the amazing advances made by the Blue Brain and other similar projects, it is now possible for scientists to ‘copy’ the functional aspects of your brain (your ‘mind’, one might say) onto a computer. They will then ‘delete’ the dysfunctional elements which underlie your mental illness, and copy the rest of the mental ‘program’ back to your brain. The copying across to the computer usually results in the ‘emptying’ of the brain, so that while the scientists are carrying out the remedial process, your brain is, in effect, like an empty hard disk.
Many people, confronted by cases like this, believe that it would clearly be rational for one to undergo the treatment. One will emerge from it the same person, but cured of one’s mental illness. But now imagine a further twist. The copying takes place as normal, but this time the mental states correlated with your brain remain in place: your brain is not ‘emptied’. The scientists tell you two further things. First, the bad news. Because of some defect in the tissues of your brain, it will cease to function within a few minutes. Second, the potentially good news. Another transfer has failed: the scientists have lost all the data from that patient’s brain, leaving them with a ‘spare’, empty brain. They plan to copy your data into that brain instead, so your own mental life can continue. So, they say, you have nothing to worry about.
But how will you – that is, the individual with the brain about to malfunction – feel about the fact that ‘your’ mental life will continue in some other brain? Probably not entirely satisfied. This at least raises the possibility that, in the ‘ordinary’ cases of transference, the person who emerges from the process is not the same as the one who entered it.
This possibility (brain transplants, and perhaps brain improvement through partial transplants or the sale of rebuilt brains) ends forever the moral difference between man and machine, and justifies Asimov’s suggestion in one of his late short stories that the end of creation is not man, but the robot. This has serious ethical consequences, because a good part of liberal dogma about liberty and autonomy (even if one is a determinist) depends on the peculiar value of the human individual. If that value doesn’t depend on a soul or a loving god, it does depend on OUR special claim to others’ attention, concern, respect and, perhaps, affection. Why? Because what you do to X (a human) you do to me, and when I degrade X (a human) I degrade myself. Now each of us is improvable by mechanical means, and the marketplace determines what one is or will be. Soon the price of improvements will fall, as more demand calls up more supply, and each of us can be rebuilt to be smarter, more creative, and so on, to please some consumer of us.
When I discuss the possibility of brain emulation with people, they tend to fall into two groups. One group is convinced that identity is somehow tied to the body or to uninterrupted conscious continuity; the other is convinced that it is linked to the “pattern” making up the mind, and hence can be copied and transmitted.
The intriguing thing is that while the pattern-identity people are generally in favour of brain emulation, enhancing the emulation as in John’s scenario would seem to put their identity at risk. Yet they tend to be rather accepting of this possibility. The continuity people are against any brain-copying, but they also tend to be against even gradual, continuous changes in the brain to fix something. Clearly most people do not hold very consistent views on identity and on which traits make them themselves.
My own functionalist answer is that I regard all minds that are sufficiently similar in function to mine as sharing identity with me. Hence I am, at least in theory (we’ll see if it holds up when I’m in the scanning room for real), quite willing to give up the current instance of me for a better me (or mes) in the future. But I also think the category of possible minds close enough to be accepted as me is pretty big, allowing at least some enhancements.