Computer consciousness and ethics
Nature, the prestigious international science journal, often publishes short science fiction stories in a column called “Futures.” According to Nature, “Featuring short stories from established authors and those just beginning their writing career, Futures presents an eclectic view of what may come to pass.” (see here)
As many philosophers and ethicists have recognized, eclectic views of what may come to pass can be philosophically and ethically useful. They may, for example, suggest possible future scenarios that raise difficult ethical questions – questions we ought to begin to sort through now. They may also stimulate insight into important ethical and conceptual questions at the heart of current debates. Consider, for example, a story recently published by Eric Schwitzgebel and R. Scott Bakker. I won’t spoil the story (do read it here), but I want to lift an element of the plot out of context, so I need to say something about it. It involves the creation of consciousness on a computer. More specifically, it involves the generation of a whole society of interacting conscious agents – people like you or me, living in a world they experience, pursuing goals and relationships and all the rest.
The actual possibility of recreating conscious experience via a computer program raises a number of interesting ethical issues: What do the creators owe those created? Should the creators punish those created for their transgressions? Would it be permissible to create such entities at all? And does the answer to that last question depend on the level of well-being computer-based minds would enjoy (but what is well-being, after all)? I’m not going to reflect on any of those here. Instead, I want to look at some conceptual issues raised by this kind of story.
Suppose you doubt that phenomenal consciousness can be recreated on a computer. (A quick word from the philosophy of mind: phenomenal consciousness is often said to be indefinable. It is the kind of consciousness that makes red look red, pain feel painful, cheese taste good. Philosophers often say that if you are conscious, there is ‘something it is like’ to be you.) Even so, you should probably admit that, given a fancy and powerful enough computer, a subject with human-like cognitive sophistication could be created (one that passes the Turing Test). Since it has human-like cognitive sophistication, it seems safe to assume this thing would be aware of itself as a persisting entity – as a subject with internal states that operate very much like our perceptual states and propositional attitudes (our intentions, desires, and beliefs). This thing, let us assume, would have perceptual states, intentions, desires, beliefs, and desires about its intentions, beliefs about its desires, and so on. This thing would be self-conscious.
So now we seem to be conceiving of a computer-based mind that is self-conscious and cognitively sophisticated, but that is not phenomenally conscious. What is the moral status of such a thing?
This question is difficult because standard accounts of moral status begin by assuming that human beings – or at the very least, healthy human adults – are the flag-bearers of moral status. Debates about moral status are usually debates about the kind of moral status (if any) beings such as dogs, pigs, foetuses, and embryos might have. But of course healthy human adults are both phenomenally conscious and self-conscious. Once these two properties are separated, we face a wide range of difficult conceptual questions. For example:
Must a subject be phenomenally conscious in order to have moral status? Is self-consciousness by itself sufficient for moral status? If moral status (as some think) comes in degrees, would the addition of phenomenal consciousness to a self-conscious subject add any degree of moral status (and why)? Given that self-consciousness and cognitive sophistication come in degrees, and given that a subject with greater self-consciousness and greater cognitive sophistication is clearly possible, is it also possible that there could be a subject with higher moral status than a human being (on this issue, see Douglas (2013))?
Further questions can be generated by driving at the heart of typical assumptions about these notions. For example, it is typically assumed that phenomenal consciousness is somehow morally relevant. I share an intuition to this effect. It seems wrong, for example, to cause a subject unnecessary pain, and the phenomenal character of this pain seems to be part of the reason why. Is this right? What about causing a subject unnecessary boredom? Boredom is quite different from pain, but it is not a good thing. Is the reason that causing boredom is wrong the same as (or similar to) the reason that causing pain is wrong? But consider a very different case, in which you bring about a state of affairs that a computer-based self-conscious subject – one that is not phenomenally conscious – does not like. This seems bad as well, but the explanation of why it is bad can have nothing to do with the experiences you cause (since this subject has no experiences). Does the intuition that putting a self-conscious mind into a state of dislike is bad have anything to do with the intuition that putting a phenomenally conscious mind into a state of pain is bad?
Note: for an enlightening discussion of some closely related issues, see Bostrom and Yudkowsky (forthcoming).
Nick Bostrom and Eliezer Yudkowsky (forthcoming). The Ethics of Artificial Intelligence. In Cambridge Handbook of Artificial Intelligence, eds. William Ramsey & Keith Frankish (Cambridge University Press).
Tom Douglas (2013). Human Enhancement and Supra-Personal Moral Status. Philosophical Studies 162(3): 473-497.