
Computer consciousness and ethics

Nature, the prestigious international science journal, often publishes short science fiction stories in a column called “Futures.” According to Nature, “Featuring short stories from established authors and those just beginning their writing career, Futures presents an eclectic view of what may come to pass.” (see here)

As many philosophers and ethicists have recognized, eclectic views of what may come to pass can be philosophically and ethically useful. They may, for example, suggest possible future scenarios that raise difficult ethical questions – questions we ought to begin to sort through now. They may also stimulate insight into important ethical and conceptual questions at the heart of current ethical debates. Consider, for example, a story recently published by Eric Schwitzgebel and R. Scott Bakker. I won’t spoil the story (do read it here), but I want to lift an element of the plot out of context, so I need to say something about it. It involves the creation of consciousness on a computer. More specifically, it involves the generation of a whole society of interacting conscious agents – people like you or me, living in a world they experience, pursuing goals and relationships and all the rest.

The actual possibility of recreating conscious experience via a computer program raises a number of interesting ethical issues. What do the creators owe those created? Should the creators punish those created for their transgressions? Would it be permissible to create such entities at all? Does the answer to that last question depend on the level of well-being computer-based minds would enjoy? (And what is well-being, after all?) I’m not going to reflect on any of those here. Instead, I want to look at some conceptual issues raised by this kind of story.

Suppose you doubt that phenomenal consciousness can be recreated on a computer. (Quick word from the philosophy of mind: phenomenal consciousness is often said to be indefinable. It is the kind of consciousness that makes red look red, pain feel painful, cheese taste good. Philosophers often say that if you are conscious, there is ‘something it is like’ to be you.) Even so, you should probably admit that given a fancy and powerful enough computer, a subject with human-like cognitive sophistication could be recreated (one that passes the Turing Test). Since it has human-like cognitive sophistication, it seems safe to assume this thing would be aware of itself as a persisting entity – as a subject with internal states that operate very much like our perceptual states and propositional attitudes (our intentions, desires, and beliefs). This thing, let us assume, would have perceptual states, intentions, desires, beliefs, and desires about its intentions, beliefs about its desires, and so on. This thing would be self-conscious.

So now we seem to be conceiving of a computer-based mind that is self-conscious and cognitively sophisticated, but that is not phenomenally conscious. What is the moral status of such a thing?

This question is difficult because standard accounts of moral status begin by assuming that human beings – or at the very least, healthy human adults – are the flag-bearers of moral status. Debates about moral status are usually debates about the kind of moral status (if any) beings such as dogs, pigs, foetuses, and embryos might have. But of course healthy human adults are both phenomenally conscious and self-conscious. Once these are separated, we face a wide range of difficult conceptual questions. For example:

Must a subject be phenomenally conscious in order to have moral status? Is self-consciousness by itself sufficient for moral status? If moral status (as some think) comes in degrees, would the addition of phenomenal consciousness to a self-conscious subject add any degree of moral status (and why)? Given that self-consciousness and cognitive sophistication come in degrees, and given that a subject with greater self-consciousness and greater cognitive sophistication is clearly possible, is it also possible that there could be a subject with higher moral status than a human being (on this issue, see Douglas (2013))?

Further questions can be generated by driving at the heart of typical assumptions about these notions. For example, it is typically assumed that phenomenal consciousness is morally relevant somehow. I share an intuition to this effect. It seems, for example, wrong to cause a subject unnecessary pain, and the phenomenal character of this pain seems to be part of the reason why. Is this right? What about causing a subject unnecessary boredom? Boredom is quite different from pain, but it is not a good thing. Is the reason it is wrong to cause boredom the same as (or similar to) the reason it is wrong to cause pain? But consider a very different case, in which you bring about a state of affairs that a computer-based self-conscious subject – one that is not phenomenally conscious – does not like. This seems bad as well, but of course the explanation of why it is bad will have nothing to do with the experiences you cause (since this subject has no experiences). Does the intuition that putting a self-conscious mind in a state of dislike is bad have anything to do with the intuition that putting a phenomenally conscious mind into a state of pain is bad?

 

Note: for an enlightening discussion of some closely related issues, see Bostrom and Yudkowsky (forthcoming).

 

Nick Bostrom and Eliezer Yudkowsky (forthcoming). The Ethics of Artificial Intelligence. In Cambridge Handbook of Artificial Intelligence, eds. William Ramsey & Keith Frankish (Cambridge University Press).

Tom Douglas (2013). Human Enhancement and Supra-Personal Moral Status. Philosophical Studies 162(3): 473-497.


13 Comments on this post

  1. If we seriously debate whether AI systems should have moral status we are contributing to the conditions that could continue to distort AI research. Turing claimed ‘…that by the end of the century the use of words and general educated opinion will have altered so much that we will be able to speak of machines thinking without expecting to be contradicted.’ (‘Computing Machinery and Intelligence’, Mind, Vol. LIX, No. 236, Oct. 1950) Although AI is still nowhere near producing a system that could come close to passing Turing’s rather easy little test, he was nonetheless right in predicting that people would be seriously speaking about machines thinking by the end of the century.

    His paper is very confused and contradictory (as are later papers and broadcasts), but he would have taken issue with you for saying: ‘Since it has human-like cognitive sophistication, it seems safe to assume this thing would be aware of itself as a persisting entity – as a subject with internal states that operate very much like our perceptual states and propositional attitudes (our intentions, desires, and beliefs).’ Much of his paper attempts to get away from making this type of assumption; indeed, he says that ‘The original question, Can machines think? I believe to be too meaningless to deserve discussion.’ Turing believed that the machine he described could pass the ‘Imitation Game’ (Turing Test), which would be “evidence” of intelligence equal to normal human intelligence. (He believed that computers could not be as intelligent as humans at the higher end of intelligence.) He thought that ‘Possibly a machine might be made to enjoy [strawberries and cream], but any attempt to make one do so would be idiotic.’ In short, he believed that the machine could be designed to “appear” to have phenomenal consciousness. Ten years later Weizenbaum got simple computers to appear to be sympathetic therapists and then realised how easily people could be fooled into believing machines were human.

    We should perhaps expect that the Turing Test will be passed in the next 50 years (Turing sometimes thought it would take until about 2050). We would be foolish if we thought this meant that machine intelligence was equal to human intelligence (as I say, it is a very simple, limited little test), but no doubt it will be hyped to that effect. Perhaps by then the use of words and general educated opinion will have altered so much that we will also be able to speak of machines as being self-conscious and having moral status. That of course will not make them so.

    I should perhaps add that I have little doubt that machine intelligence using present and foreseeable architecture and information processing technology will greatly increase and in some areas continue to outperform human abilities. I agree with Weizenbaum that this intelligence will be, so to speak, “alien” to ours, not least because it does not have phenomenal consciousness. (Getting one of these machines through the Turing Test should be possible, although there are some, like French and Cullen, who believe it is too difficult.) Understanding these AI systems and the profound influence they are having and will have on our identity and moral status should be our concern. Of course I could be wrong, but I (like Turing) will require a lot more evidence than passing the Turing Test before I start worrying about the moral status of machines.

  2. Can zombies feel aversion? Or is it a lesser type of aversion, aversion*? I ask since the zombie case seems in one way similar to something like the following Parfit-esque case.

    Suppose you wake up one day not remembering the day before. I inform you that I put you through extreme misery, causing you immense pain throughout the day. Then I gave you one of GOB Bluth’s enhanced forget-me-now pills and you remember none of it. Still, I describe the case to you in great detail. I suppose you would be horrified – you’d have a cluster of intentional attitudes about what happened to you that would indicate that you have been wronged. For instance, you’d have the belief that you were in pain; you’d believe I have the capacity to hurt you again; you’d have a first-personal blaming attitude (not merely a third-person thought about blameworthiness) and believe that I should be punished; you’d be fearful of it happening again; you’d have the desire not to suffer through such things; you’d desire to get away from me at all costs; you’d intend to get revenge; etc. You’d have a strong aversion, or perhaps aversion*, to something like that ever happening again.

    And yet, you’d have no idea what it was like. It wouldn’t phenomenally feel any way to you at all on the day you are informed of this. You’d have no sense of “what it was like.” Of course, in this case, you’d have phenomenal experience on the day it occurred, and me torturing you on that day would be horrifically wrong. But, it seems to me that the lasting effects of the harm continue. The harm is not isolated only to the phenomenal experience, but extends into the next day when all that’s left is the intentional attitudes. Perhaps there are certain types of harms available only to beings capable of phenomenal consciousness, but it seems to me some harms wouldn’t require this capacity.

    That’s the way it seems to me, anyway.

  3. “Does the intuition that putting a self-conscious mind in a state of dislike is bad have anything to do with the intuition that putting a phenomenally conscious mind into a state of pain is bad?”

    I am not sure I understand. The assumption that a self-conscious computer would lack phenomenal consciousness seems to rule out the possibility that it be in a state of dislike — if only because being in a state of dislike is essentially a phenomenal state of affairs.

    Moreover, as Jeremy Bentham famously argued, it is insofar as it implies the possession of morally relevant interests that the disposition to feel pain grounds the moral status of beings; for instance, phenomenally conscious animals that apparently lack self-consciousness (I’m thinking of animals whose cognitive and neural capacities fall below those of, say, apes, elephants and dolphins) seem to have a moral status nonetheless, precisely because their disposition to feel pain indicates that they hold important interests — interests worthy of the protection of moral norms.

    Side note on the methodology: isn’t it a bit risky to base ethical considerations on metaphysically shaky and unstable ground? For instance, it is not at all clear whether consciousness (let alone self-consciousness) could emerge on non-biological properties, to the effect that including the relevant biological properties would, at the end of the day, bring us back to considering species of the kinds I noted above.


  4. Hi Everybody,

    Thanks very much for these interesting comments.

    Keith,
    Whether the Turing Test is sufficient as a measure of human intelligence is an interesting issue, and you’re right to note that claiming so would be controversial. I suppose I’m more interested here in whether a machine could have human-like psychological organization, as well as human-like cognitive sophistication. I think the issues you raise, concerning AI machines and the effects they may have on our identity and moral status, are interesting and deserve reflection. In this post I was mainly interested in the idea of a self-conscious machine as a kind of pair of conceptual tweezers – a way to separate phenomenal and self-consciousness – for the purpose of considering the composition of our own moral status, and that of similar organisms.

    Steve-O,
    After reading your post, I wish I could take one of GOB’s pills. I kid. That one could be harmed without possessing the capacity for phenomenal consciousness (now? ever?) is an interesting possibility. Harm to self-conscious but not p-conscious machines might be one example. But regarding your thought experiment, mightn’t one claim that part of the harm being done is in telling you about all the awful things I did to you, such that you come to believe it and thus have traumatic conscious experiences? If so, I’m not seeing how the example supports your claim at the end there.

    Andrew,
    The claim that dislike requires phenomenal consciousness is an interesting claim. I think it might be too strong, but I’d be interested in arguments that support it. Regarding your methodological point: certainly work on the metaphysics of consciousness is full of disagreement. That’s also true of a number of other concepts that figure in ethical analysis: persons, causation, events, properties, intentions, knowledge, minds, and so on. It seems to me it is difficult to enter into ethical reflection with certainty that one’s current views and assumptions are on solid metaphysical ground. It is true that when making ethical claims about a controversial metaphysical topic like consciousness, one’s arguments will often support conditional claims at best – if theory X about the nature of consciousness is true, then ethical claim Y is true (or something like that). But it looks like that’s where we are.

    1. On the association between phenomenal consciousness and the so-called conative states
      I take it that being in a conative state (i.e. a pro-attitude), like a preference, a desire, an appetite, etc., has the disposition to have feelings as a precondition (some views might even make the stronger claim that conative states not only presuppose a disposition for feelings, but are constituted by such states, on the ground that conative states are often associated with a certain experience — a “what it is like to be in this state”).

      The idea common to both the weak and the strong views is simply that what makes a conative state an intentional state — a state that is about something rather than nothing — is its dependence on some intentional state that is entailed by (constitutive of, according to the stronger view) the conative state itself. Furthermore, it seems that the best candidates (inference to the best explanation) out there for this role are feelings.

      Of course we might want to take a very minimal view of feelings to make these claims more plausible; yet it follows from either view that there are no likes or dislikes without feelings (let alone without a disposition to feel, if what is meant by “a like” is a dispositional state rather than its concrete manifestation, which is necessarily associated with a certain feeling).

      On the methodological side note
      I think you do not draw all the consequences that seem to me to follow from my post above; namely, that the intuitions that matter when we ask about the relationship between moral status and phenomenal consciousness can be stirred up without embarking on very risky metaphysical scenarios — it actually suffices to consider marginal cases that biology already provides us with.

      1. Missing sentence: Which is the most plausible view of the mind since the mind emerged on and from our biological setup.

  5. Joshua

    I am aware of what you were trying to do in your post, but, as I said, if we assume that machines are or will be conscious and have moral status we are supporting the fiction. You did say ‘it seems safe to assume this thing would be aware of itself as a persisting entity…’ You may think it a throwaway line that gets you to the meat of your discussion, but we do not have to state that god exists as a persisting entity before discussing what kind of entity it would be if it existed. I do not think I am being picky given Turing’s prediction about the meaning of words and educated opinion.

  6. Hi Keith,

    Fair enough. I don’t agree that the *possibility* of machine consciousness is a fiction. Depending on the truth about the nature of phenomenal consciousness, machine phenomenal consciousness might very well be viable. That a machine could be self-conscious – aware of itself as a persisting entity – I also take to be viable. Just to clarify: do you think it impossible that a machine could be phenomenally conscious, that a machine could be self-conscious, or both?

    1. Hello Joshua,
      I agree with Keith: your assumption is both wild and unnecessary.
      Turing had the merit of being completely explicit about what he was talking about in inventing his test: what could help us attribute a form of intelligence to a machine, intelligence being defined in a precisely delimited way.
      So the interesting question for you is: what could count as evidence that a machine had phenomenal consciousness or self-consciousness? What criteria would you use to decide one way or the other?

  7. Hi Anthony,

    Thanks for weighing in. So it sounds like you think it is impossible, or at least wild and unnecessary, to think (or maybe just to assume) that a machine could be self-conscious. But just to reiterate, the thing I’m doing in my post has very little to do with the question of whether machines could actually be self-conscious. It is a conceptual exercise in separating self-consciousness from phenomenal consciousness. Maybe that’s impossible – that would be an interesting fact about our concepts of consciousness. I have been assuming the opposite for the sake of discussion.

    Regarding the point about what would count as evidence: as I’m sure you know that’s difficult to elucidate clearly, especially in blog comments. I don’t want to skirt the issue, but my concern in this blog post does not depend on how we might tell if machines are actually conscious in some sense. I’m concerned here with the relations between phenomenal consciousness, self-consciousness, and moral status. If machines get sophisticated enough so that we begin to worry they are conscious in some sense, these questions will be relevant to our interactions with machines. But these questions are already relevant to our interactions with one another, and to non-human animals. That latter point was what I was trying to emphasize.

  8. Thanks for your reply, Joshua
    What I was commenting on was this part of your post:
    « Since it has human-like cognitive sophistication, it seems safe to assume this thing would be aware of itself as a persisting entity – as a subject with internal states that operate very much like our perceptual states and propositional attitudes (our intentions, desires, and beliefs) »
    I still think that this assumption is pretty UNsafe, or as I perhaps over-dramatically expressed it, wild.
    I also think it unnecessary, as I think it tends to hide your main point rather than illustrate it, and tends to confuse three different concepts, i.e. cognition, self-consciousness and phenomenal consciousness.
    So sorry if I missed the thrust of what you were saying.
    To which, a series of rather unrelated questions (but you’re quite right – within the constraints of a blog, it’s not easy!):

    Can we imagine a person who shows self-consciousness (can answer questions in a Turing test indistinguishably from another person, let us say) but has no phenomenal consciousness?
    Can we write a story putting ourselves into the mind of a pocket calculator? If not, what does it have to grow into before we can? What behaviours will it need to manifest?
    Can we, more generally, conceive of self-consciousness without phenomenal consciousness? Why not? The latter without the former seems quite conceivable.

    Hence my question on criteria: what counts as evidence of consciousness?

    I would suggest that until we clarify this sort of question we obscure questions of moral status rather than help tease them out. (But that is a personal view constrained by my limited cognitive capacity…)

    1. Joshua (I posted this as a reply to Anthony as it follows on)

      As I said above, it would seem extremely unlikely that today’s computers or machines in the foreseeable and somewhat distant future will be conscious (let us not make distinctions at this stage). The theory of “bit consciousness”, as I have termed it, has been with us for over half a century. Chalmers defends it by saying, ‘Someone who finds it “crazy” to suppose that a thermostat might have experiences at least owes us an account of just why it is crazy.’ (The Conscious Mind: In Search of a Fundamental Theory, O.U.P., 1996) Of course there are numerous accounts of why it is crazy but, as usual, AI apologists simply ignore criticism. If and when we develop systems that do not rely on animism to make them “appear” conscious we might be able to assess the evidence objectively. Not that AI is interested in objectivity, for as Dennett puts it,

      ‘Lingering doubts about whether the chess-playing computer “really” has beliefs and desires are misplaced…one can explain and predict their behaviour by “ascribing” beliefs and desires to them, and whether one calls what one ascribes to computers beliefs or belief-analogues or information complexes or intentional whatnots makes no difference to the nature of the calculations one makes on the basis of the ascriptions’ (‘Intentional Systems’, in Brainstorms, Montgomery, 1978)

      So you can think what you like about whether computers have phenomenal consciousness or self-consciousness or believe they are in love. At one level he is correct, but at the level of scientific and philosophical analysis it is nonsense. I agree with Anthony that we need to clarify this issue before engaging in serious discussion about conscious machines. What do you mean by ‘If machines get sophisticated enough so that we begin to worry they are conscious in some sense’? What is a ‘sophisticated’ machine? What ‘we’? Remember some AI researchers and “philosophers” have been worrying for decades that their thermostats might be having a bad experience (that I am assured is without the use of psychedelic drugs). We are not going to get closer to an understanding of consciousness and be able to make proper distinctions if we unquestioningly assume that machines are or will be conscious. Not sure what you mean by, ‘Depending on the truth about the nature of phenomenal consciousness, machine phenomenal consciousness might very well be viable.’ Can this ‘truth’ be discovered by philosophical and/or machine analyses?

  9. That’s why I asked whether zombies can experience an aversion or if it’s better to call it aversion*. I’m never really sure of the extension of phenomenal experience, so I don’t want to beg any questions. In that scenario, it’s still the case that you’ll have a cluster of beliefs, desires, intentions (that I’ve harmed you, that I’m a bad person, that you’ll get revenge…) and at least the intentional components of emotions such as fear or aversion (on the supposition that phenomenal experiences are parts of emotions; if they aren’t, then a zombie has emotions full stop), which would be either unpleasant or unpleasant*, and lead you to want to stay far away from me. You’d have an intrinsic dislike of me (much like you do now) due to what happened before. That cluster seems to me to be something that one morally ought to avoid causing, that one ought to prevent from happening to others if one can, something a virtuous person wouldn’t cause if they could help it, and so on. You’ll have an aversion*, at the least, of a kind that non-sophisticated animals have to poison or food they can’t process. That seems sufficient evidence that you’re experiencing a harm, even without conscious experience.
