HBO’s new show Westworld has been getting a lot of attention. As The A.V. Club pointed out, it was HBO’s highest-rated premiere since ‘the good True Detective’ (i.e., since season one). The first episode involved a robot with human-like intelligence living through a truly horrible day to cater to the whims of actual humans, and then having her memory erased so she could do it again and again.
Among other (surely more interesting) properties of the show, there is this: the show functions as an extended philosophical thought experiment. Philosophical thought experiments probe our imagination and our intuitions to reveal what we think about important philosophical issues, and how we think about them. One’s reactions to Westworld are philosophically illuminating.
Consider this question: is phenomenal consciousness morally valuable? Don’t worry too much about what it means to say something is morally valuable. Does an entity’s possession of consciousness motivate you to treat it in certain ways – to care about it and whether things are going well for it, to worry about how it is doing, to avoid hurting it? One way to explore this question is to imagine how you would feel about and treat the entity if you discovered it lacked consciousness. If you found out it was just a robot, would your care for the thing begin to dissolve? It probably would, and a show like Westworld plays with this reaction. The common reaction of horror to the revelation that some poorly treated machine is actually conscious is predicated on the deeply held belief that consciousness is morally valuable.
Consider a further question: once we know an entity is conscious, what other features of its mental life influence how we feel about it and treat it? Westworld places memory under the philosophical microscope. And it turns out that our moral thinking tracks the complex interaction between an entity’s capacity for memory and its capacity for consciousness. On the one hand, we are relieved that a poorly treated entity cannot remember episodes of poor treatment (or are we? Is even this a moral imposition?). On the other hand, to deprive an entity of a capacity for memory seems morally wrong. In barring an entity from developing memories of its experiences, we rob that entity’s life of coherence, and we take away the possibility of the longer-term projects that give our lives rich shades of meaning.
The way we think about the interaction of memory and consciousness is philosophically interesting for a number of reasons. There’s a philosophical thought experiment by Oxford’s Roger Crisp that goes like this. Would you rather live the normal human life of some fairly happy and intelligent person (Crisp picks the composer Joseph Haydn), or instead live an indefinitely long life as a relatively sophisticated oyster, provided this oyster’s life is full of nothing but mild sensory pleasure – say, the pleasure ‘experienced by humans when floating very drunk in a warm bath’ (Roger Crisp, ‘Hedonism Reconsidered’, Philosophy and Phenomenological Research, 2006, p. 630)?
Crisp’s thought experiment makes me want to leave work, grab a bottle of gin, and draw a bath. It also raises a difficult question that goes back at least as far as Plato. How are we supposed to add up the many little pleasures and pains that at least partially compose the quality of our lives? Is it better to be the oyster – which, we can imagine, neither has nor needs a memory? I submit that our reaction to Westworld suggests not. Without a memory, the oyster would be deprived of something exceedingly valuable. Even if we would envy it at moments, the capacity for the richer forms of living that memory enables is something we would not want to do without.
That’s just me talking, of course. Is memory morally valuable? Of course. How does it interact with the value of consciousness? It looks like the contributions memory makes depend on the quality of the experiences it encodes and the quality of the lives it enables.