
Robot Girl: A Survey


In collaboration with the BBC’s Radio 4 show ‘The Philosopher’s Arms’, we are running a series of short opinion surveys on the Practical Ethics blog as a way of promoting discussion of issues in practical ethics.

This week The Philosopher’s Arms discussed the case of the Robot Girl, in which we consider the ethical problems arising from the development of machines who act, think and feel like human beings. What is it to think? What is consciousness? If a robot made of silicon can be made to seem like one of us, would we say of it that it can think? If it walked like us, talked like us, screamed in apparent agony when it was hit, would we, should we say it was conscious?

The following survey explores some of these ethical themes.

To all our visitors from the BBC website: if you found ‘The Philosopher’s Arms’ interesting, you may enjoy other posts on the Practical Ethics blog, hosted at Oxford University’s Uehiro Centre for Practical Ethics. Blog posts are contributed by philosophers and philosophy students from Oxford and other major universities around the world.


6 Comments on this post

  1. Interesting survey, but a quick suggestion for future iterations: add a "don't know" option for each item. As it stands, I skipped items where I was uncertain, but it would be good to have this as an explicit option (especially in relation to the robot-type questions, where it seems to me one needs more information before one can judge consciousness, moral status, etc.).

    1. Yes, the lack of a 'don't know' option has skewed my responses to this. Or at least caused me to behave differently from you. I took it to be a forced choice and therefore answered 'no' to whether the robot is intelligent or conscious, when what I really meant was that I can't really know on the basis of an hour's conversation. Or at least I can't know whether I would know.

      I might be convinced one way or the other, but without knowing how the conversation actually went, how can I know? Obviously, having been convinced one way or the other, I might still be wrong. Since I don't really know what consciousness is, it's hard to say what would or wouldn't convince me.

      I guess the point of the Turing test is that the test itself is a definition of consciousness. However, I don't really buy that, so how do I answer the question? Skipping it really doesn't cover it. I answered 'no' because I don't believe the Turing test is necessarily good enough, but that 'no' may imply that I think consciousness is impossible in robots, which I don't.

      I'm going on a bit, aren't I? I should forget about it and get on with life.

    2. I would rather have a non-answer than an answer. I believe they are attempting to elicit a definite response, as opposed to a response that does not specifically answer the question.

  2. I would have liked to be able to give more than a "yes" or "no" in several cases.

    Is the machine conscious?
    Human consciousness – in so far as it exists at all – is not beyond physics. Why? Because consciousness exerts an influence on the real world (I would describe many of my actions as being caused by my consciousness), and unless you are willing to permit a regular breaching of the laws of physics, this implies that my consciousness is a part of the physical world, like everything else. Once we've agreed that consciousness is not in any way magical, we should subject our understanding of it to the science of evolution. We know that complex forms will not come into existence unless their complexities can be introduced slowly over many generations, with a continuous improvement in relative likelihood of survival. This implies that for a new region of the brain to evolve it must actually be capable of interacting with the rest of the brain, i.e. it cannot exist as a wire-tapping secret agent hiding silently and passively in the shadows. (At this point let's explicitly rule out the possibility of consciousness using neural data to perform a non-neural function such as filtering the blood. If that were the case, I imagine modern medicine would have identified it by now.) So we've established that consciousness interacts with the rest of our brain, which means we can describe consciousness as one of the parts in the algorithmic chains of human behaviour.
    And now we are ready to address the actual question. If the machine produces the same repertoire of behaviour as a human (and in the appropriate circumstances), then it is possible that the algorithms the machine is using contain analogues of the consciousness modules that humans have, in which case we should declare that it is just as conscious as a human. But there is no reason to assume (except perhaps parsimony) that the machine's algorithms function in the same way ours do; it may achieve the same behaviour using a different internal process (the output from an inkjet printer and a laser printer is nominally the same, yet it is produced by two very different mechanisms).
    In summary, a human-like robot may or may not be as conscious as us. It depends how it works. In fact, since we don't know how it works, it's possible that the machine is actually more conscious than we are!

    Should we treat the machine like a human?
    If the machine has a standby mode (an on/off button), then clearly it is not possible to treat the machine as a human, since every decision you make must at some level include the possibility of putting it on standby. It would also presumably be capable of backing itself up, restoring itself to previous versions, and various other computery things that are not traditionally possible with humans. Not permitting it to perform any of these extra-human feats (some of which affect its longevity without causing harm to others) would not be in the spirit of human rights, the set of rules aspiring to uphold the most important aspects of morality.

    And to give a rather different answer… unlike in discussions of abortion, euthanasia, and vegetarianism, there is no slippery slope with the human/robot divide… at least I don't think there is. What responsibility do we have for looking after a species that is categorically separate from our own? Personally, I'm not sure one way or the other.

    What I do know – or rather I assume – is that our brains are hard-wired to identify consciousness in the world around us. Inanimate objects can be trusted to sit still, or at least to behave according to simple rules, which means we don't require much computation to predict their motion. Complex creatures, on the other hand, are hard to predict, and presumably modelling them occupies large amounts of neural real estate. Thus, in order to model the world optimally, the brain must be capable of quickly identifying what is conscious and what is not, so that objects can be assigned to the relevant neural apparatus. Which means that if something appears outwardly conscious we will inevitably be inclined to treat it as conscious. Not doing so would be unnatural for us – and probably feel immoral. And since the study of morality is all about describing our cognitive sense of good and bad, doing something that feels immoral is indeed likely to be regarded by other people as immoral.

  3. I would just like to say that many of these questions are not a simple yes or no. When asked whether the government could take and examine my robot child (whom I presumably love), it assumed the only two responses were a lawsuit or complete disregard for the child. I would want to know what the scientists planned to do and, most importantly, whether they would hurt her. Also, if my husband and I decided that she needn't be studied, a lawsuit would not be the first answer. Most people would start with a simple, "No."
    As for my grandmother who hates parties, we'd throw one anyway. Not because she wouldn't remember, but because birthday parties as big as that are never JUST for the person gaining a year. If she didn't have Alzheimer's, she would still have to suffer through a 100th birthday party.

  4. The Philosopher's Arms daughter was sophisticated enough to have developed a taste for ice-cream. It took the philosopher a matter of years to discover that she was not "human". He has another daughter. If he abandons or in any way discriminates against the robot, what effect will this have on his "natural" daughter? She will have noted her father's capacity to change his attitude, and how will she be able to establish to her own satisfaction that she is different from the abandoned "child"? Kant suggests that animals may lack rights and consciousness, but that the way in which we treat them is a measure of our humanity.
