Guest Post by John Danaher (@JohnDanaher)
This article is being cross-posted at Philosophical Disquisitions
I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is likely to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail.
For the detail, you’ll have to read the original paper (available here, here, and here). But in an effort to entice you to do that, I thought I would use this post to provide a brief overview.
1. What is robotic rape and robotic child sexual abuse?
First things first, it is worth clarifying the phenomena of interest. I’m sure people have a general sense of what a sex robot is, and maybe some vaguer sense of what an act of robotic rape or child sexual abuse might be, but it’s worth being as clear as possible at the outset in order to head-off potential sources of confusion. So let’s start with the notion of a sex robot. In the article, I define a sex robot as any artifact that is used for the purposes of sexual stimulation and/or release with the following three properties: (i) a human-like form; (ii) the ability to move; and (iii) some degree of artificial intelligence (i.e. an ability to interpret, process and act upon information from its environment).
As you can see from this definition, my focus is on human-like robots not on robots with more exotic properties, although I briefly allude to those possibilities in the article. This is because my argument appeals to the social meaning that might attach to the performance of sexual acts with human-like representations. For me, the degree of human-likeness is a function of the three properties included in my definition, i.e. the more human-like in appearance, movement and intelligence, the more human-like the robot is deemed to be. For my argument to work (if it works at all) the robots in question must cross some minimum threshold of human-likeness, but I don’t know where that threshold lies.
So much for sex robots. What about acts of robotic rape and robotic child sexual abuse? Acts of robotic rape are tricky to define given that legal definitions of rape differ across jurisdictions. I follow the definition in England and Wales. Thus, I view rape as being non-consensual sexual intercourse performed in the absence of a reasonable belief in consent. I then define robotic rape as sexual intercourse performed with a robot that mimics signals of non-consent, where it would be unreasonable for the performer of those acts to deny that the robot was mimicking signals of non-consent. I know there is some debate as to what counts as a signal of non-consent. I try to sidestep this debate by focusing on what I call “paradigmatic signals of non-consent”. I accept that the notion of a paradigmatic signal of non-consent might be controversial. Acts of robotic child sexual abuse are easier to define. They arise whenever sexual acts are performed with robots that look and act like children.
Throughout the article, I distinguish robotic acts from virtual acts. The former are performed by a human actor with a real, physical robot partner. The latter are performed in a virtual world via an avatar or virtual character. There are, however, borderline cases, e.g. virtual acts performed using immersive VR technology with haptic sensors (such as those created by the Dutch company Kiiroo). I am unsure about the criminalisation argument in such cases for reasons that will become clearer in a moment.
2. What is the prima facie argument for criminalisation?
With that definitional work out of the way, I can develop the main argument. That argument proceeds in a particular order. It starts by focusing on the purely robotic case, i.e. the case in which the robotic acts have no extrinsic effects on others. It argues that even in such a case, there may be grounds for criminalisation. That gives a prima facie argument for criminalisation. After that, I focus on extrinsic effects, and suggest that they are unlikely to defeat this prima facie argument. Let’s see how all this goes.
The prima facie argument works like this:
- (1) It can be a proper object of the criminal law to regulate conduct that is morally wrong, even if such conduct has no extrinsically harmful effects on others (the moralistic premise).
- (2) Purely robotic acts of rape and child sexual abuse fall within the class of morally wrong but extrinsically harmless conduct that it can be a proper object of the criminal law to regulate (the wrongness premise).
- (3) Therefore, it can be a proper object of the criminal law to regulate purely robotic acts of rape and child sexual abuse.
I don’t really defend the first premise of the argument in the article. Instead, I appeal to the work of others who have. For example, Steven Wall has defended a version of legal moralism that argues that actions involving harm to the performer’s moral character can, sometimes, be criminalised; likewise, Antony Duff has argued that certain public wrongs are apt for criminalisation even when they do not involve harm to others. I use both accounts in my article and suggest that if I can show that purely robotic acts of rape and child sexual abuse involve harm to moral character or fall within Duff’s class of public wrongs, then I can make the prima facie case for criminalisation.
This first premise is likely to be difficult for many, particularly those with a classic liberal or Millian approach to criminalisation. They will argue that only harm to others renders something apt for criminalisation. I sympathise with this view (which is why I am cagey about the argument as a whole) but, again, I appeal to others who have tried to argue against it, either by showing that a more expansive form of legal moralism need not constitute a severe limitation of individual liberty, or by pointing out that it may be very difficult to hold consistently to the liberal view. I also try to soften the blow by highlighting different possible forms of criminalisation at the end of the article (e.g. incarceration need not be the penalty). Still, even then I accept that my argument may simply lead some to question the moralistic principles of criminalisation upon which I rely.
Premise two is where I focus most of my attention in the article. I defend it in two ways, each corresponding to a different version of legal moralism. First, I argue that purely robotic acts of rape and child sexual abuse may involve harm to moral character. This is either on the grounds that the performance of such acts encourages/requires the expression of a desire for the real-world equivalents, or on the grounds that the performance requires a troubling insensitivity to the social meaning of those acts. This is consistent with Wall's version of moralism. Second, I build upon this by arguing that the insensitivity to social meaning involved in such acts (particularly acts of robotic rape) would allow them to fall within Duff's class of public wrongs. The idea is that in a culture that has condoned or belittled the problem of sexual assault, an insensitivity to the meaning of those acts demands some degree of public accountability.
In defending premise (2) I rely heavily on work that has been done on the ethics of virtual acts and fictional representations, particularly the work of Stephanie Patridge. This reliance raises an obvious objection. There are those — like Gert Gooskens — who argue that our moral characters are not directly implicated in the performance of virtual acts because there is some distance between our true self and our virtual self. I respond to Gooskens by pointing out that the distance is lessened in the case of robotic acts. I rely on some work in moral psychology to support this view.
That is my defence of the prima facie argument.
3. Can the prima facie argument be defeated?
But it is important to realise how modest that argument really is. It only claims that robotic rape and robotic child sexual abuse are apt for criminalisation all else being equal. It does not claim that they are apt for criminalisation all things considered. The argument is vulnerable to defeaters. I consider two general classes of defeaters in the final sections of the paper.
The first class of defeaters is concerned with the possible effects of robotic rape and robotic child sexual abuse on the real-world equivalents of those acts. What if having sex with a child-bot greatly reduced the real-world incidence of child sexual abuse? Surely then we would be better off permitting or facilitating such acts, even if they do satisfy the requirements of Duff or Wall’s versions of moralism? This sounds right to me, but of course it is an empirical question and we have no real evidence as of yet. All we can do for now is speculate. In the article, I speculate about three possibilities. Robotic rape and robotic child sexual abuse may: (a) significantly increase the incidence of real-world equivalents; (b) significantly reduce the incidence of real-world equivalents; or (c) have an ambiguous effect. I argue that if (a) is true, the prima facie argument is strengthened (not defeated); if (b) is true, the prima facie argument is defeated; and if (c) is true then it is either unaffected or possibly strengthened (if we accept a recent argument from Leslie Green about how we should use the criminal law to improve social morality).
The second class of defeaters is concerned with the costs of an actual criminalisation policy. How would it be policed and enforced? Would this not involve wasteful expenditure and serious encroachments on individual liberty and privacy? Would it not be overkill to throw the perpetrators of such acts in jail or to subject them to other forms of criminal punishment? I consider all these possibilities in the article and suggest various ways in which the costs may not be as significant as we first think.
So that’s it. That is my argument. There is much more detail and qualification in the full version. Just to be clear, once again, I am not advocating criminalisation. I am genuinely unsure about how we should approach this phenomenon. But I think it is an issue worth debating and I wanted to provide a (provocative) starting point for that debate.
John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics. John blogs at http://philosophicaldisquisitions.blogspot.com/. You can follow him on twitter @JohnDanaher.
Your argument is a bit hard to argue against, since the really problematic part is (1), which is not yours. Personally I think corrosiveness-to-moral-character arguments are deeply problematic, since what is one person’s moral corrosion is another person’s catharsis: having experiences, updating moral values, and considering this process can very well lead in directions that are deeply counter to conventionally accepted morality but actually represent considered ethical positions (just remember Diogenes’ and the Tantrics’ antics). But there are no doubt some practices where we can empirically and intellectually agree that they make people cruder, more cruel, or otherwise inhumane.
I agree that purely virtual and computer game examples are problematic. The indirectness adds a lot of complication to what is actually going on emotionally and intellectually. I don’t buy Patridge’s argument applying more to rape than to violence: one could just as well argue that people who play Hatred show a desire or moral insensitivity. I suspect there is less condemnation of violence than of rape in virtualities simply because of cultural views regarding sex as more problematic than violence. And abstraction seems to strongly reduce people’s intuitions about moral corrosiveness: most strategy games have elements that would, in a real-world commander, demand a war crimes trial, yet we rarely rail against those gamers, despite many of them taking delight in cruelly winning at any cost.
I think the weak point in the Wrongness Premise is this: you assume there are only two reasons to perform the acts (a desire for the real-world equivalent, or moral insensitivity). But there could be a third: people who do not desire the real-world act, and who, while recognising the disapproval of the rest of society, do not see the act as the same act as the real one. Imagine a person who is a technofetishist and prefers robots to humans, yet is aware that the robot's behaviour is a non-sentient software routine and gets off on this very fact. I have no idea how common such people would be, but I have no doubt some might exist. A somewhat similar example is found in parts of the online furry subculture, where sexual practices that would be immoral in the real world are imagined, yet the participants seem to enjoy the fact that these practices are unreal and impossible – that very unreality seems to be part of the appeal.
In the end, legislating morality is iffy because we are living in a multicultural pluralist society where people also have different levels of sophistication in how they handle virtual, imaginary and artificial worlds. Criminalizing victimless crimes is a heavy hammer applied to a subtle problem.
Hi Anders,
Thanks for the comment. I agree that premise (1) is the most problematic part of the argument. That’s why I didn’t try to defend it. One thing that attracted me to it, however, is its (seemingly) increasing popularity among criminal law theorists, to the detriment of the classic Millian position. This might be due to the perverse incentives of academic publishing (in philosophy anyway): it’s probably easier to publish something that differs from a mainstream or consensus view. But obviously those that defend it think there are good reasons to be offered on its behalf. But as I mention in the post, it may be that my argument functions as a reductio of their expansive moralistic view. That wouldn’t bother me greatly.
On Patridge’s position, you may be right. I think she herself would accept that certain types of virtual violence would have a problematic social meaning (she uses the example of racially motivated violence) and would fall within the scope of her argument. The violence in Hatred might be a good example of this too. But as you point out, there may be something more general (and unjustifiable?) about our attitude to violence.
As to your observation that there could be a third attitude toward the virtual/robotic acts, this is interesting and not something I had thought about. I accept that those who are attracted to the very artificiality or non-sentient nature of the interaction would warrant a different analysis. I’m not sure exactly what that analysis would be (I discuss the related issue of pure objectification in the article). But I’m also not sure how it affects my argument. It seems to me that people who are attracted to non-sentient objects as partners wouldn’t need those partners to represent children or simulate non-consent. At the very least, I’d be more suspicious of those who claimed to be technofetishists but who also insisted upon using childbots or rapebots.
Finally, I think that morality is the only acceptable ground for legislation, but I agree that not all forms of immorality (or morally undesirable conduct) should be legislated against. Limiting the scope of legislation to activities involving a moral victim seems appropriate but again those who accept premise (1) seem to disagree.
Hmm, why does this whole article strike me as nonsense?
I am not an expert like you, but you find every way (abuse) to write incorrect things that never happen.
I don’t know what exactly is happening around you, following these two accounts.
Life is apathy, so anything can happen.
For me, dear John, YOUR ARTICLE IS A PROVOCATION.
Robots can do everything (you see the technology), work miracles even in sex (a lie) (I have never used one, but now I will), just stay with me… even far away behind this glass… you can help me.
Truly, it would be nice, John, for you to be here (in person) so that after we see how the robot works (an experiment), what do you think?
Thank you very much, you lifted my spirits with this discovery.
Please try again.
Haven’t read the paper, just this article, but there seems to be an important question entirely overlooked. At the moment, it is perfectly legal for two or more consenting adult partners to undertake sexual intercourse as an act of simulated rape or simulated child abuse. Why should it be legal for couples (or entire dungeons of consenting adults) to explore such fantasies, but not legal for a lone masturbator, playing with a sex toy? It doesn’t seem to make an awful lot of sense, unless you’re proposing that consenting adult partners should also be forbidden from engaging in role-playing of this kind.
Well, because my first comment was much more for fun than on the theme, I have to say: every parent, every adult person, has the right to do everything they do, within social norms, legal or not, just moral, whether play or a fight over sex. I have to say, too, that sex has become one of the big themes of teen and child abuse, and there every state (government) keeps one eye closed and one ear closed too. There is big money in that cycle.
People need to be educated to learn that real life is real “morality”, and not money.
Note: for me, a robot is a liar (lol).
Happy 2015, without robots and with real life, touch and feel.
Ah, maybe you like robots because they don’t speak? (just feel) Ahhh.
Nonsense…
Hey, what is the difference between a human and metal or plastic?
Thank you.