Sex and death among the robots: when should we campaign to ban robots?
Today, I noticed two news stories: BBC Future reported on Korean work on killer robots (autonomous gun turrets that can identify, track and attack), and BBC News reported on the formation of a campaign to ban sex robots, clearly modelled on the existing campaign to stop killer robots.
Much of the robot discourse is of course just airing hopes and fears about the future, projected onto futuristic devices. But robots are also real things increasingly used for real applications, potentially posing actual threats and affecting social norms. When does it make sense to start a campaign to stop the development of robots that do X?
I have earlier posted on this blog about the ethics of military robots. Christof Heyns said: “Machines lack morality and mortality, and should as a result not have life and death powers over humans.” While I think this oversimplifies things, there are deeply troubling problems with proportionality, just war, diffusion of responsibility, and whether military interventions might become more attractive if there are no body bags and the army is guaranteed to be loyal to the political rulers.
The Campaign Against Sex Robots is based on the view that having sex robots objectifies women and children (for some reason not men), that the sex robot discourse is based on a prostitution analogy that regards the (human) prostitute as a thing to be used, that sex robots will reduce human empathy and reinforce power relations of inequality and violence, and that sex robots will not reduce sexual exploitation. There have been arguments on this blog for criminalizing rape of robots and sex with child-shaped robots, based on the act being morally wrong (due to desire for the real thing or unacceptable moral insensitivity).
Many of the claims of the campaign seem to be semi-empirical, and I am not convinced they are supported by data. There is a great deal of similarity to anti-enhancement arguments that self-assuredly claim that if we accept enhancement in some domain, various bad psychological, moral and social developments will ensue. Yet actual experience does not seem to fit these claims (to which the adherents of course respond "just wait and see!"). However, I think getting into a snowball fight using abstracts and survey data would miss the core issue.
An important difference is that the moral harm of a killer robot killing a person is direct: unless certain complex ethical considerations make it a justified killing, it is immoral, and even a morally justified killing is unequivocally bad for the victim. The harm from sex with a sexbot is far less clear: there does not seem to be a victim. The purported harm is to social norms in general and perhaps to the psychology of the user.
The fundamental difference between the campaigns is that the killer robot one tries to prevent people from getting killed by machines, while the sex robot one tries to get people to be nicer to each other – via the intermediary of banning technology that could lead to bad social changes. It is indirect, and does not deal with robots per se. It could in principle be about any technology or social change with the same social effects: at least in theory, (say) books, computer games, dating apps, or acceptance of swinger parties could cause the same thing.
This indirectness is the problem I have with the sex campaign: even if one bought the arguments that sex robots are likely to induce bad social changes, these changes are occurring because of individual decisions and beliefs, as well as sociocultural institutions. There are many other levers that could be pulled to improve the situation of sex workers, women, or people’s attitudes to each other. Some of these levers may be far more powerful than a technology ban. Conversely, even a successful ban of sex robots may fail to reach the desired goal because of other technologies or intermediaries causing the undesired social changes. By acting against a possible contributor rather than the bad thing itself, effort is wasted.
If one were to argue that sex robots are inherently immoral (perhaps due to human dignity concerns, because sex is just for reproduction, or because it must happen between cognitive and existential equals), one could still argue for a ban. But this does not seem to be the current motivation of the campaign.
Banning killer robots doesn’t have the same problem. While there are other ways of reducing the killing of humans – better governance, peacekeeping missions, diplomacy – the bad of robots killing humans is directly reduced by banning killer robots. The campaign aims closer to where the actual moral harm seems to reside.
What does this tell us about future possible campaigns to ban certain robots on moral grounds? In the future there will likely be no shortage of outrage against robots upsetting vested interests, often couched in the language of them corrupting our moral fibre or doing moral harm. But I predict that for most of these cases the indirect social/moral harms will be far smaller than the actual benefits. The AutoDoc may not care for its patients, but the good of cheap automated healing is worth an enormous amount of human welfare: caring is nice and obligatory among humans, but we value getting healthy too – few go to the hospital for the nurses’ sake. Autonomous cars will kill people from time to time, but they will be safer than humans once they are allowed to dominate the road: the moral bad of having humans killed by software decisions is outweighed by a large number of lives (and human consciences) saved.
It seems that the key issue is whether the bad act will occur less if the robot is banned, and whether other remedies are more effective. We clearly should want to ban torture or crime robots, since such things likely would enable more torture and crime. The bad act of instrumentalizing other moral agents might be encouraged if we are used to being surrounded by obedient robots and start treating each other like that, but banning robots at most reduces the temptation (at a significant cost of robot benefits) while campaigning for recognizing the importance of moral agents appears far more effective.
I did sign FLI’s open letter advocating a ban on autonomous weapons. I would not sign a similar letter arguing for a ban on sex robots.