Cry havoc and let slip the robots of war?

Stop killer robots now, UN asks: the UN special rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, has delivered a report about Lethal Autonomous Robots (LARs) arguing that there should be a moratorium on the development of autonomous killing machines, at least until we can figure out the ethical and legal issues. He notes that LARs raise far-reaching concerns about the protection of life during war and peace, including whether they can comply with humanitarian and human rights law, how to devise legal accountability, and “because robots should not have the power of life and death over human beings.”

Many of these issues have been discussed on this blog and elsewhere, but the report is a nice, comprehensive review of the issues raised by the new technology. And while the machines do not yet have fully autonomous capabilities, the distance to them is chillingly short: dismissing the issue as science fiction is myopic, especially given how slowly legal agreements are actually reached. However, does it make sense to say that robots should not have the power of life and death over human beings?

It is an underlying assumption of most legal, moral and other codes that when the decision to take life or to subject people to other grave consequences is at stake, the decision-making power should be exercised by humans. The Hague Convention (IV) requires any combatant “to be commanded by a person”. The Martens Clause, a longstanding and binding rule of IHL, specifically demands the application of “the principle of humanity” in armed conflict. Taking humans out of the loop also risks taking humanity out of the loop.

Heyns’ point here seems twofold: there is a personhood aspect, which links the decision to moral agency and responsibility (I doubt even a radical bioconservative would accept a weapon fired by signals from a fertilized human ovum, no matter if he thought the cell was a moral person), and a humanity aspect, linked to being civilized and having a conscience.

Robots currently fail at both, the first due to their lack of intelligence and inability to change their goals, and the second by virtue of being non-human. Future robots might perhaps be able to fulfill both (there are of course some AI skeptics who think this is not possible even in principle, and intelligence alone does not automatically resolve the moral agency or value factors), but they would then essentially be like normal soldiers. Heyns follows Peter Asaro in arguing that non-human decision-making regarding the use of lethal force is inherently arbitrary, and that all resulting deaths are hence arbitrary deprivations of life.

Machines lack morality and mortality, and should as a result not have life and death powers over humans. This is among the reasons landmines were banned.

Leaving aside the mortality issue (would Superman be banned from the battlefield?), the landmine case shows an interesting problem with this otherwise rhetorically convincing approach. Would landmines become better or worse if they tried to distinguish between combatants and non-combatants? It seems that adding this function would make them more acceptable, even if they did not reach the threshold of being morally permissible. The extra autonomy reduces the arbitrariness and might improve the proportionality. Non-human decision-making can be non-arbitrary if implemented right: an autonomous car that refuses to run over a person in front of it is overall better than a car that just charges ahead. Machines can act as moral proxies of their programmers or users.
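To make the “moral proxy” idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the perception output, the caution threshold, the function names correspond to no real vehicle or weapon system); the point is only that the decision rule expresses a choice made by the designer, not by the machine.

```python
# Minimal sketch: the designer's moral choice "never drive into a detected person"
# is encoded as a hard constraint, so the machine acts as a proxy for that choice.
# All names and numbers here are illustrative assumptions, not any real system.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    PROCEED = auto()
    BRAKE = auto()


@dataclass
class Observation:
    person_ahead_probability: float  # output of a hypothetical perception module


def decide(obs: Observation, caution_threshold: float = 0.01) -> Action:
    """Proxy decision rule: refuse to proceed if a person might be in the path.

    The threshold is itself a moral/engineering choice made by the designer,
    not by the machine; changing it changes whose judgment the car expresses.
    """
    if obs.person_ahead_probability >= caution_threshold:
        return Action.BRAKE
    return Action.PROCEED


if __name__ == "__main__":
    print(decide(Observation(person_ahead_probability=0.4)))  # Action.BRAKE
    print(decide(Observation(person_ahead_probability=0.0)))  # Action.PROCEED
```

The rule is trivial, which is exactly the point: both the constraint and the threshold are choices made by a human designer, and that is where praise or blame should attach.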

The main headache is where the moral buck stops. Drone warfare has already demonstrated a problematic diffusion and separation of responsibility: an individual might be formally responsible for firing decisions, but they are embedded in a techno-social system that dilutes moral intuitions about individual responsibility and blurs who counts as a combatant. Autonomous machine warfare moves things even further: now responsibility might be diffused not only across the institution fielding the system but also across the companies creating the technology. In many cases misbehavior will not be due to any discrete mistake in any one part, but to a confluence of behaviors and assumptions that produces an emergent undesirable result.

Can we run warfare without anybody being responsible? I do not claim to understand just war theory or the other doctrines of the ethics of war. But as a computer scientist I do understand the risks of relying on systems that (1) nobody is truly responsible for and (2) cannot be properly investigated and corrected. The internal software will presumably be secret (much of the military utility of autonomous systems will likely come from their “smarts”), so outside access and testing will be limited. The behavior of complex autonomous systems in contact with the real world can also be fundamentally unpredictable, which means that even perfectly self-documenting machines may not give us useful information for preventing future misbehavior.

Getting redress for a “mistake” already appears far harder when a drone kills a group of civilians than when a gunship crew does; if the mistake was due to an autonomous system, the threshold is likely to be even higher. Even from the pragmatic perspective of creating disincentives for sloppy warfare, the remote and diffused responsibility insulates the prosecuting state. In fact, we are perhaps obsessing too much about the robot part and too little about the extrajudicial part of heavily automated modern warfare.

One interesting issue that has not, as far as I am aware, been raised is the problem of internal responsibility. If a soldier acts wrongly or contrary to orders, he is held responsible; organized misbehavior is treated even more harshly. However, autonomous systems might misbehave in ways that cannot be assigned to a responsible party. Worse, diffusion of responsibility might occur inside military forces – if an autonomous weapon system decides to act, to what extent can its “superiors” be held responsible for its actions?

Since a commander can be held accountable for an autonomous human subordinate, holding a commander accountable for an autonomous robot subordinate may appear analogous. Yet traditional command responsibility is only implicated when the commander “knew or should have known that the individual planned to commit a crime yet he or she failed to take action to prevent it or did not punish the perpetrator after the fact.” It will be important to establish, inter alia, whether military commanders will be in a position to understand the complex programming of LARs sufficiently well to warrant criminal liability.

Perhaps the strongest reason to believe military forces will want to keep a firm leash on their robots is simply that otherwise they risk undermining their internal chain of responsibility.

In the case of humans we learn the “programming” and intentions through shared human experience, by interacting with them, and by giving instructions that are expected to be understood. The problem with complex autonomous systems is that this is far less possible. Indeed, the humanity aspect mentioned at the beginning is crucial: it allows us to make reasonable inferences about what other agents are up to, while non-human systems will be quite alien and hence unpredictable.

One can make an analogy to the use of attack dogs. (The Shakespeare quote alluded to in the title does not actually refer to real canines but to ordering soldiers to pillage and sow chaos, something that today would presumably be seen as a war crime.) While dogs have been used militarily since antiquity, attack dogs have disappeared from normal warfare partly because they are vulnerable to modern firearms, but no doubt also because of their autonomous behavior: erratic behavior made the Soviet anti-tank dog program largely a failure. The complex behavior of an animal has desirable components (e.g. skilled locomotion and perception) that cannot be separated from undesirable components (e.g. getting afraid, attacking one’s own side). Animal training serves to reduce undesirable behaviors but also to make the desirable behaviors more predictable – or, to return to Heyns and Asaro, to make the decision-making less arbitrary. We hold dog owners liable for attacks their dogs commit, typically by invoking negligence: they should know the likely behavior of the dog and the likely risks of having it in a certain environment. Presumably a deterministic dog would give rise to a narrower field of liability, while an unknown and unpredictable dog would a priori be treated as dangerous.

Autonomous machines are similar to dogs in this respect. The more we can understand and “empathize” with their behavior, the better we can describe in what domains they are discriminating and non-arbitrary agents. We cannot be certain of their behavior (and it might be excessive to demand too much certainty), but the better we can anticipate the probability of certain actions – even ones stupid by human standards – the clearer the moral responsibilities of their handlers or suppliers become.

The problem is not that the machine lacks morality. The problem is that it is an imperfect proxy for somebody else’s choices, introducing enough noise and uncertainty into the chain of events that responsibility and rational response become disrupted. This implies two important constraints on the use of autonomous machines in war. The first is that their “psychology” can be inspected, tested and judged in a proper manner, so that legal rules can be applied (and moral responsibility assigned in some agreed way). The second is that most participants in a theater of war – including civilians – must have enough knowledge about the machines to act accordingly. If a machine cannot distinguish a white flag from a target, or if certain patterns of behavior will trigger attacks, then that is information they should morally and legally have. If they cannot avoid acting in ways that trigger dangerous consequences, the machine is an indiscriminate weapon.
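As a toy illustration of the first constraint, consider what an inspectable, testable decision rule might look like. The sketch below is purely hypothetical (there is no real perception model or targeting system behind these functions); it only shows that if the engagement rule is an explicit, runnable artifact, outside parties can probe it against agreed scenarios, such as checking that a surrender signal is never engaged.

```python
# Sketch of the "inspectable psychology" idea (illustrative only, no real
# targeting system): the engagement rule is an explicit, testable function,
# so regulators or courts could run agreed scenario suites against it.

from dataclasses import dataclass


@dataclass
class Scene:
    appears_armed: bool
    shows_surrender_signal: bool  # e.g. a white flag (hypothetical sensor output)


def may_engage(scene: Scene) -> bool:
    """Toy engagement rule standing in for whatever a real system computes."""
    if scene.shows_surrender_signal:
        return False  # hard legal/moral constraint, checkable from outside
    return scene.appears_armed


def test_surrender_is_never_engaged() -> None:
    # An external auditor could run suites like this against the fielded rule.
    for armed in (True, False):
        assert not may_engage(Scene(appears_armed=armed, shows_surrender_signal=True))


if __name__ == "__main__":
    test_surrender_is_never_engaged()
    print("surrender-signal constraint holds in all tested scenarios")
```

A fielded system would be vastly more complex and far harder to test exhaustively, but the principle stands: whatever computes the decision must be open to this kind of scrutiny if legal rules are to get any grip on it.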

Autonomy is not an end in itself, at least not for this kind of machine: it can reduce reliability and utility. As Shakespeare wrote in Coriolanus: “Do not cry havoc, where you should but hunt with modest warrant.”


Comments

    1. A moratorium would be nice, but (1) it is unlikely in practice, and (2) the time it takes to solve real ethical problems can be very, very long. It is not obvious how to decide that enough thought has been spent on the topic.

      In practice, what can be done is to get people in the international law community engaged in hashing out preliminary rule ideas, so that when the inevitable first cases arrive at The Hague there is something to build on. Even better would be if opinions existed that made people a bit more careful: after all, companies might think twice if reminded that war crimes have no statute of limitations.

  1. Joseph Savirimuthu (@J_Savim)

    Thoughtful as ever, Anders! Will mull this over – by the way, what should we problematize if the “ghost in the machine” is not an option?

    1. Much of AI ethics is about what autonomous devices do or are, but this is largely a red herring for subhuman machines (for superhuman machines, on the other hand, getting the right kind of motivations is a life-and-death matter for our species). What really matters is how autonomy leads to diffusion and avoidance of responsibility by those who actually should be held accountable.

      Even if our machines were moral agents the responsibility problem would remain. Since it is in the nature of people and organisations to try to take credit when things go well and avoid blame when things fail, it seems that in situations of unclear responsibility blame would tend to fall not on those who deserve it but on those who can be blamed. I am sure many organisations would love blameable machines, but it might not lead to the right kind of behaviour.
