
What the Present Debate About Autonomous Weapons is Getting Wrong

Author: Michael Robillard

Many people are deeply worried about the prospect of autonomous weapons systems (AWS). Many of these worries are merely contingent, having to do with issues like unchecked proliferation or potential state abuse. Several philosophers, however, have advanced a stronger claim, arguing that there is, in principle, something morally wrong with the use of AWS independent of these more pragmatic concerns. Some have argued, explicitly or tacitly, that the use of AWS is inherently morally problematic in virtue of a so-called ‘responsibility gap’ that their use necessarily entails.

We can summarise this thesis as follows:

  1. In order to wage war ethically, we must be able to justly hold someone morally responsible for the harms caused in war.
  2. Neither the programmers of an AWS nor its military implementers could justly be held morally responsible for the battlefield harms caused by AWS.
  3. We could not, as a matter of conceptual possibility, hold an AWS itself morally responsible for its actions, including its actions that cause harms in war.
  4. Hence, a morally problematic ‘gap’ in moral responsibility is created, thereby making it impermissible to wage war through the use of AWS.

This thesis is mistaken, for the simple reason that, at the end of the day, either the AWS is an agent in the morally relevant sense or it isn’t.

If it isn’t, then premise 2 is either false or true but vacuous. It is false if moral responsibility falls on the persons within the causal chain, to the extent that they knew or should have known about the harm they were contributing to and the degree to which they could have done otherwise. It is true but vacuous if the harm was the result of a genuine accident.

In such a case there will indeed be a gap between causal and moral responsibility, but it will be a non-problematic one. If the harm is the result of a genuine accident that no rational agent could possibly have foreseen, then it is no different, morally speaking, from any other battlefield case involving unforeseeable malfunctions of weapons or equipment.

Admittedly, there might be an intransitivity between the output of the AWS and the intentions of the individual members contributing to its creation and implementation, but this intransitivity will be in virtue of the epistemic blind spots endemic to any collective action problem in general, and not in virtue of the machine’s additional autonomy. If, however, it turns out that the AWS does have additional autonomy, then moral responsibility would necessarily fall on the AWS in direct proportion to the degree of autonomy it possessed, and in just the same way it would for any of the other agents within the causal chain.

The responsibility-gap understanding of AWS is not only conceptually problematic but also morally dangerous. If AWS are genuine agents, then we are doing something significantly wrong by not taking seriously the notion that AWS could indeed be appropriate bearers not only of moral responsibility but also of rights and/or interests. Accordingly, turning them off, reprogramming them, or having them fight our wars for us could count as serious wrongs. If, however, AWS are not genuine agents, then moral responsibility for any potential harm an AWS might cause would fall back on the system of human programmers and implementers, and would thereby warrant closer examination of that system’s organizational and causal structure so as to prevent future harms from occurring.

The notion of a responsibility gap stifles further examination of either of these sets of important moral considerations. If we think the prevention of harm is important, then whenever a harm occurs in war, its occurrence demands that we examine our own actions and assumptions, as well as those of others, in order to find an explanation for why things occurred as they did. By discerning such an explanation, we can then take measures to see that the same harm does not repeat itself. If instead we view an AWS’s actions as radically divorced from the human decision-making that went into it, we seem to absolve potential contributors to a harm of any personal responsibility to reflect on whether they or their peers might be at fault.

This is not the attitude we want to be inspiring in our designers, programmers, and implementers. Yet by conceiving of AWS in the problematic way I am here challenging, the present AWS debate fosters just such an attitude of radical detachment from both responsibility and outcome. Either an AWS is an agent in a morally relevant sense, or it is not. Accordingly, persons within the AWS debate must accept one of these two disjuncts and then accept the conceptual and moral entailments that follow. To conceive of an AWS as anything else is not only conceptually incoherent but also morally dangerous, since it shifts focus away from the only two possible loci where responsibility could conceivably obtain and instead places the debate in a peculiar in-between space that is ultimately unproductive.

I have argued elsewhere that we should morally appraise AWS as we would any other social institution, regardless of medium. Whether the set of collective decision procedures is instantiated in the form of computer software or in the form of an office building full of workers is of no importance from a moral point of view. That being said, I’m more than happy to concede that I could be wrong about the AWS’s metaphysical status, and that something about the emergent features of the machine’s learning algorithms could make it such that its decisions were truly its own. But once again, if that were the case, then the AWS would count as a genuine agent and not as some weird moral chimera where all of our usual thinking about agency and responsibility suddenly breaks down.

If the AWS is an agent in the morally relevant sense (i.e. able to guide itself in response to moral and epistemic reasons, capable of being a bearer of moral responsibility, and capable of being a bearer of rights and/or interests), then its creation would mean that the promise of strong A.I. had finally arrived and that our knowledge of the metaphysical and moral universe had greatly expanded. This would entail, however, a world of much greater moral demandingness than at present. If, however, the alternative account of AWS that I have suggested is correct, then we would have to come to terms with a sobering truth that is at once harsh but also empowering; namely, that there are, in fact, no machines who might one day conspire against us, no automata who might suddenly decide to revolt; in other words, no killer robots, just us.


Large portions of this post have been taken from the full argument I present in ‘No Such Thing as Killer Robots’, Journal of Applied Philosophy, June 2017. For similar views, see Susanne Burri, ‘In Defense of Killer Robots’, in Ryan Jenkins, Michael Robillard, and Bradley Jay Strawser (eds.), Who Should Die: Liability and Killing in War (Oxford: Oxford University Press, forthcoming), and Steve Kershnar, ‘Autonomous Weapons Pose No Moral Problem’, in Bradley Jay Strawser (ed.), Killing by Remote Control: The Ethics of an Unmanned Military (New York: Oxford University Press, 2013), pp. 229-45.


1 Comment on this post

  1. I think your discussion would have greatly benefited from examining two key concepts that you don’t really talk about, but which underlie your entire reasoning.

    1. How autonomous?
    It is clear that premise 2 fails for drones and, more generally, for the Unmanned Aerial Vehicles currently used by Western armies, whose autonomy reduces to suggesting targets and engaging in pre-combat manoeuvres and other routines that do not involve, for example, selecting a target and firing missiles. But it is much less clear that premise 2 fails for AWS whose autonomy lies in a built-in decision mechanism, for example one based on machine learning algorithms embedded in massive “artificial neural networks”, allowing the AWS to select targets *and* to fire without the explicit approval of a pilot or any competent authority. It is easy to imagine contexts where AWSs would have this kind of built-in decision mechanism, for example emergency cases with too little time for humans to contribute to the decision. This type of autonomy rests on a two-tier process (1. programming the AWS to learn; 2. programming the AWS to fire given how the situation at hand relates to what has been learnt), with two conditions that are individually necessary and jointly sufficient, yet individually insufficient, and it involves a pretty “stretched out” line of people. That makes it very difficult to identify those to morally appraise, because the connection between the collective actions and the outcome is very thin (and you could even argue that whoever took the decision to put the AWS in a situation where he or she should have known it would trigger such-and-such use of violence should also be part of the moral assessment; this might mean morally assessing part of the whole military hierarchy).

    2. Individual or collective responsibility?
    Suppose you’ve solved the issues above; how then do you apply your moral assessment to the parties involved? Do you treat them collectively? Individually? Some collectively, some individually? Law-makers are already scratching their heads over how to morally assess companies involved in clearly blameworthy actions, so it’s very likely that they will struggle even more in the case at hand, because the impact of the AWS’s autonomy on the programmers’ and implementers’ responsibility will be very difficult to measure, and may be context-dependent to a degree where all hope of passing a principled judgement (i.e. one applicable to similar situations in general) is lost.

    So even though you may be right in your claim that there is no metaphysical responsibility gap, you haven’t ruled out the possibility of an epistemic and practical responsibility gap, which is probably much more relevant to the public debate than the metaphysical one.
