What the Present Debate About Autonomous Weapons is Getting Wrong
Author: Michael Robillard
Many people are deeply worried about the prospect of autonomous weapons systems (AWS). Many of these worries are merely contingent, having to do with issues like unchecked proliferation or potential state abuse. Several philosophers, however, have advanced a stronger claim, arguing that there is, in principle, something morally wrong with the use of AWS independent of these more pragmatic concerns. Some have argued, explicitly or tacitly, that the use of AWS is inherently morally problematic because it necessarily creates a so-called ‘responsibility gap’.
We can summarise this argument as follows:
- In order to wage war ethically, we must be able to justly hold someone morally responsible for the harms caused in war.
- Neither the programmers of an AWS nor its military implementers could justly be held morally responsible for the battlefield harms caused by AWS.
- We could not, as a matter of conceptual possibility, hold an AWS itself morally responsible for its actions, including its actions that cause harms in war.
- Hence, a morally problematic ‘gap’ in moral responsibility is created, thereby making it impermissible to wage war through the use of AWS.
This argument is mistaken, for the simple reason that either the AWS is an agent in the morally relevant sense or it is not.