
Four… three… two… one… I am now authorized to use physical force!

Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield, warns that we are well on our way to getting military killer robots with great autonomy in applying deadly force. Current military "robots" such as UAVs have limited autonomy: they are remotely controlled by humans, but are increasingly being given the ability to patrol, find targets and attack on their own. It would be a natural progression to give them increasingly free rein, with the humans merely granting permission – but in an active situation human reactions might be too slow. Will the current convention that a properly trained human military operator has to make the final decision still hold true in the future?

Currently humans make deadly mistakes when remote-controlling robots, so having a human in the loop does not necessarily save lives. It merely ensures that somebody is responsible. Ideally a trained operator has a sense of judgment that minimizes the number of mistakes, but it would be absurd to think mistakes could be eliminated entirely.

The idea that humans will always be in the loop is problematic. Not because it cannot be legislated, but because the human may be reduced to rubber-stamping machine decisions. In a complex theater of war, with information overload, rapid responses and more machines than humans, the amount of human attention, intelligence and moral oversight that can be devoted to any particular incident will be limited. Even if a human is granting permission, it is likely that they will come to trust the judgment of the machine; they may make a formal decision, but they are not making a morally considered decision.

The only way of preventing this would be to have conventions ensuring that human controllers are always given ample room to consider the full ramifications of the situation. This would likely slow the operational tempo, and hence strong institutional forces would push towards even more machine autonomy to make up for the slow human decisions, and towards granting permissions without undue delay. In the end this is likely to put the operators in a bind: not acting fast enough would be insubordination, not considering carefully enough would be breaking the conventions.

It is hence likely that at least some lethal decisions will be effectively taken by robots, even when this is not the official or desirable situation. The real victims of killer robots may be soldiers forced to become responsible for actions that are not their own. 

The main problem, according to Professor Sharkey, is that robots are not very good at discrimination: it is hard enough for humans to distinguish between combatants and civilians. Hence the robots would likely make occasional deadly mistakes. As Sharkey sees it, developers may have oversold their ability to program the laws of war into the machines. Current robots are not moral subjects in any sense: they are too simple-minded to comprehend anything, let alone moral principles, and cannot be expected to follow moral standards in the sense a human could. At the same time they are autonomous in the sense that they sense their environment and can act on it according to internal states and rules that may be complex and even unknowable to an outside observer. A simple program can produce very unexpected behaviors; advanced robots are even more likely to surprise, and the correctness of their behavior under all circumstances will be impossible to validate.
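As a toy illustration of that last point (my own, not Sharkey's): the few lines of arithmetic below are about as simple a rule system as one can write, yet whether the loop halts for every starting value is the Collatz conjecture, which remains unproven. Validating the full behavior of a fielded autonomous robot is a vastly harder problem.

```python
# Toy illustration: even a trivial rule set resists full validation.
# Whether this loop halts for every positive n is the Collatz conjecture,
# still an open problem despite the program's simplicity.
def collatz_steps(n: int) -> int:
    """Count steps until n reaches 1 under the 3n+1 rule."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

if __name__ == "__main__":
    # Nearby starting points give wildly different trajectories:
    for start in (26, 27):
        print(start, collatz_steps(start))  # 26 -> 10 steps, 27 -> 111 steps
```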

A human responsible for a group of robots is in many ways equivalent to a human responsible for a group of animals. He is not in total control of their actions, but he is supposed to have a degree of control over them, and he is responsible for the commands he gives them (including the decision to activate them) and for the foreseeable consequences of those commands. It would be a form of command responsibility, where de facto command over the robots makes the person legally and morally accountable. But depending on the programming of the robots, it might be hard to maintain that responsibility.

A simple example of the problem is whether to attack people displaying a flag of truce. Attacking them is a war crime, and presumably releasing robots that would attack them would also be regarded as a war crime. But a flag of truce is an ill-defined category: it could be a t-shirt or a handkerchief, and the white could be muddied. If robots are programmed to err on the side of caution, the enemy would likely soon figure out how to dress in order not to be attacked, without actually committing the crime of perfidy. If the robots are programmed to err on the other side, they will cause war crimes to be committed. And, as argued above, leaving the decision up to a controlling human might easily make him formally responsible in situations where he is not actually responsible.
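As a purely hypothetical sketch of that dilemma (the names, scores and thresholds below are invented for illustration and do not describe any real targeting system), the choice of which side to err on can be pictured as a single threshold on an imperfect confidence estimate:

```python
# Hypothetical sketch of the threshold dilemma; everything here is invented
# for illustration, not drawn from any real weapon system.
from dataclasses import dataclass

@dataclass
class Contact:
    description: str
    truce_score: float  # imperfect estimate (0..1) that a flag of truce is shown

def engagement_decision(contact: Contact, truce_threshold: float) -> str:
    """Return 'hold' if the truce estimate clears the threshold, else 'engage'."""
    return "hold" if contact.truce_score >= truce_threshold else "engage"

contacts = [
    Contact("clear white flag", 0.95),
    Contact("muddied handkerchief", 0.55),
    Contact("combatant waving a pale t-shirt to game the sensor", 0.65),
]

# Err on the side of caution: a low threshold spares the muddied handkerchief,
# but also lets the t-shirt trick succeed -- the behavior the enemy will learn.
# Err the other way: a high threshold engages the muddied handkerchief,
# i.e. a war crime. No threshold removes the tradeoff; it only relocates it.
for threshold in (0.5, 0.9):
    print(f"threshold={threshold}:",
          [(c.description, engagement_decision(c, threshold)) for c in contacts])
```

Wherever the threshold is placed, it either hands the enemy a signal that is easy to fake or guarantees that some genuinely protected people are attacked; the morally loaded decision is the threshold itself, which is exactly the judgment the human operator is supposed to be exercising.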

Matthew Knowles, a representative of the aerospace, defence and security trade association SBAC, said of Sharkey's concerns: "Scare stories such as this are not helpful contributions to what is an important debate." But recognizing the serious problems posed by weapons systems in which human and machine autonomy mix, the institutional pressures that may force soldiers to take formal responsibility when they are not truly in control, the risk of overestimating the reliability of machine (and human) decision-making, and the profound worry many feel over the increasing automation of warfare is the necessary starting point for that important debate. Defining proper command responsibility for human-machine systems is not going to be easy. But it is hardly beyond ethics, law and international agreement.
