
Cyberwarfare: No New Ethics Needed

In an interesting recent essay in the Atlantic – ‘Is It Possible to Wage a Just Cyberwar?’ – Patrick Lin, Fritz Allhoff, and Neil Rowe argue that events such as the Stuxnet cyberattack on Iran suggest that the way we fight wars is changing, and with it the rules that govern them. It is indeed easy to see how nations may be tempted to use cyberweapons to attack anonymously, from a distance, and without the usual financial and personnel costs of conventional warfare. (See also Mariarosaria Taddeo’s interesting recent post on this blog.)

Lin et al. raise what they claim to be certain special issues arising from cyberwarfare that are not covered by standard ethics. But I want to suggest that, though cyberwarfare certainly raises ethical issues, they aren’t as novel as Lin et al. claim.

The first topic they discuss is aggression, which according to standard just war theory is the only just cause for war. Since this is usually taken to imply danger to human life, it may be difficult to justify a military response to a cyberattack on, say, a country’s banking system. But even if it were true that aggression usually involved a danger to life, it seems to me clear that it need not. A not uncommon experience when cycling home in Oxford late at night is for groups of drunken youths to shout abuse at one from a passing car. That’s aggression, but there’s no danger to my life or even my well-being (since it doesn’t bother me in the slightest). As Lin et al. say, it may indeed be difficult to distinguish a cyberattack from, say, espionage. But such grey areas are nothing new in warfare, especially since many possibly aggressive actions are in potential violation of treaties which are themselves open to differing interpretations. I see no reason, then, to conclude that traditional military ethics would not see even an unsuccessful attempt by a state to install malicious software in its enemy’s computer system as constituting an act of war deserving an appropriate, possibly military, response.

The next issue raised by Lin et al. is discrimination. Cyberattacks are like biological viruses (as Lin et al. themselves point out) in so far as they are likely to affect non-combatants as well as combatants. But the very mention of biological viruses itself shows that there is nothing new here. Wells in besieged towns have been poisoned by attackers for nearly three millennia.

The authors then move to proportionality. Their understanding of this is non-standard: ‘the idea that it would be wrong to cause more harm in defending against an attack than the harm of the attack in the first place’. Usually, proportionality is understood to regulate the means chosen in the light of the value of the goal achieved. So if your cyberattack has caused me huge damage, but I could respond so as to punish you and prevent your ever launching an attack on me again by inflicting much less damage, it would be disproportionate if I were to inflict on you any damage beyond that point.

But let’s take their conception of proportionality. One issue is that some cyberattacks may ‘go viral’ in ways not intended by those who launch them. But there is nothing novel about unintended consequences, and those who launch such attacks do so in full knowledge of the risks they are imposing. Likewise, it has often been the case that those who unleash the dogs of war know full well that once released it may well be impossible to restrain them, and those who have been harmed find it hard to work out exactly how significant the harm in question is or may turn out to be.

According to Lin et al., cyberwarfare poses special problems of attribution. Combatants should be identifiable, and often in cyberwarfare they will not be, which makes it harder to avoid harming non-combatants in any response. Again, there is nothing new here. Consider, for example, those many British service personnel who worked undercover in France during the Second World War. They certainly posed some risk to ordinary French citizens who might have been confused with them. The idea of Lin et al. that treaties should be drawn up requiring that cyberattacks carry a digital signature strikes me as about as plausible as the idea that these British service personnel should have been required to wear full uniform at all times.

The authors then ask whether cyberattacks, which require people perhaps to click on some malicious link, might count as ‘perfidy’ within international law. The examples given in the 1977 Protocol added to the 1949 Geneva Conventions, under Article 37, are the following:

It is prohibited to kill, injure or capture an adversary by resort to perfidy. Acts inviting the confidence of an adversary to lead him to believe that he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict, with intent to betray that confidence, shall constitute perfidy. The following acts are examples of perfidy:
(a) The feigning of an intent to negotiate under a flag of truce or of a surrender;
(b) The feigning of an incapacitation by wounds or sickness;
(c) The feigning of civilian, non-combatant status; and
(d) The feigning of protected status by the use of signs, emblems or uniforms of the United Nations or of neutral or other States not Parties to the conflict.

Consider for example an infected email sent by some state to its enemy’s military authorities apparently from the Red Cross. Could this not be prohibited under (d), above? I think not, since there is no feigning of protected status. An email apparently from some arms manufacturer, for example, would be equally deceptive.

Finally, Lin et al. focus on reversibility. As they point out, some cyberattacks might be reversible, with use of, say, back-up files or decryption. But, of course, the effects of some conventional attacks are reversible too. Cities can be rebuilt, or orchards cleared of landmines and replanted.

The technology of cyberwarfare is of course new. But the ethical issues it raises have been discussed for hundreds of years.



5 Comments on this post

  1. The ethical issues might be identical, but their relative weight can change drastically. Self-replicating weapons can spread at an amazing rate (the “SQL Slammer” worm had a doubling time of 8.5 seconds, and infected 90% of vulnerable machines worldwide within 10 minutes). Lack of attribution makes perfidy far more appealing and strongly weakens accountability: if someone is spreading a Red Cross virus it is unlikely that any nation or individual could ever be tied to it. Interested third parties and non-state groups feel free to join conflicts on-line from the security of their non-involved nations, yet inflict real effects on attacked locations. And as virus-infected drone networks have demonstrated, the potential for cyberwarfare to influence systems equipped for deadly force is real.

    It seems to me that these changes in weight are so big that they might require very different approaches to maintaining rules of war than in the past. So the ethics might be the same, but the practice might have to be different.

    1. Anthony Drinkwater

      It seems to me that Lin et al are absolutely right. We must build on just war theory to prevent unjust cyber-war.
      The history of the 20th century offers us a shining example of the crucial role of just war theory in preventing aggressive wars, state terrorism, genocide and mass starvation policies. (And, of course, bringing the losers to justice.)
      We should all be grateful to Lin et al. when they state that ‘We need not be helpless bystanders, merely watching events unfold and warfare evolve in the digital age. With hindsight and foresight, we have the power to be proactive’.
      We will all surely sleep more securely in our beds tonight.

  2. Thanks, both. Actually I think you’re both agreeing with me that we don’t need a new ethics here, though it depends on what you mean by ‘building on’ JWT, Anthony. Anders: speed of replication is also high, of course, in the case of standard biological viruses, and biological warfare involving the release of such viruses goes back a long way (and raises at least the same problems of accountability). I’m no international lawyer, but I presume the rules on perfidy were drawn up largely to protect the innocent individuals (e.g. Red Cross personnel) being imitated. And mere deception is nothing new (even if there never was a Trojan horse…). Yes, there might be third-party involvement. Again, nothing new. Consider US foreign policy since WWII. So, yes, the application of rules will be somewhat new, because the technology is new. But the ethics remains the same.

  3. Anthony Drinkwater

    Yes, Roger, I agree completely with your well-reasoned argument – for which, thanks.
    I was also (heavy-handedly) suggesting that just-war theory hasn’t helped us too much in the past… and that we shouldn’t count over-much on international symposia on cyber-war.

    1. Yes, Anthony. I suspect that, at least in the short term, sharing intelligence and defence strategies will be more important.
