Andrew Hessel, Marc Goodman and Steven Kotler sketch in an article in The Atlantic a not-too-far future when the combination of cheap bioengineering, synthetic biology and crowdsourcing of problem solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as there existed someone who had a reason to use it and the resources to pay for it. The Secret Service appears to be aware of the problem and does its best to wipe away traces of the President, but it is hard to imagine this being perfect, working for old DNA left behind years ago, or being applied by all potential targets. In fact, it looks like the US government is keen on collecting not just biometric data, but DNA from foreign potentates. They might be friends right now, but who knows in ten years…
In an interesting recent essay in the Atlantic – ‘Is it Possible to Wage a Just Cyberwar?’ – Patrick Lin, Fritz Allhoff, and Neil Rowe argue that events such as the Stuxnet cyberattack on Iran suggest that the way we fight wars is changing, as well as the rules that govern them. It is indeed easy to see how nations may be tempted to use cyberweapons to attack anonymously, from a distance, and without the usual financial and personnel costs of conventional warfare. (See also Mariarosaria Taddeo’s interesting recent post on this blog.)
Lin et al. raise what they claim to be certain special issues arising from cyberwarfare that are not covered by standard ethics. But I want to suggest that, though cyberwarfare certainly raises ethical issues, they aren’t as novel as Lin et al. claim.
The first topic they discuss is aggression, which according to standard just war theory is the only just cause for war. Since this is usually taken to imply danger to human life, it may be difficult to justify a military response to a cyberattack on, say, a country’s banking system. But even if it were true that aggression usually involved a danger to life, it seems to me clear that it need not. A not uncommon experience when cycling home in Oxford late at night is for groups of drunken youths to shout abuse at one from a passing car. That’s aggression, but there’s no danger to my life or even my well-being (since it doesn’t bother me in the slightest). As Lin et al. say, it may indeed be difficult to distinguish a cyberattack from, say, espionage. But such grey areas are nothing new in warfare, especially since many possibly aggressive actions are in potential violation of treaties which are themselves open to differing interpretations. I see no reason, then, to conclude that traditional military ethics would not see even an unsuccessful attempt by a state to install malicious software in its enemy’s computer system as constituting an act of war deserving an appropriate, possibly military, response.
The next issue raised by Lin et al. is discrimination. Cyberattacks are like biological viruses (as Lin et al. themselves point out) in so far as they are likely to affect non-combatants as well as combatants. But the very mention of biological viruses itself shows that there is nothing new here. Wells in besieged towns have been poisoned by attackers for nearly three millennia.
The authors then move to proportionality. Their understanding of this is non-standard: ‘the idea that it would be wrong to cause more harm in defending against an attack than the harm of the attack in the first place’. Usually, proportionality is understood to regulate the means chosen in the light of the value of the goal achieved. So if your cyberattack has caused me huge damage, but I could respond so as to punish you and prevent your ever launching an attack on me again by inflicting much less damage, it would be disproportionate if I were to inflict on you any damage beyond that point.
But let’s take their conception of proportionality. One issue is that some cyberattacks may ‘go viral’ in ways not intended by those who launch them. But there is nothing novel about unintended consequences, and those who launch such attacks do so in full knowledge of the risks they are imposing. Likewise, it has often been the case that those who unleash the dogs of war know full well that, once released, it may well be impossible to restrain them, and that those who have been harmed will find it hard to work out exactly how significant the harm in question is or may turn out to be.
According to Lin et al., cyberwarfare poses special problems of attribution. Combatants should be identifiable, and often in cyberwarfare they will not be, which makes it harder to avoid harming non-combatants in any response. Again, there is nothing new here. Consider, for example, those many British service personnel who worked undercover in France during the Second World War. They certainly posed some risk to ordinary French citizens who might have been confused with them. The idea of Lin et al. that treaties should be drawn up requiring that cyberattacks carry a digital signature strikes me as about as plausible as the idea that these British service personnel should have been required to wear full uniform at all times.
The authors then ask whether cyberattacks, which require people perhaps to click on some malicious link, might count as ‘perfidy’ within international law. The examples given in Article 37 of the 1977 Protocol additional to the 1949 Geneva Conventions are the following:
It is prohibited to kill, injure or capture an adversary by resort to perfidy. Acts inviting the confidence of an adversary to lead him to believe that he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict, with intent to betray that confidence, shall constitute perfidy. The following acts are examples of perfidy:
(a) The feigning of an intent to negotiate under a flag of truce or of a surrender;
(b) The feigning of an incapacitation by wounds or sickness;
(c) The feigning of civilian, non-combatant status; and
(d) The feigning of protected status by the use of signs, emblems or uniforms of the United Nations or of neutral or other States not Parties to the conflict.
Consider, for example, an infected email sent by some state to its enemy’s military authorities, apparently from the Red Cross. Could this not be prohibited under (d), above? I think not, since there is no feigning of protected status. An email apparently from some arms manufacturer, for example, would be equally deceptive.
Finally, Lin et al. focus on reversibility. As they point out, some cyberattacks might be reversible, with the use of, say, back-up files or decryption. But, of course, the effects of some conventional attacks are reversible too. Cities can be rebuilt, or orchards cleared of landmines and replanted.
The technology of cyberwarfare is of course new. But the ethical issues it raises have been discussed for hundreds of years.