Just War Theory and Cyber Warfare
Lin, Allhoff and Rowe’s article in yesterday’s “The Atlantic” could not have been more timely. In the previous week a new cyber weapon, Flame, was ‘discovered’; the New York Times reported the story behind one of the most famous cyber attacks, Stuxnet, confirming everyone’s suspicion that both the US and Israel had launched the attack; and the NATO Cooperative Cyber Defence Centre of Excellence inaugurated the Fourth International Conference on Cyber Conflict, hosting militaries, policy makers, politicians, legal experts and also an ethicist (myself) to discuss and share ideas, data and ‘unclassified’ material about cyber warfare. It turns out that cyber warfare really is today’s hot topic, or at least this week’s.
In the article, the authors stress the importance of the ethical implications of cyber warfare and the need for policies and regulations that would guarantee a just cyber warfare, that is, warfare respecting the principles of the Just War Theory tradition, which they list as: aggression, discrimination, proportionality, attribution, treacherous deceit, and a long-lasting peace.
The problem is that applying Just War Theory to the case of cyber warfare is not straightforward at all. Somehow, cyber warfare slips through the net of Just War Theory, and it proves quite difficult to regulate using old, traditional ethical principles.
Just War Theory is an ethical theory concerned with the preservation of and respect for human life and liberty; it is all about limiting casualties and physical damage. It is an ethical theory designed with classic warfare and its tangible targets in mind. In the grand scheme of Just War Theory there is no place for informational infrastructures, data and information. In other words, there is no concern for the targets of cyber warfare.
One may then wonder why we should bother with ethics at all when it comes to warfare that, in most circumstances, is waged using a piece of code against intangible objects, without directly causing casualties or physical damage.
For once, philosophers and ethicists are in the right place to provide a good answer, reminding themselves, lawmakers and policy makers that those intangible targets are assets upon which individuals and societies of the information age depend. Just consider how much of the GDP of several European countries rests mainly on such intangible goods (see Floridi’s article).
If this were not enough, as members of information societies we actually attribute moral value to informational infrastructures and to the data and information they store. The importance we attribute to online privacy and anonymity provides a good example in this respect. So if a war targets precisely such information-related goods, it has to be a fair warfare because, despite being intangible, those goods are valuable.
The authors are right to point to Just War Theory as the ethical framework to be taken into consideration. Just War Theory offers a set of principles for just war that remain valid for any type of warfare that may be waged, be it classic or cyber. Yet there is a hiatus between the ontology of the entities involved in traditional warfare and that of the entities involved in cyber warfare. This is because Just War Theory rests on an anthropocentric ontology: it is concerned with respect for human rights and excludes non-human entities from the moral discourse, and for this reason it does not provide sufficient means for addressing the case of cyber warfare. It is this hiatus that calls for philosophical and ethical work to fill it and provide new grounds for ensuring just cyber warfare.
Another issue is to convince both policy makers and lawmakers that the gap in current policies and regulations that everyone is so concerned about can only be bridged by taking into account the moral standing of informational objects. This may be just the next task to be accomplished.
The unexpected turn: from the democratic Internet to the Panopticon
In the last ten years ICTs (information and communication technologies) have been increasingly used by militaries both to develop new weapons and to improve communication and propaganda campaigns, so much so that the military often refer to ‘information’ as the fifth dimension of warfare, in addition to land, sea, air and space. Given this scenario, it is not surprising that the Pentagon would invest part of its resources in developing a new program called Social Media in Strategic Communication (SMISC), allegedly ‘to get better at both detecting and conducting propaganda campaigns on social media’, as reported a few days ago on Wired (http://www.wired.com/dangerroom/2011/07/darpa-wants-social-media-sensor-for-propaganda-ops/).
The program has two main functions: it will support the military in their propaganda, and it will allow them to identify the “formation, development and spread of ideas and concepts (memes)” in social groups. Namely, the program will be able to spot rumours or emerging themes on the web and figure out whether such themes come up at random or are the result of a propaganda operation by ‘adversary’ individuals or groups. To anyone even slightly concerned with ethical problems, all this rings more than one bell.
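SMISC’s actual methods are not public, but to make the surveillance worry concrete, here is a minimal sketch of the simplest kind of “emerging theme” detection such a tool might rest on: flagging a term whose daily mention count spikes well above its recent baseline. The function name, data, and threshold below are all illustrative assumptions, not anything from DARPA’s program.

```python
# Hypothetical sketch of "emerging theme" spotting: flag a term whose
# latest daily mention count spikes far above its recent baseline.
# All names, numbers and thresholds here are illustrative assumptions.

from statistics import mean, stdev

def is_emerging(daily_counts, k=3.0):
    """Return True if the latest day's count exceeds the baseline
    mean by more than k standard deviations."""
    *baseline, today = daily_counts
    if len(baseline) < 2:
        return False  # not enough history to estimate a baseline
    mu = mean(baseline)
    sigma = stdev(baseline)
    # Guard against a flat baseline (near-zero variance)
    threshold = mu + k * max(sigma, 1.0)
    return today > threshold

# A steady topic vs. one that suddenly surges:
print(is_emerging([10, 12, 11, 9, 10, 13]))   # steady  -> False
print(is_emerging([10, 12, 11, 9, 10, 80]))   # surge   -> True
```

Even this toy version makes the ethical point vivid: deciding whether a surge is organic or orchestrated requires continuously collecting and mining everyone’s public chatter, which is exactly where the surveillance concern begins.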
SMISC is one more surveillance tool empowered by ICTs. We all know that the information we put on the web, on social networks or on websites, even our queries on search engines, is mined and analysed for secondary purposes. But it becomes more frightening when the analysis is done by government agencies, as in this case the Internet becomes a tool for surveillance, surveillance that may go far beyond what we are already accustomed to. The unexpected turn is that the Internet, long considered a ‘democratic place’ where anyone could express their thoughts and act more or less freely, could become the next Panopticon, providing the means to monitor both a wide range of information, from the newspaper one reads in the morning to one’s political commitments, and a vast number of people, virtually all web users.
This can have serious consequences. Consider the recent riots and revolutions in the Middle East. In most cases, the Internet was the medium through which people could discuss the political situation of their countries, organise protests and describe their conditions to people all over the world. What would have happened if Middle East governments had been able to spot the protest movements in their early days? Until now, governments, like the Egyptian one, have shut down the web in their countries to limit the circulation of information about what was happening; but the development of SMISC shows that a further step could soon be taken: the proactive use of the Internet by governments for surveillance purposes. In this case, as technologies for data mining evolve, the Internet may come to represent the most powerful surveillance/intelligence tool developed so far. If so, it seems it is time to start worrying about the rights of Internet users and to find ways of protecting them.
Robots as companions for human beings: a reflection on the information revolution
At the beginning of this month the NYT published a highly interesting article on the use of robots in daily practices and activities, including a doctor using a robot to help her check the health of a patient, and a manager attending an office meeting through a robot avatar, complete with screen, camera, microphone and speakers.
Reading this article, my first consideration concerned how ICTs are changing our habits and how augmented reality is no longer a scenario restricted to the world of science fiction. The emergence of real-world augmented reality is an immensely exciting and suggestive development, but there is also a second and perhaps more interesting consideration.
Reading about these ‘tele-operated’ robots led me to think about other kinds of robot, the companion devices developed over the last decade, which seem to be growing ever more popular and widespread. Consider for example the Wi-Fi-embedded Nabaztag (http://en.wikipedia.org/wiki/Nabaztag), the PARO therapeutic robot (http://www.parorobots.com/), or KASPAR (http://kaspar.feis.herts.ac.uk/); not to mention the famous AIBO dog (http://support.sony-europe.com/aibo/index.asp).
These robots represent a new form of artificial agent; they are able to act both reactively and proactively while interacting with the environment and with human beings. Moreover, they are, or will be, deployed to perform socially relevant tasks, such as caring for children and elderly people (http://www.sciencedaily.com/releases/2010/03/100324184558.htm), or simply to provide companionship to human agents.
It is not difficult to imagine that in forty years, once the young generation of today has grown old, robots such as AIBO or KASPAR will be considered commodities and may even hold fundamental social roles. They will not be a medium allowing a remote user to be tele-present somewhere else; they will be autonomous agents, able to function reactively and proactively and to share our environment.
These robots are interesting when considered from a technological and social perspective. For one thing, they show that we have the technology to design and build devices of this level of sophistication and that we are moving toward a new kind of social interaction. However, the most interesting aspect is revealed when considering the robots from an ethical perspective.
Here is the second consideration. The dissemination of ICTs has determined a revolution that concerns more than simply the development of new technology; this is the Fourth Revolution (http://www.philosophyofinformation.net/publications/pdf/tisip.pdf). Such a revolution affects the place of human beings in the universe: in its wake, it will be understood that there are other agents able to interact with us and within our environment.
As philosophers and ethicists we need to investigate such changes, to address the issues related to the moral status of these new agents, and to determine a suitable ethical framework providing guidelines for the ethical use of, and interaction with, these agents.