Mariarosaria Taddeo

Just War Theory and Cyber Warfare

Lin’s, Allhoff’s and Rowe’s article in yesterday’s “The Atlantic” could not have been more timely. In the previous week a new cyber weapon, Flame, was ‘discovered’; the New York Times reported the story behind one of the most famous cyber attacks, Stuxnet, confirming everyone’s suspicion that both the US and Israel had launched the attack; and the NATO Cooperative Cyber Defence Centre of Excellence inaugurated the Fourth International Conference on Cyber Conflict, hosting military personnel, policy makers, politicians, legal experts and also an ethicist (myself) to discuss and share ideas, data and ‘unclassified’ material about cyber warfare. It turns out that cyber warfare really is today’s hot topic, or at least this week’s hot topic.
In the article, the authors stress the importance of the ethical implications of cyber warfare and the need for policies and regulations that would guarantee a just cyber warfare, that is, warfare respecting the principles of the Just War Theory tradition, which they list as: aggression, discrimination, proportionality, attribution, treacherous deceit, and a lasting peace.
The problem is that applying Just War Theory to the case of cyber warfare is not at all straightforward. Somehow, cyber warfare slips through the net of Just War Theory and proves quite difficult to regulate using old, traditional ethical principles.
Just War Theory is an ethical theory concerned with the preservation of and respect for human life and liberty; it is all about limiting casualties and physical damage. It is an ethical theory designed with classic warfare and its tangible targets in mind. In the grand scheme of Just War Theory there is no place for informational infrastructures, data and information. In other words, there is no concern for the targets of cyber warfare.
One may then wonder why we should bother with ethics at all when it comes to warfare that, in most circumstances, is waged using a piece of code against intangible objects, without directly causing casualties or physical damage.
For once, philosophers and ethicists are in the right place to provide a good answer, reminding themselves, law makers and policy makers that those intangible targets are something upon which the individuals and societies of the information age depend. Just consider how much of the GDP of several European countries rests mainly on such intangible goods (Floridi’s article).
If this were not enough, as members of information societies we actually attribute a moral value to informational infrastructures and to the data and information that they store. The importance we attribute to online privacy and anonymity provides a good example in this respect. So if a war targets precisely such information-related goods, it has to be waged fairly because, despite being intangible, those goods are valuable.
The authors are right to point to Just War Theory as the ethical framework to be taken into consideration. As a matter of fact, Just War Theory offers a set of principles for just war that remain valid for any type of warfare that may be waged, be it classic or cyber. It is also a fact that there is a hiatus between the ontology of the entities involved in traditional warfare and that of those involved in cyber warfare. This is because Just War Theory rests on an anthropocentric ontology: it is concerned with respect for human rights and excludes non-human entities from the moral discourse, and for this reason it does not provide sufficient means for addressing the case of cyber warfare. It is this hiatus that calls for philosophical and ethical work to fill it and to provide new grounds for ensuring just cyber warfare.
Another issue is to convince both policy makers and law makers that the gap in current policies and regulations that everyone is so concerned about can only be bridged by taking into account the moral standing of informational objects. This may well be the next task to be accomplished.

The unexpected turn: from the democratic Internet to the Panopticon

In the last ten years ICTs (information and communication technologies) have been increasingly used by militaries both to develop new weapons and to improve communication and propaganda campaigns, so much so that the military often refers to ‘information’ as the fifth dimension of warfare, in addition to land, sea, air and space. Given this scenario, it is not surprising that the Pentagon would invest part of its resources in developing a new program called Social Media in Strategic Communication (SMISC), allegedly ‘to get better at both detecting and conducting propaganda campaigns on social media’, as reported a few days ago by Wired (http://www.wired.com/dangerroom/2011/07/darpa-wants-social-media-sensor-for-propaganda-ops/).

The program has two main functions: it will support the military in its propaganda, and it will allow for identifying the “formation, development and spread of ideas and concepts (memes)” in social groups. Namely, the program will be able to spot rumours or emerging themes on the web and figure out whether such themes are coming up randomly or are the result of a propaganda operation by ‘adversary’ individuals or groups. To anyone even slightly concerned with ethical problems, all this rings more than one bell.
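
To make the idea of spotting an ‘emerging theme’ slightly more concrete, here is a minimal, purely illustrative Python sketch of one way such detection might work: flagging a keyword whose daily mention count suddenly spikes above its own recent baseline. SMISC’s actual methods are not public; the function name, data and threshold below are invented for the example, and distinguishing organic buzz from coordinated propaganda would of course require far more than this.

    # Toy sketch (not SMISC, whose internals are not public): flag a topic as an
    # 'emerging theme' when its latest daily mention count spikes well above its
    # own recent baseline, using a simple z-score test.
    from statistics import mean, stdev

    def is_emerging(counts, window=7, threshold=3.0):
        """Return True if the latest count is a burst relative to the
        preceding `window` days."""
        if len(counts) < window + 1:
            return False
        baseline, latest = counts[-(window + 1):-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            return latest > mu
        return (latest - mu) / sigma > threshold

    # Hypothetical daily mention counts for one keyword; the final day spikes.
    daily_mentions = [12, 9, 14, 11, 10, 13, 12, 95]
    print(is_emerging(daily_mentions))  # prints True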

SMISC is one more surveillance tool empowered by ICTs. We all know that the information we put on the web, on social networks or on websites, and even our queries on search engines, is mined and analysed for secondary purposes. But it becomes more worrying when the analysis is done by government agencies, as in this case the Internet becomes a tool for surveillance, a surveillance which may go far beyond the one we are already accustomed to. The unexpected turn is that the Internet, which has long been considered a ‘democratic place’, where anyone could express his or her thoughts and act more or less freely, could become the next Panopticon and provide the tool for monitoring both a wide range of information, from the newspaper one reads in the morning to one’s political commitments, and a vast number of people, virtually all web users.

This can have serious consequences. Consider the case of the recent riots and revolutions in the Middle East. In most cases, the Internet was the medium through which people could talk about the political situation of their countries, organise protests and also describe their conditions to other people all over the world. What would have happened if Middle Eastern governments had been able to spot the protest movements in their early days? Until now, governments, like the Egyptian one, have shut down the web in their countries to limit the circulation of information about what was happening; but the development of SMISC shows that a further step could soon be taken, namely the proactive use of the Internet by governments for surveillance purposes. In this case, as the technologies for data mining evolve, the Internet may come to represent the most powerful surveillance/intelligence tool developed so far. If so, it seems that it is time to start worrying about the rights of Internet users and to find ways of protecting them.

Individual privacy and the conduct of web users

From the 12th to the 14th of October, London will host the RSA Conference, which gathers together information security experts from across the world to discuss the most pertinent emerging issues in information security.

The safeguarding of users’ privacy is one of the most important and frequently discussed issues in the field of information security, and is therefore a major topic of the RSA Conference. In particular, BT’s chief technology officer, Mr Schneier, expressed his concern about how web companies deal with users’ privacy; he stressed the need for regulations and laws to create or encourage improved management of this issue (http://www.bbc.co.uk/news/technology-11524041).


Robots as companions for human beings, a reflection on the information revolution

At the beginning of this month the NYT published a highly interesting article on the use of robots in daily practices and activities, including a doctor using a robot to help her check the health condition of a patient, and a manager attending an office meeting through a robot avatar, complete with screen, camera, microphone and speakers.

In reading this article, the first consideration that arose concerned how ICTs are changing our habits and how augmented reality is no longer a scenario restricted to the world of science fiction. The emergence of real-world augmented reality is an immensely exciting and suggestive development, but there is also a second and perhaps more interesting consideration.

Reading about these ‘tele-operated’ robots led me to think about other forms of robot, the companion devices, which have been developed over the last decade and which seem to be becoming ever more popular and widespread. Consider for example the Wi-Fi-enabled Nabaztag (http://en.wikipedia.org/wiki/Nabaztag), the PARO therapeutic robot (http://www.parorobots.com/), or KASPAR (http://kaspar.feis.herts.ac.uk/); not to mention the famous AIBO dog (http://support.sony-europe.com/aibo/index.asp).

These robots represent a new form of artificial agent; they are able to act both reactively and proactively while interacting with the environment and with human beings. Moreover, they are, or will be, deployed to perform socially relevant tasks, such as nursing children and elderly people (http://www.sciencedaily.com/releases/2010/03/100324184558.htm), or simply to provide companionship to human agents.

It is not difficult to imagine that in forty years, once the young generation of today has grown old, robots such as AIBO or KASPAR will be considered commodities and may also hold fundamental social roles. They will not be a medium allowing a remote user to be tele-present somewhere else; they will be autonomous agents, able to act reactively and proactively and to share our environment.

These robots are interesting when considered from a technological and social perspective. For one thing, they show that we have the technology to design and build devices of this level of sophistication and that we are moving toward a new kind of social interaction. However, the most interesting aspect is revealed when the robots are considered from an ethical perspective.

Here is the second consideration. The dissemination of ICTs has brought about a revolution which concerns more than simply the development of new technology; this is the Fourth Revolution (http://www.philosophyofinformation.net/publications/pdf/tisip.pdf). Such a revolution affects the place of human beings in the universe: in its wake, it will be understood that there are other agents able to interact with us and with our environment.

As philosophers and ethicists, we need to investigate these changes, to address the issues related to the moral status of these new agents, and to determine a suitable ethical framework providing guidelines for the ethical use of, and interaction with, such agents.

Off- and on-line, an outdated distinction

Almost a month ago the websites of several newspapers and magazines (http://www.myfoxtwincities.com/dpps/news/who-is-cyber-pranking-victim-jessi-slaughter-dpgoha-20100720-fc_8747638) reported the case of an 11-year-old girl from Florida, known as Jessi Slaughter, who had been posting videos online (http://gawker.com/5589103/how-the-internet-beat-up-an-11+year+old-girl?skyline=true&s=i), which had been picked up by Stickydrama, a social networking tabloid website. One of the videos was a rather childish tirade about how she was better and prettier than everyone else. This video triggered an escalation, largely masterminded by the Anonymous posters on 4chan’s notorious /b/ image board. Hateful comments posted on online forums were followed by the publication of Jessi’s real name, address and phone numbers. Bogus pizza deliveries showed up at Jessi’s actual address, and prank calls rapidly turned into explicit death threats.

Jessi Slaughter responded with a retaliation video, in which disturbing language and hateful comments towards other Internet users, whom she calls ‘haters’, were not spared. The video and the hateful comments gained more steam when, in response to the threats against the young girl, a new video was posted, in which Jessi and her father addressed the ‘haters’ with threats of their own. Eventually, after even more hateful comments and pranks, the girl decided to take down her YouTube account and, with it, her videos.

This is not the first case of a so-called cyber-prank; the same has happened to others, such as ‘Boxxybabe’ and ‘Lexibee’. They too have since disappeared, but not before being the target of pranks and spoofs from the Internet. One of the most shocking cases is that of Megan Meier (http://www.nytimes.com/2007/11/28/us/28hoax.html), a 13-year-old girl who committed suicide in 2006 after being hoaxed by her cyber-boyfriend, who turned out to be a neighbour living a few houses away from her and who had been making fun of the girl for over a month.

Cases like these have been widely discussed in the Computer Ethics literature, as they bring to the fore the issue of on-line trust and, more generally, of on-line social interactions. Cyber-pranks seem to provide the most evident demonstration that one cannot trust anyone in the online environment, since the environmental conditions of the Internet, anonymity and tele-presence among others, do not allow for the prosecution of illegal or immoral behaviours. However, this is only part of the problem, and not even the most interesting part. In the end, one should not trust other users on-line in the same way that one should not trust a stranger met in the street.

There is another aspect that deserves attention, as it is a marked sign of our times: the blurring of the boundaries between the off-line and on-line worlds. In this respect, the case of Jessi Slaughter is even more significant, as it concerns young people and their access to and use of the Internet; a generation of individuals born with Facebook, YouTube and Skype, who do not perceive any difference between off- and on-line. For this generation, not only are privacy and anonymity not values, but life is lived without distinction between the public and the private sphere, between off- and on-line activity. Neither the former nor the latter is lived separately; rather, a new life develops, one which is never completely off-line and never exclusively on-line.

Cyber-pranks and on-line trust show that there is a new way of living, made possible by computer-mediated communication (CMC) and ICTs, with which the younger generations are extremely familiar. This new life is lived in a new environment, in which the world ‘out there’ is no more real than the virtual world, and actions performed in the latter have consequences in the former, and vice versa.

In conclusion, this is a new scenario on which philosophers and ethicists should focus their attention, in order to understand its rules and to provide principles and guidelines for safer and better living practices within it.

Cyber-war – the rhetoric of disruptive and non-destructive warfare

Mariarosaria Taddeo

BBC News (http://news.bbc.co.uk/1/hi/technology/8511711.stm) reported yesterday that the US Senate is about to appoint Lt General Keith Alexander as head of the U.S. Cyber Command (http://en.wikipedia.org/wiki/United_States_Cyber_Command). This is a sub-unified command of the United States armed forces. USCYBERCOM, as it is abbreviated, manages US cyber-warfare.
The existence of this command and the military career of the man who leads it prove once more the importance that cyber-warfare is gaining in contemporary political and military strategies.

