Some researchers in the US recently conducted an ‘experiment in the law as algorithm’. (One of the researchers involved with the project was interviewed by Ars Technica, here.) At first glance, this seems like quite a simple undertaking for someone with knowledge of a particular law and mathematical proficiency: laws are clearly defined rules, which can be broken in clearly defined ways. This is most true of strict liability offences, which require no proof of a mental element of the offence (the mens rea). An individual can commit a strict liability offence even if she had no knowledge that her act was criminal and no intention to commit the crime. All that is required under strict liability statutes is that the act itself (the actus reus) was voluntary. Essentially: if you did it, you’re liable – it doesn’t matter why or how. So, for strict liability offences such as speeding, it would seem straightforward enough to create an algorithm that could compare actual driving speed with the legal speed limit and adjudicate liability accordingly.
This possibility of law as algorithm is what the US researchers aimed to test with their experiment. They imagined a future of automated law enforcement, especially for simple laws like those governing driving. To conduct their experiment, the researchers assigned a group of 52 programmers the task of automating the enforcement of driving speed limits. A late-model vehicle was equipped with a sensor that recorded its actual speed over an hour-long commute. The programmers, working independently, each wrote a program that computed the number of speed limit violations and issued mock traffic tickets.
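The article does not reproduce any of the programmers’ code, but a minimal sketch of the task might look like the following. The data format (timestamped speed samples paired with the posted limit) and the decision to treat a run of consecutive over-limit samples as a single violation are assumptions made here purely for illustration – and that second choice is exactly the kind of interpretive decision each programmer had to make for themselves.

```python
# A minimal sketch of the programmers' task, not the study's actual code.
# Assumes each sample is (timestamp in seconds, measured speed, posted limit);
# this data format is hypothetical.

def count_violations(samples, tolerance=0.0):
    """Issue one mock ticket per violation.

    A run of consecutive over-limit samples is treated here as a single
    violation; reading the law differently would yield different counts.
    """
    tickets = []
    in_violation = False
    for t, speed, limit in samples:
        if speed > limit + tolerance:
            if not in_violation:  # start of a new violation
                tickets.append({"time": t, "speed": speed, "limit": limit})
                in_violation = True
        else:
            in_violation = False
    return tickets


if __name__ == "__main__":
    commute = [(0, 28, 30), (1, 33, 30), (2, 34, 30), (3, 29, 30), (4, 46, 40)]
    print(len(count_violations(commute)), "mock ticket(s) issued")
```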
Yesterday, Charles Foster discussed the recent study showing that Facebook ‘Likes’ can be plugged into an algorithm to predict things about people – things about their demographics, their habits and their personalities – that they didn’t explicitly disclose. Charles argued that, even though the individual ‘Likes’ were voluntarily published, to use an algorithm to generate further predictions would be unethical on the grounds that individuals have not consented to it and, consequently, that to go ahead and do it anyway is a violation of their privacy.
I wish to make three points contesting his strong conclusion, and instead offer a more qualified position: simply running the algorithm on publicly available ‘Likes’ data is not unethical, even if no consent has been given. Doing particular things based on the output of the algorithm, however, might be.
By Charles Foster
When you click ‘Like’ on Facebook, you’re giving away a lot more than you might think. Your ‘Likes’ can be assembled by an algorithm into a terrifyingly accurate portrait.
Here are the chances of an accurate prediction:
Single vs in a relationship: 67%
Parents still together when you were 21: 60%
Cigarette smoking: 73%
Alcohol drinking: 70%
Drug use: 65%
Caucasian vs African American: 95%
Christianity vs Islam: 82%
Democrat vs Republican: 85%
Male homosexuality: 88%
Female homosexuality: 75%
Gender: 93%
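For the curious, here is a toy sketch of the general idea – fit a classifier to a binary user-by-Like matrix and use it to predict an undisclosed trait. This is not the study’s actual pipeline, and every number, label and feature in it is invented for illustration.

```python
# Sketch only: predict a hidden trait from which pages a user has Liked.
# All data below is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# rows = users, columns = pages; 1 means the user Liked that page
likes = np.array([
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
])
trait = np.array([1, 0, 1, 0])  # e.g. smoker / non-smoker (hypothetical labels)

model = LogisticRegression().fit(likes, trait)
new_user = np.array([[1, 0, 1, 1, 0]])
print(model.predict_proba(new_user))  # estimated probability of each trait value
```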
In an article in The Atlantic, Andrew Hessel, Marc Goodman and Steven Kotler sketch a not-too-distant future in which the combination of cheap bioengineering, synthetic biology and crowdsourced problem-solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as someone had a reason to use it and the resources to pay for it. The Secret Service seems to be aware of the problem and does its best to sweep away traces of the President, but it is hard to imagine this being perfect, working for old DNA left behind years ago, or being applied by all potential targets. In fact, it looks like the US government is keen on collecting not just biometric data but DNA from foreign potentates. They might be friends right now, but who knows in ten years…
The gene for internet addiction has been found! Well, actually it turns out that 27% of internet addicts have the genetic variant, compared to 17% of non-addicts. The ENCODE project has overturned the theory of ‘junk DNA’! Well, actually we already knew long before that that DNA was doing things, and the definition of ‘function’ used is iffy. Alzheimer’s disease is a new ‘type 3 diabetes’! Except that no diabetes researchers believe it. Sensationalist reporting of science is everywhere, distorting public understanding of what science has discovered and of its relative importance. If the media ought to try to give a full picture of the situation, they seem to be failing.
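To see how little that internet-addiction variant actually tells you, a back-of-the-envelope Bayes calculation helps. The 27% and 17% figures are the reported ones; the 10% base rate of addiction below is a purely hypothetical assumption for illustration.

```python
# Back-of-the-envelope check on the "internet addiction gene".
# 27% / 17% are the reported figures; the 10% base rate is assumed for illustration.
p_variant_given_addict = 0.27
p_variant_given_nonaddict = 0.17
p_addict = 0.10  # hypothetical base rate of internet addiction

p_variant = (p_variant_given_addict * p_addict
             + p_variant_given_nonaddict * (1 - p_addict))
p_addict_given_variant = p_variant_given_addict * p_addict / p_variant
print(round(p_addict_given_variant, 3))  # ~0.15: carrying the variant barely moves the odds
```

On those assumptions, learning that someone carries the variant only raises the chance that they are an internet addict from 10% to about 15% – hardly “the gene for internet addiction”.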
But before we start blaming science journalists, maybe we should take a hard look at the scientists. A new study shows that 47% of press releases about controlled trials contained spin, emphasizing the beneficial effect of the experimental treatment. This carried over into subsequent news stories, which often copied the original spin. Maybe we could try blaming university press officers instead, but the study found spin in 41% of the papers’ abstracts too, typically overestimating the benefit of the intervention or downplaying its risks. The only way to find out the real story is to read the paper itself – something that requires a bit of skill, and quite often paying for access.
Who to blame, and what to do about it?
Alistair Croll has written a thought-provoking article, Big data is our generation’s civil rights issue, and we don’t know it. His basic argument is that the new economics of collecting and analyzing data has changed how it is used. Once data was expensive to collect, so only the data needed to answer particular questions was gathered. Today it is cheap to collect, so it can be collected first and analyzed later – “we collect first and ask questions later”. This means that the questions eventually asked can be very different from the questions the data seem to be about, and in many cases they can be problematic. Race, sexual orientation, health or political views – all important for civil rights – can be inferred from apparently innocuous information provided for other purposes: names, soundtracks, word usage, purchases, and search queries.
The problem, as he notes, is that to handle this new situation we need to link what the data is with how it can be used. And this cannot be done purely technologically; it requires societal norms and regulations. What kinds of ethics do we need to safeguard civil rights in a world of big data?
…governments need to balance reliance on data with checks and balances about how this reliance erodes privacy and creates civil and moral issues we haven’t thought through. It’s something that most of the electorate isn’t thinking about, and yet it affects every purchase they make.
This should be fun.
On July 1, Professor Steve Mann of the University of Toronto got into an altercation at a Paris McDonald’s, apparently because employees objected to his camera glasses. McDonald’s denies any wrongdoing, while Professor Mann has posted his account online – complete with footage from his glasses. The event has attracted a great deal of interest, with some calling it the world’s first cybernetic hate crime. Exactly what happened and why is unclear, and does not concern this post. Whether it was a cybernetic hate crime, rules-obsessed employees or a clash of personality and culture is fairly irrelevant. What is interesting is the ethics of documenting one’s environment, and how to deal with disparities in documentary power.
In an interesting recent essay in The Atlantic – ‘Is It Possible to Wage a Just Cyberwar?’ – Patrick Lin, Fritz Allhoff, and Neil Rowe argue that events such as the Stuxnet cyberattack on Iran suggest that the way we fight wars is changing, as are the rules that govern them. It is indeed easy to see how nations may be tempted to use cyberweapons to attack anonymously, from a distance, and without the usual financial and personnel costs of conventional warfare. (See also Mariarosaria Taddeo’s interesting recent post on this blog.)
Lin et al. raise what they claim to be certain special issues arising from cyberwarfare that are not covered by standard ethics. But I want to suggest that, though cyberwarfare certainly raises ethical issues, they aren’t as novel as Lin et al. claim.
The first topic they discuss is aggression, which according to standard just war theory is the only just cause for war. Since this is usually taken to imply danger to human life, it may be difficult to justify a military response to a cyberattack on, say, a country’s banking system. But even if it were true that aggression usually involved a danger to life, it seems to me clear that it need not. A not uncommon experience when cycling home in Oxford late at night is for groups of drunken youths to shout abuse at one from a passing car. That’s aggression, but there’s no danger to my life or even my well-being (since it doesn’t bother me in the slightest). As Lin et al. say, it may indeed be difficult to distinguish a cyberattack from, say, espionage. But such grey areas are nothing new in warfare, especially since many possibly aggressive actions are in potential violation of treaties which are themselves open to differing interpretations. I see no reason, then, to conclude that traditional military ethics would not see even an unsuccessful attempt by a state to install malicious software in its enemy’s computer system as constituting an act of war deserving an appropriate, possibly military, response.
The next issue raised by Lin et al. is discrimination. Cyberattacks are like biological viruses (as Lin et al. themselves point out) in so far as they are likely to affect non-combatants as well as combatants. But the very mention of biological viruses itself shows that there is nothing new here. Wells in besieged towns have been poisoned by attackers for nearly three millennia.
The authors then move to proportionality. Their understanding of this is non-standard: ‘the idea that it would be wrong to cause more harm in defending against an attack than the harm of the attack in the first place’. Usually, proportionality is understood to regulate the means chosen in the light of the value of the goal achieved. So if your cyberattack has caused me huge damage, but I could respond so as to punish you and prevent your ever launching an attack on me again by inflicting much less damage, it would be disproportionate if I were to inflict on you any damage beyond that point.
But let’s take their conception of proportionality. One issue is that some cyberattacks may ‘go viral’ in ways not intended by those who launch them. But there is nothing novel about unintended consequences, and those who launch such attacks do so in the full knowledge of the risks they are imposing. Likewise, those who unleash the dogs of war have often known full well that, once released, it may be impossible to restrain them, and those who have been harmed find it hard to work out exactly how significant the harm in question is or may turn out to be.
According to Lin et al., cyberwarfare poses special problems of attribution. Combatants should be identifiable, and often in cyberwarfare they will not be, which makes it harder to avoid harming non-combatants in any response. Again, there is nothing new here. Consider, for example, those many British service personnel who worked undercover in France during the Second World War. They certainly posed some risk to ordinary French citizens who might have been confused with them. The idea of Lin et al. that treaties should be drawn up requiring that cyberattacks carry a digital signature strikes me as about as plausible as the idea that these British service personnel should have been required to wear full uniform at all times.
The authors then ask whether cyberattacks, which require people perhaps to click on some malicious link, might count as ‘perfidy’ within international law. The examples given in the 1977 Protocol added to the 1949 Geneva Convention, under article 37, are the following:
It is prohibited to kill, injure or capture an adversary by resort to perfidy. Acts inviting the confidence of an adversary to lead him to believe that he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict, with intent to betray that confidence, shall constitute perfidy. The following acts are examples of perfidy:
(a) The feigning of an intent to negotiate under a flag of truce or of a surrender;
(b) The feigning of an incapacitation by wounds or sickness;
(c) The feigning of civilian, non-combatant status; and
(d) The feigning of protected status by the use of signs, emblems or uniforms of the United Nations or of neutral or other States not Parties to the conflict.
Consider for example an infected email sent by some state to its enemy’s military authorities apparently from the Red Cross. Could this not be prohibited under (d), above? I think not, since there is no feigning of protected status. An email apparently from some arms manufacturer, for example, would be equally deceptive.
Finally, Lin et al. focus on reversibility. As they point out, some cyberattacks might be reversible, with the use of, say, back-up files or decryption. But, of course, the effects of some existing kinds of attack are also reversible. Cities can be rebuilt, and orchards cleared of landmines and replanted.
The technology of cyberwarfare is of course new. But the ethical issues it raises have been discussed for hundreds of years.
Lin, Allhoff and Rowe’s article in yesterday’s The Atlantic could not have been more timely. In the previous week a new cyber weapon, Flame, was ‘discovered’; the New York Times reported the story behind one of the most famous cyber attacks, Stuxnet, confirming everyone’s suspicion that both the US and Israel had launched the attack; and the NATO Cooperative Cyber Defence Centre of Excellence inaugurated the Fourth International Conference on Cyber Conflict, hosting militaries, policy makers, politicians, experts in law and also an ethicist (myself) to discuss and share ideas, data and ‘unclassified’ material about cyber warfare. It turns out that cyber warfare really is today’s hot topic, or at least this week’s.
In the article, the authors stress the ethical implications of cyber warfare and the need for policies and regulations that would guarantee just cyber warfare, meaning warfare that respects the principles of the Just War Theory tradition, as they name them: aggression, discrimination, proportionality, attribution, treacherous deceit, and a long-lasting peace.
The problem is that applying Just War Theory to the case of cyber warfare is not straightforward at all. Somehow, cyber warfare slips through the net of Just War Theory and it proves to be quite difficult to regulate using old, traditional ethical principles.
Just War Theory is an ethical theory concerned with the preservation of and respect for human life and liberty; it is all about limiting casualties and physical damage. It is an ethical theory designed with classic warfare and its tangible targets in mind. In the grand scheme of Just War Theory there is no place for informational infrastructures, data and information. In other words, there is no concern for the targets of cyber warfare.
One may then wonder why we should bother with ethics at all when it comes to warfare that, in most circumstances, is waged using a piece of code against intangible objects, without directly causing casualties or physical damage.
For once, philosophers and ethicists are in the right place to provide a good answer, reminding themselves, lawmakers and policymakers that those intangible targets are things upon which the individuals and societies of the information age depend. Just consider how much of the GDP of several European countries rests mainly on such intangible goods (see Floridi’s article).
If this were not enough, as members of information societies we actually attribute moral value to informational infrastructures and to the data and information that they store. The importance we attach to online privacy and anonymity provides a good example in this respect. So if there is a war targeting precisely such information-related goods, it has to be waged fairly, because, despite being intangible, those goods are valuable.
The authors are right to point to Just War Theory as the ethical framework to be taken into consideration. As a matter of fact, Just War Theory offers a set of principles for just war that remain valid for any type of warfare, be it classic or cyber. It is also a fact that there is a hiatus between the ontology of the entities involved in traditional warfare and that of those involved in cyber warfare. This is because Just War Theory rests on an anthropocentric ontology: it is concerned with respect for human rights and disregards non-human entities as part of the moral discourse, and for this reason it does not provide sufficient means for addressing the case of cyber warfare. It is this hiatus that calls for philosophical and ethical work to fill it and provide new grounds for ensuring just cyber warfare.
Another issue is to convince both policy-makers and law-makers that the gap in current policies and regulations that everyone is so concerned about can only be bridged by taking into account the moral standing of informational objects. This may be just the next task to be accomplished.
Your politics are determined by your values, your opinions about the facts of the world, and, let’s be honest, just a little bit of tribalism. But the future is approaching, as it often does, and great transformations may be in the cards. Transformations that could dramatically affect the facts of the world. So whatever your values are, there is a chance that you may soon be arguing for the opposite of your usual policies. For instance, what if the future were necessarily…
Communist: one of the easiest ones to conceive of. Here it turns out that as barriers to trade are removed and transaction costs go to zero, the natural state of the economy is one of perpetual crashes. Celebrity and fame feed upon themselves: everyone demands the best, and the definition of the best is shared widely; niche markets don’t exist. Incomes follow such a sharp power law that only a few percent of the population have any wealth at all. Automation means that most people can’t earn enough to sustain themselves: their income drops below the costs of keeping them alive. Hence a large, bloated, over-regulating government becomes a matter of survival.
Ultra-capitalist: as barriers to trade are removed and transaction costs go to zero, the whole market segments into small niches. Everyone can find some buyer for their work, as new demands and new suppliers spring up immediately, connected by new technologies. Technology solves known externalities (like global warming), so there is little need for a centralised controlling authority. Change happens so rapidly that any governmental intervention is counterproductive: by the time the change is implemented, the benefits and costs the government was trying to influence are things of the past. The efficient market, the only thing fast enough to keep up with itself, flows like a river around any blundering governmental efforts, rendering them moot.