Scientific discoveries about how our behaviour is causally influenced often prompt the question of whether we have free will (for a general discussion, see here). This month, for example, the psychologist and criminologist Adrian Raine has been promoting his new book, The Anatomy of Violence, in which he argues that there are neuroscientific explanations of the behaviour of violent criminals. He argues that these explanations might be taken into account during sentencing, since they show that such criminals cannot control their violent behaviour to the same extent that (relatively) non-violent people can, and therefore that these criminals have reduced moral responsibility for their crimes. Our criminal justice system, along with our conceptions of praise and blame, and moral responsibility more generally, all presuppose that we have free will. If science can reveal it to be an illusion, some of the most fundamental features of our society are undermined.
The questions of exactly what free will is, and whether and how it can accommodate scientific discoveries about the causes of our behaviour, are primarily theoretical philosophical questions. Questions of theoretical philosophy—for example, those relating to metaphysics, epistemology, and philosophy of mind and language—are rarely viewed as highly relevant to people’s day-to-day lives (unlike questions of practical philosophy, such as those relating to ethics and morality). However, it turns out that the beliefs that people hold about free will are relevant. In the last five years, empirical evidence has linked reduced belief in free will with an increased willingness to cheat [1], increased aggression and reduced helpfulness [2], and reduced job performance [3]. Even the way that the brain prepares for action differs depending on whether or not one believes in free will [4]. If the results of these studies apply at a societal level, we should be very concerned about promoting the view that we do not have free will. But what can we do about it?
Yesterday, Charles Foster discussed the recent study showing that Facebook ‘Likes’ can be plugged into an algorithm to predict things about people – things about their demographics, their habits and their personalities – that they didn’t explicitly disclose. Charles argued that, even though the individual ‘Likes’ were voluntarily published, to use an algorithm to generate further predictions would be unethical on the grounds that individuals have not consented to it and, consequently, that to go ahead and do it anyway is a violation of their privacy.
I wish to make three points contesting his strong conclusion, instead offering a more qualified position: simply running the algorithm on publicly available ‘Likes’ data is not unethical, even if no consent has been given. Doing particular things based on the output of the algorithm, however, might be.
By Charles Foster
When you click ‘Like’ on Facebook, you’re giving away a lot more than you might think. Your ‘Likes’ can be assembled by an algorithm into a terrifyingly accurate portrait.
Here are the chances of an accurate prediction:
Single vs in a relationship: 67%
Parents still together when you were 21: 60%
Cigarette smoking: 73%
Alcohol drinking: 70%
Drug use: 65%
Caucasian vs African American: 95%
Christianity vs Islam: 82%
Democrat vs Republican: 85%
Male homosexuality: 88%
Female homosexuality: 75%
Gender: 93%
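To make the idea concrete, here is a minimal, hypothetical sketch of how such a prediction works: binary ‘Like’ indicators become features for an ordinary classifier. This is not the study’s actual model (which was far more sophisticated), and all names and data below are invented for illustration.

```python
# Illustrative only: predicting a binary trait from binary 'Like' indicators
# with a simple logistic regression trained by gradient descent. The data is
# made up; the real study used far richer models and tens of thousands of Likes.
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit logistic regression weights by batch gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of trait
            err = p - yi
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Each row: did this person 'Like' pages A, B, C? y: does the person have the trait?
X = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

The ethically salient point is that nothing in this pipeline requires the person ever to have disclosed the trait being predicted: the model learns it from correlations in other people’s data.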
Sabrina Stewart is a student at Dartmouth College who is visiting the Uehiro Centre this term.
Newspaper health sections yield many headlines and subsequent articles that do not accurately reflect the research publication that is being reported. One article, “Boozing after a heart attack could help you live longer, research reveals”, discusses the finding that drinking after a heart attack is beneficial. The headline is at best misleading, and at worst deceptive: the article fails to report the specific frequency of consumption required to derive the stated benefits, the fact that the benefits would depend on the severity of the myocardial infarction, and that any benefit would be lost by intermittent binge drinking. The publication was significant as it was a large-scale study that complemented previous findings, and could therefore be expected to have an effect on people’s health decisions.
This article was taken from the Metro, a free newspaper distributed in London and the South-East of England and targeted at commuters. The self-reported estimated readership is just under two million people. If this figure is accurate, the Metro has the third largest newspaper audience in the United Kingdom, after the Sun and the Daily Mail. This capacity to influence such a significant audience comes with responsibility.
There are various Codes of Practice governing the actions of researchers and doctors to ensure unbiased and truthful information is provided to patients and clinical trial participants in order to obtain informed consent. Why is health reporting not subject to the same strict regulation when it carries similar implications for shaping people’s choices regarding their well-being?
In an article in The Atlantic, Andrew Hessel, Marc Goodman and Steven Kotler sketch a not-too-far future in which the combination of cheap bioengineering, synthetic biology and crowdsourced problem-solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as someone existed who had a reason to use it and the resources to pay for it. The Secret Service appears to be aware of the problem and does its best to wipe away traces of the President, but it is hard to imagine this being perfect, feasible for old DNA left behind years ago, or practised by every potential target. In fact, it looks like the US government is keen on collecting not just biometric data, but DNA from foreign potentates. They might be friends right now, but who knows in ten years…
The gene for internet addiction has been found! Well, actually it turns out that 27% of internet addicts have the genetic variant, compared to 17% of non-addicts. The Encode project has overturned the theory of ‘junk DNA’! Well, actually we already knew that much of that DNA was doing things long before, and the definition of ‘function’ used is iffy. Alzheimer’s disease is a new ‘type 3 diabetes’! Except that no diabetes researchers believe it. Sensationalist reporting of science is everywhere, distorting public understanding of what science has discovered and of its relative importance. If the media ought to give a full picture of the situation, they seem to be failing.
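A quick back-of-the-envelope calculation, using only the two figures quoted above, shows why “the gene for internet addiction” oversells the finding:

```python
# The two figures from the report cited above: 27% of 'internet addicts'
# carry the variant, versus 17% of non-addicts.
addict_rate, control_rate = 0.27, 0.17

# Odds ratio: a standard measure of association for this kind of comparison.
odds_ratio = (addict_rate / (1 - addict_rate)) / (control_rate / (1 - control_rate))

# The result is roughly 1.8 -- a modest association, far from determinism:
# most addicts (73%) do not carry the variant at all.
```

An odds ratio of about 1.8 is the sort of weak statistical association that headlines routinely inflate into “the gene for X”.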
But before we start blaming science journalists, maybe we should look sharply at the scientists. A new study shows that 47% of press releases about controlled trials contained spin, emphasizing the beneficial effect of the experimental treatment. This carried over to subsequent news stories, often copying the original spin. Maybe we could try blaming university press officers, but the study found spin in 41% of the abstracts of the papers too, typically overestimating the benefit of the intervention or downplaying risks. The only way of actually finding out the real story is to read the content of the paper, something requiring a bit of skill – and quite often paying for access.
Who is to blame, and what can we do about it?
“Legitimate rape,” moral consistency, and degrees of sexual harm
Should abortions be allowed in the case of rape? Republican Todd Akin—running for the U.S. Senate from the state of Missouri—thinks not. His reasoning is as follows:
From what I understand from doctors, [pregnancy resulting from rape is] really rare. If it’s a legitimate rape, the female body has ways to try to shut that whole thing down. But let’s assume that maybe that didn’t work or something. I think there should be some punishment. But the punishment ought to be of the rapist, and not attacking the child.
There appears to be no scientific basis for the claim that the trauma of forced intercourse can interrupt ovulation or in any other way prevent a pregnancy; indeed, pregnancy is just as likely after rape as after consensual sex, according to the evidence I have seen. This news article sums up the relevant data – though please note that one of my readers [see comments] takes issue with the standard interpretation of the most frequently-cited studies.
Let’s start, for now, then, with a bit of data that is not in question: thousands of pregnancies per year, in the U.S. alone, ensue from cases of reported rape or incest – either through the caveat of Akin’s theory that “maybe [the body's defenses] didn’t work or something” or through the medically orthodox explanation that the body has no such defense. Assuming that falsely reporting rape is relatively rare, as seems to be the case, and acknowledging that many rapes are never reported in the first place, we should be able to agree that pregnancies resulting from rape are a life-changing reality for thousands of women on an annual basis. By “rape” I mean any penetrative act done without clear consent; and here I’m calling attention to the sub-set of such acts that result in conception. I won’t say much about the term “legitimate” — which I find troubling in a hundred ways — simply because other writers have gone to town on it, and I want to say something new.
Now, given everything I’ve just said, what could be going on with Todd Akin’s moral reasoning for him to casually downplay the relevance of rape and incest to the abortion debate while maintaining, as he does, that there should be no exceptions to anti-abortionism even in those cases? Psychologist Brittany Liu uses the notion of “moral coherence” to provide an explanation:
Alastair Croll has written a thought-provoking article, Big data is our generation’s civil rights issue, and we don’t know it. His basic argument is that the new economics of collecting and analyzing data has led to a change in how it is used. Once it was expensive to collect, so only data needed to answer particular questions was collected. Today it is cheap to collect, so it can be collected first and then analyzed – “we collect first and ask questions later”. This means that the questions asked can be very different from the questions the data seem to be about, and in many cases they can be problematic. Race, sexual orientation, health or political views – important for civil rights – can be inferred from apparently innocuous information provided for other purposes – names, soundtracks, word usage, purchases, and search queries.
The problem, as he notes, is that in order to handle this new situation we need to link what the data is with how it can be used. And this cannot be done just technologically; it requires societal norms and regulations. What kinds of ethics do we need to safeguard civil rights in a world of big data?
…governments need to balance reliance on data with checks and balances about how this reliance erodes privacy and creates civil and moral issues we haven’t thought through. It’s something that most of the electorate isn’t thinking about, and yet it affects every purchase they make.
This should be fun.
On July 1 professor Steve Mann from University of Toronto got into an altercation at a Paris McDonald’s, apparently because employees objected to his camera glasses. McDonald’s denies any wrongdoing, while professor Mann has posted his account online – complete with footage from his glasses. The event has caused a great deal of interest, with some calling it the world’s first cybernetic hate crime. Exactly what happened and why is unclear and does not concern this post. Whether it was a cybernetic hate crime, rules-obsessed employees or a clash of personality and culture is fairly irrelevant. What is interesting is the ethics of documenting one’s environment, and how to deal with disparities in documentary power.
In an interesting recent essay in the Atlantic – ‘Is it Possible to Wage a Just Cyberwar?’ – Patrick Lin, Fritz Allhoff, and Neil Rowe argue that events such as the Stuxnet cyberattack on Iran suggest that the way we fight wars is changing, as well as the rules that govern them. It is indeed easy to see how nations may be tempted to use cyberweapons to attack anonymously, from a distance, and without the usual financial and personnel costs of conventional warfare. (See also Mariarosaria Taddeo’s interesting recent post on this blog.)
Lin et al. raise what they claim to be certain special issues arising from cyberwarfare that are not covered by standard ethics. But I want to suggest that, though cyberwarfare certainly raises ethical issues, they aren’t as novel as Lin et al. claim.
The first topic they discuss is aggression, which according to standard just war theory is the only just cause for war. Since this is usually taken to imply danger to human life, it may be difficult to justify a military response to a cyberattack on, say, a country’s banking system. But even if it were true that aggression usually involved a danger to life, it seems to me clear that it need not. A not uncommon experience when cycling home in Oxford late at night is for groups of drunken youths to shout abuse at one from a passing car. That’s aggression, but there’s no danger to my life or even my well-being (since it doesn’t bother me in the slightest). As Lin et al. say, it may indeed be difficult to distinguish a cyberattack from, say, espionage. But such grey areas are nothing new in warfare, especially since many possibly aggressive actions are in potential violation of treaties which are themselves open to differing interpretations. I see no reason, then, to conclude that traditional military ethics would not see even an unsuccessful attempt by a state to install malicious software in its enemy’s computer system as constituting an act of war deserving an appropriate, possibly military, response.
The next issue raised by Lin et al. is discrimination. Cyberattacks are like biological viruses (as Lin et al. themselves point out) in so far as they are likely to affect non-combatants as well as combatants. But the very mention of biological viruses itself shows that there is nothing new here. Wells in besieged towns have been poisoned by attackers for nearly three millennia.
The authors then move to proportionality. Their understanding of this is non-standard: ‘the idea that it would be wrong to cause more harm in defending against an attack than the harm of the attack in the first place’. Usually, proportionality is understood to regulate the means chosen in the light of the value of the goal achieved. So if your cyberattack has caused me huge damage, but I could respond so as to punish you and prevent your ever launching an attack on me again by inflicting much less damage, it would be disproportionate if I were to inflict on you any damage beyond that point.
But let’s take their conception of proportionality. One issue is that some cyberattacks may ‘go viral’ in ways not intended by those who launch them. But there is nothing novel about unintended consequences, and those who launch such attacks do so in full knowledge of the risks they are imposing. Likewise, it has often been the case that those who unleash the dogs of war know full well that, once released, it may well be impossible to restrain them, and those who have been harmed find it hard to work out exactly how significant the harm in question is or may turn out to be.
According to Lin et al., cyberwarfare poses special problems of attribution. Combatants should be identifiable, and often in cyberwarfare they will not be, which makes it harder to avoid harming non-combatants in any response. Again, there is nothing new here. Consider, for example, those many British service personnel who worked undercover in France during the Second World War. They certainly posed some risk to ordinary French citizens who might have been confused with them. The idea of Lin et al. that treaties should be drawn up requiring that cyberattacks carry a digital signature strikes me as about as plausible as the idea that these British service personnel should have been required to wear full uniform at all times.
The authors then ask whether cyberattacks, which require people perhaps to click on some malicious link, might count as ‘perfidy’ within international law. The examples given in the 1977 Protocol added to the 1949 Geneva Convention, under article 37, are the following:
It is prohibited to kill, injure or capture an adversary by resort to perfidy. Acts inviting the confidence of an adversary to lead him to believe that he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict, with intent to betray that confidence, shall constitute perfidy. The following acts are examples of perfidy:
(a) The feigning of an intent to negotiate under a flag of truce or of a surrender;
(b) The feigning of an incapacitation by wounds or sickness;
(c) The feigning of civilian, non-combatant status; and
(d) The feigning of protected status by the use of signs, emblems or uniforms of the United Nations or of neutral or other States not Parties to the conflict.
Consider for example an infected email sent by some state to its enemy’s military authorities apparently from the Red Cross. Could this not be prohibited under (d), above? I think not, since there is no feigning of protected status. An email apparently from some arms manufacturer, for example, would be equally deceptive.
Finally, Lin et al. focus on reversibility. As they point out, some cyberattacks might be reversible, with the use of, say, back-up files or decryption. But, of course, the effects of some conventional attacks are also reversible. Cities can be rebuilt, and orchards cleared of landmines and replanted.
The technology of cyberwarfare is of course new. But the ethical issues it raises have been discussed for hundreds of years.