Information Ethics

Computer vision and emotional privacy

A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.

Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even with practice, human accuracy rose only to 55%.
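The study's actual system used automated facial action measurements and a trained classifier; as a purely hypothetical illustration of the general idea, a minimal pattern classifier over such movement features might look like the sketch below. All feature values, labels, and thresholds here are invented for illustration and do not come from the study.

```python
# Hypothetical sketch: classifying "real" vs "faked" pain from facial-movement
# features. The numbers are invented; the actual study used computer-vision
# measurements of facial actions and a trained pattern-recognition system.

from statistics import mean

# Each training sample: (feature vector, label). The two features might encode,
# say, mouth-opening dynamics and blink duration -- purely illustrative.
training = [
    ([0.9, 0.2], "real"), ([0.8, 0.3], "real"), ([0.7, 0.1], "real"),
    ([0.2, 0.8], "faked"), ([0.3, 0.9], "faked"), ([0.1, 0.7], "faked"),
]

def centroid(vectors):
    """Mean feature vector of a list of feature vectors."""
    return [mean(column) for column in zip(*vectors)]

# One centroid per class, computed from the training samples.
centroids = {
    label: centroid([x for x, y in training if y == label])
    for label in {"real", "faked"}
}

def classify(features):
    """Assign the label whose class centroid is nearest (squared Euclidean)."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(features, centroids[lbl])),
    )

print(classify([0.85, 0.15]))  # prints "real": the point is near the "real" centroid
```

A real system would of course extract features from video frames and use a far richer classifier; the sketch only shows the shape of the pipeline (features in, label out).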

The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:

In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.

The possibility of using this technology to detect whether someone’s emotional expressions are genuine raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues below.

Private Lives, Dying Wishes, and Technological Development

Recently in Portsmouth, a statue of Charles Dickens was unveiled. While not terribly notable in itself, the event is of some interest because it ignores the last wishes of the author it is meant to honour [1].

The problem, in my view, is that this is just one of many cases in which a public figure—authors appear especially vulnerable—has been denied the fulfilment of his or her express wishes regarding post-mortem handling of his or her estate or image.

How to get positive surveillance – a few ideas

I recently published an article on the possible upsides of mass surveillance (somewhat in the vein of David Brin’s “transparent society”). To nobody’s great astonishment, it has attracted criticism! Some critics accuse me of ignoring the negative aspects of surveillance. But that was not the article’s point; there is already a lot written on the negative aspects (Bruce Schneier and Cory Doctorow, for instance, have covered them extremely well). Others make the point that though these benefits may be conceivable in principle, I haven’t shown how they could be obtained in practice.

Again, that wasn’t the point of the article. But it’s a fair criticism – what can we do today to make better surveillance outcomes more likely? Since I didn’t have space to go through that in my article, here are a few suggestions.

Burma, Myanmar and the Myth of Objectivity

by David Edmonds – twitter @DavidEdmonds100

Since my last blog post, there has been a decision within the BBC “to start to move” to calling ‘Burma’ ‘Myanmar’.

Burma has always been an interest of mine because it was the big story in my first few weeks in journalism. Aung San Suu Kyi’s husband (now deceased) lived in Oxford, and when the demonstrations broke out in Burma in September 1988 I would deliver news wires to him: in those pre-internet days he had virtually no other means of finding out what was going on.

Censorship, pornography and divine swan-on-human action

The Prime Minister has declared that Internet service providers should block access to pornography by default, and that some “horrific” internet search terms should be “blacklisted” on the major search engines so that they return no results. The main motivation of the speech appears to be that access to pornography is “corroding childhood” by having children inadvertently see images or visit websites their parents do not want them to see. There is no shortage of critics, from anti-censorship groups and anti-surveillance groups to technology groups and people concerned with actual harm reduction. There are two central problems: defining pornography, and establishing its harms.

How do you want to die?

How do you want to die? Quickly, painlessly, peacefully lying in your own bed?

Most people say that. But people also seem to cling to their lives, even when that could mean a less peaceful end. When asked whether they would want physicians to perform certain life-prolonging interventions, such as CPR (cardiopulmonary resuscitation) or mechanical ventilation (a ‘breathing machine’), people say ‘yes’.

Interestingly, a study discussed in a Radiolab podcast from earlier this year reveals that, unlike lay people, physicians do not want the life-saving interventions they perform on their patients performed on themselves.

Caught in the genetic social network

Direct-to-consumer genetic testing is growing rapidly; 23andMe has hired Andy Page to help the company scale – especially since it aims to have one million members by the end of the year (since its launch, 23andMe has tested over 180,000 people around the world). While most ethics discussion about personal genomics has focused on the impact on individuals (is the risk of misunderstanding or bad news so serious that people need to be forced to go via medical gatekeepers or genetics counsellors? is there a risk of ‘genomization’ of everyday health? and so on), the sheer number of tested people and their ability to compare results can raise interesting new ethical problems, as a friend found out.


Why it matters whether you believe in free will

by Rebecca Roache

Follow Rebecca on Twitter

Scientific discoveries about how our behaviour is causally influenced often prompt the question of whether we have free will (for a general discussion, see here). This month, for example, the psychologist and criminologist Adrian Raine has been promoting his new book, The Anatomy of Violence, in which he argues that there are neuroscientific explanations of the behaviour of violent criminals. He argues that these explanations might be taken into account during sentencing, since they show that such criminals cannot control their violent behaviour to the same extent that (relatively) non-violent people can, and therefore that these criminals have reduced moral responsibility for their crimes. Our criminal justice system, along with our conceptions of praise and blame, and moral responsibility more generally, all presuppose that we have free will. If science can reveal it to be an illusion, some of the most fundamental features of our society are undermined.

The questions of exactly what free will is, and whether and how it can accommodate scientific discoveries about the causes of our behaviour, are primarily theoretical philosophical questions. Questions of theoretical philosophy—for example, those relating to metaphysics, epistemology, and philosophy of mind and language—are rarely viewed as highly relevant to people’s day-to-day lives (unlike questions of practical philosophy, such as those relating to ethics and morality). However, it turns out that the beliefs that people hold about free will are relevant. In the last five years, empirical evidence has linked reduced belief in free will with an increased willingness to cheat [1], increased aggression and reduced helpfulness [2], and reduced job performance [3]. Even the way that the brain prepares for action differs depending on whether or not one believes in free will [4]. If the results of these studies apply at a societal level, we should be very concerned about promoting the view that we do not have free will. But what can we do about it?

A reply to ‘Facebook: You are your ‘Likes”

Yesterday, Charles Foster discussed the recent study showing that Facebook ‘Likes’ can be plugged into an algorithm to predict things about people – things about their demographics, their habits and their personalities – that they didn’t explicitly disclose. Charles argued that, even though the individual ‘Likes’ were voluntarily published, to use an algorithm to generate further predictions would be unethical on the grounds that individuals have not consented to it and, consequently, that to go ahead and do it anyway is a violation of their privacy.

I wish to make three points contesting his strong conclusion, and instead offer a more qualified position: simply running the algorithm on publicly available ‘Likes’ data is not unethical, even if no consent has been given. Doing particular things based on the algorithm’s output, however, might be.

Facebook: You are your ‘Likes’

By Charles Foster

When you click ‘Like’ on Facebook, you’re giving away a lot more than you might think. Your ‘Likes’ can be assembled by an algorithm into a terrifyingly accurate portrait.

Here are the chances of an accurate prediction: single v. in a relationship: 67%; parents still together when you were 21: 60%; cigarette smoking: 73%; alcohol drinking: 70%; drug use: 65%; Caucasian v. African American: 95%; Christianity v. Islam: 82%; Democrat v. Republican: 85%; male homosexuality: 88%; female homosexuality: 75%; gender: 93%.
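The study behind these figures trained statistical models on large matrices of users’ Likes; as a purely hypothetical sketch of how a Likes-based predictor can work in principle, one can imagine a logistic model that sums per-Like evidence weights. The page names and weights below are invented for illustration and do not correspond to any real pages or to the study’s actual model.

```python
# Hypothetical sketch: predicting an attribute (here, "Democrat" vs
# "Republican") from a user's set of Liked pages. All page names and
# weights are invented; the real study trained models on large Like matrices.

from math import exp

# Invented evidence weights: positive values push the prediction toward
# "Democrat", negative values toward "Republican".
like_weights = {
    "page_a": 1.2,
    "page_b": -0.9,
    "page_c": 0.4,
    "page_d": -1.5,
}

def predict_democrat_probability(likes, bias=0.0):
    """Logistic model: sum the weights of Liked pages, squash to [0, 1]."""
    score = bias + sum(like_weights.get(page, 0.0) for page in likes)
    return 1 / (1 + exp(-score))

p = predict_democrat_probability({"page_a", "page_c"})  # roughly 0.83
```

The ethical worry in the post is visible even in this toy: none of the individual Likes states a political affiliation, yet their combination yields a confident prediction the user never disclosed.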