
Web/Tech

Computer vision and emotional privacy

A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.

Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even when the human participants practiced, their accuracy only increased to 55%.
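Schematically, the approach is: extract per-frame facial movement signals (such as action unit intensities), summarise each clip into a fixed-length feature vector, and train a classifier on labelled clips. The sketch below is a minimal illustration of that pipeline on invented data, not the authors' actual system; the feature choices, data shapes and classifier are all assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def clip_features(au_intensities):
    """Summarise a (frames x action_units) intensity array into one fixed-length vector."""
    return np.concatenate([
        au_intensities.mean(axis=0),                           # average activation per AU
        au_intensities.std(axis=0),                            # variability per AU
        np.abs(np.diff(au_intensities, axis=0)).mean(axis=0),  # frame-to-frame dynamics
    ])

# Invented stand-in data: 60 clips, 90 frames each, 20 facial action units.
clips = [rng.random((90, 20)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)   # 1 = genuine pain, 0 = faked

X = np.stack([clip_features(c) for c in clips])
print(cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())
```

With real facial-movement measurements in place of the random arrays, the cross-validated score here is the analogue of the 85% figure reported above.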

The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:

In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.

The possibility of using this technology to detect whether someone’s emotional expressions are genuine raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues.

Innovation’s low-hanging fruits: on the demand or supply sides?

Cross-posted at Less Wrong.

This is an addendum to a previous post, which argued that we may be underestimating the impact of innovation because we have so much of it. I noted that we underestimated the innovative aspect of the CD because many other technologies partially overlapped with it, such as television, radio, cinema, the iPod, the Walkman, the landline phone, the mobile phone, the laptop, the VCR and TiVo. Without these overlapping technologies, we would see the CD’s true potential and rate it more highly as an innovation. Many different technologies could substitute for each other.

But this argument brings out a salient point: if so many innovations overlap or potentially overlap, then there must be many more innovations than purposes for innovations. Tyler Cowen made the interesting point that the internet isn’t as innovative as the flushing toilet (or indeed the television). He certainly has a point: imagine society without toilets or without YouTube; which would be more tolerable (or more survivable)?

The innovation tree, overshadowed in the innovation forest

Cross-Posted at Less Wrong.

Many have pronounced the era of innovation dead, peace be to its soul. From Tyler Cowen’s decree that we’ve picked all the low-hanging fruit of innovation, through Robert Gordon’s idea that further growth in innovation is threatened by “six headwinds”, to Garry Kasparov’s and Peter Thiel’s theory that risk aversion has stifled innovation, there is no lack of predictions about the end of discovery.

I don’t propose to address the issue with something as practical and useful as actual data. Instead, staying true to my philosophical environment, I propose a thought experiment that may shed some light. The core idea is that we might be underestimating the impact of innovation because we have so much of it.

Imagine that technological innovation had for some reason stopped around 1945 – with one exception: the CD and CD player/burner. Fast forward a few decades, and visualise society. We can imagine a society completely dominated by the CD. We’d have all the usual uses for the CD – music, songs and similar – of course, but also much more.

Beyond 23andMe’s Shutdown: The Role of the FDA in the Future of Direct-to-Consumer Genetic Testing

Kyle Edwards, Uehiro Centre for Practical Ethics and The Ethox Centre, University of Oxford

Caroline Huang, The Ethox Centre, University of Oxford

An article based on this blog post has now been published in the May–June 2014 Hastings Center Report: http://onlinelibrary.wiley.com/doi/10.1002/hast.310/full. Please check out our more developed thoughts on this topic there!

Secret snakes biting their own tails: secrecy and surveillance

To most people interested in surveillance, the latest revelations that the US government has been doing widespread monitoring of its citizens (and the rest of the world), possibly through back-doors into major company services, are merely a chance to smugly say “I told you so”. The technology and legal trends have been clear for a long time. That intelligence agencies share information (allowing them to get around pesky limits on looking at their own citizens) is another yawn.

That does not mean they are unimportant: we are at an important choice-point in how to handle mass surveillance. But the battle is not security versus freedom; it is secrecy versus openness.


Strict-ish liability? An experiment in the law as algorithm

Some researchers in the US recently conducted an ‘experiment in the law as algorithm’. (One of the researchers involved with the project was interviewed by Ars Technica, here.) At first glance, this seems like quite a simple undertaking for someone with knowledge of a particular law and mathematical proficiency: laws are clearly defined rules, which can be broken in clearly defined ways. This is most true for strict liability offences, which require no proof of a mental element of the offence (the mens rea). An individual can commit a strict liability offence even if she had no knowledge that her act was criminal and no intention to commit the crime. All that is required under strict liability statutes is that the act itself (the actus reus) is voluntary. Essentially: if you did it, you’re liable – it doesn’t matter why or how. So, for strict liability offences such as speeding, it would seem straightforward enough to create an algorithm that could compare actual driving speed with the legal speed limit and adjudicate liability accordingly.

This possibility of law as algorithm is what the US researchers aimed to test with their experiment. They imagined the future possibility of automated law enforcement, especially for simple laws like those governing driving. To conduct their experiment, the researchers assigned a group of 52 programmers the task of automating the enforcement of driving speed limits. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over an hour-long commute. The programmers, working independently, each wrote a program that computed the number of speed limit violations and issued mock traffic tickets.
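For a sense of what the assigned task involves, here is a minimal sketch of such a program, assuming a simple log of timestamped speed samples with a known limit. The data format, the tolerance parameter and the ‘one ticket per contiguous violation’ rule are my assumptions for illustration, not the study’s specification.

```python
from dataclasses import dataclass

@dataclass
class SpeedSample:
    t: float        # seconds into the commute
    speed: float    # measured vehicle speed, mph
    limit: float    # posted speed limit at that point, mph

def mock_tickets(samples, tolerance=0.0):
    """Issue one mock ticket each time the vehicle newly exceeds the limit
    (plus an optional tolerance): a literal reading of strict liability."""
    tickets = []
    in_violation = False
    for s in samples:
        over = s.speed > s.limit + tolerance
        if over and not in_violation:
            tickets.append(f"t={s.t:.0f}s: {s.speed:.0f} mph in a {s.limit:.0f} mph zone")
        in_violation = over
    return tickets

# A tiny invented speed log standing in for the hour-long commute data.
log = [SpeedSample(0, 28, 30), SpeedSample(5, 33, 30), SpeedSample(10, 31, 30),
       SpeedSample(15, 29, 30), SpeedSample(20, 36, 30)]
print(mock_tickets(log))         # strict reading: two violations
print(mock_tickets(log, 5.0))    # with a 5 mph tolerance: one violation
```

Even in this toy version, the tolerance and the rule for when one violation ends and another begins are discretionary choices the programmer must encode – presumably exactly where 52 independent implementations start to diverge.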

A reply to ‘Facebook: You are your ‘Likes”

Yesterday, Charles Foster discussed the recent study showing that Facebook ‘Likes’ can be plugged into an algorithm to predict things about people – things about their demographics, their habits and their personalities – that they didn’t explicitly disclose. Charles argued that, even though the individual ‘Likes’ were voluntarily published, to use an algorithm to generate further predictions would be unethical on the grounds that individuals have not consented to it and, consequently, that to go ahead and do it anyway is a violation of their privacy.

I wish to make three points contesting his strong conclusion, instead offering a more qualified position: simply running the algorithm on publicly available ‘Likes’ data is not unethical, even if no consent has been given. Doing particular things based on the output of the algorithm, however, might be.

Facebook: You are your ‘Likes’

By Charles Foster

When you click ‘Like’ on Facebook, you’re giving away a lot more than you might think. Your ‘Likes’ can be assembled by an algorithm into a terrifyingly accurate portrait.

Here are the chances of an accurate prediction:

- Single vs. in a relationship: 67%
- Parents still together when you were 21: 60%
- Cigarette smoking: 73%
- Alcohol drinking: 70%
- Drug-using: 65%
- Caucasian vs. African American: 95%
- Christianity vs. Islam: 82%
- Democrat vs. Republican: 85%
- Male homosexuality: 88%
- Female homosexuality: 75%
- Gender: 93%
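For readers wondering what kind of algorithm produces numbers like these: in outline it is standard supervised learning over a binary users × Likes matrix. The sketch below, on synthetic data, assumes the commonly reported recipe of dimensionality reduction followed by logistic regression; the component count, data shapes and trait are illustrative assumptions, not the study’s exact pipeline.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in: 1,000 users x 2,000 possible Likes, ~1% of them clicked.
likes = (rng.random((1000, 2000)) < 0.01).astype(float)
trait = rng.integers(0, 2, size=1000)   # e.g. smoker vs. non-smoker (invented)

model = make_pipeline(
    TruncatedSVD(n_components=100, random_state=0),  # compress Likes to 100 dimensions
    LogisticRegression(max_iter=1000),               # predict the trait from those dimensions
)
print(cross_val_score(model, likes, trait, cv=5, scoring="roc_auc").mean())
```

On random data this scores around 0.5, as it should; percentages like those above come from fitting the same kind of model to real ‘Likes’.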

Personalised weapons of mass destruction: governments and strategic emerging technologies

Andrew Hessel, Marc Goodman and Steven Kotler sketch, in an article in The Atlantic, a not-too-far future when the combination of cheap bioengineering, synthetic biology and crowdsourced problem solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as someone existed who had a reason to use it and the resources to pay for it. The Secret Service appears to be aware of the problem and does its best to wipe away traces of the President, but it is hard to imagine this being perfect, working for old DNA left behind years ago, or being applied by all potential targets. In fact, it looks like the US government is keen on collecting not just biometric data, but DNA from foreign potentates. They might be friends right now, but who knows in ten years…


Spin city: why improving collective epistemology matters

The gene for internet addiction has been found! Well, actually it turns out that 27% of internet addicts have the genetic variant, compared to 17% of non-addicts. The Encode project has overturned the theory of ‘junk DNA‘! Well, actually we already knew that that DNA was doing things long before, and the definition of ‘function’ used is iffy. Alzheimer’s disease is a new ‘type 3 diabetes‘! Except that no diabetes researchers believe it. Sensationalist reporting of science is everywhere, distorting public understanding of what science has discovered and its relative importance. If the media ought to give a full picture of the situation, they seem to be failing.

But before we start blaming science journalists, maybe we should look sharply at the scientists. A new study shows that 47% of press releases about controlled trials contained spin, emphasizing the beneficial effect of the experimental treatment. This carried over to subsequent news stories, often copying the original spin. Maybe we could try blaming university press officers, but the study found spin in 41% of the abstracts of the papers too, typically overestimating the benefit of the intervention or downplaying risks. The only way of actually finding out the real story is to read the content of the paper, something requiring a bit of skill – and quite often paying for access.

Who to blame, and what to do about it?
