A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.
Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even with practice, human accuracy increased only to 55%.
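The general shape of such a system – extract feature vectors from facial movements, then classify them – can be illustrated with a deliberately simple sketch. Everything below is hypothetical: the two-number "feature vectors" stand in for measured facial action intensities, the synthetic data is invented, and the nearest-centroid classifier is far cruder than the pattern recognition the study's authors used.

```python
import random

def nearest_centroid_train(samples, labels):
    """Compute one mean feature vector (centroid) per class."""
    centroids = {}
    for label in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, sample):
    """Assign the class whose centroid is nearest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], sample))

# Synthetic stand-in data: we simply assume genuine-pain clips cluster
# around one region of feature space and faked clips around another.
random.seed(0)
genuine = [[random.gauss(0.8, 0.1), random.gauss(0.3, 0.1)] for _ in range(50)]
faked   = [[random.gauss(0.4, 0.1), random.gauss(0.7, 0.1)] for _ in range(50)]
samples = genuine + faked
labels  = ["genuine"] * 50 + ["faked"] * 50

model = nearest_centroid_train(samples, labels)
print(nearest_centroid_predict(model, [0.85, 0.25]))  # a point near the genuine cluster
```

The point of the sketch is only that the machine's judgement is a distance computation over measured movements, with none of the social inferences a human observer brings to the task.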
The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:
In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.
The possibility of using this technology to detect whether someone’s emotional expressions are genuine raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues:
When I last blogged about the surveillance scandal in June, I argued that the core problem was the reasonable doubts we have about whether the oversight is functioning properly, and that the secrecy makes these doubts worse. Since then a long list of new revelations has arrived. To me, what matters is not so much whether foreign agencies get secretly paid to spy, doubts about internal procedures, or how deeply software can peer into human lives, but how these revelations give the lie to many earlier denials. In an essay well worth reading, Bruce Schneier points out that this pattern of deception severely undermines our trust in the authorities, and that this is an important social risk: democracies and market economies require us to trust politicians and companies to an appropriate extent.
We all know that we are under CCTV surveillance on many occasions each day, particularly when we are in public places. For the most part we accept that being – or potentially being – watched in public places is a reasonable price to pay for the security that 24-hour surveillance offers. However, we also have expectations about what is done with CCTV footage, when, and by whom. A recent discussion with a friend threw up some interesting questions about the nature of these expectations and their reasonableness.
My friend works in a bar where, unsurprisingly, there are several CCTV cameras. Everyone knows where these cameras are and that they are permanently in operation – there is not supposed to be any secrecy. Whilst the primary purpose of the cameras is to promote security, a member of the management team has begun to use them in a way that could be seen as ethically problematic: she logs on to view the footage in real-time, remotely, at her home. In addition to watching the footage, the manager has also addressed points of staff discipline based on what she sees. Perhaps particularly troubling is that she has commented on the way a member of staff behaved when no one was around – when the member of staff thought that she was ‘alone’.
New York City contemplates using aerial drones for surveillance purposes, while North Korea buys thousands of cameras to spy on its impoverished population. Britain has so many cameras that they have ceased to be newsworthy. The stories multiply – it is trivial to note that we are moving towards a surveillance society.
In an earlier post, I suggested surrendering on surveillance might be the least bad option – of all likely civil liberty encroachments, this seemed the least damaging and the hardest to resist. But that’s an overly defensive way of phrasing it – if ubiquitous surveillance and lack of privacy are the trends of the future, we shouldn’t just begrudgingly accept them, but demand that society gets the most possible out of them. In this post, I’m not going to suggest how to achieve enlightened surveillance (360-degree surveillance would be a small start, for instance), but just outline some of the positive good we could get from it. We all know the negatives; but what good could come from corporations, governments and neighbours being able to peer continually into your bedroom (and efficiently process that data)? In the ideal case, how could we make it work for us?
On July 1, Professor Steve Mann of the University of Toronto got into an altercation at a Paris McDonald’s, apparently because employees objected to his camera glasses. McDonald’s denies any wrongdoing, while Professor Mann has posted his account online – complete with footage from his glasses. The event has caused a great deal of interest, with some calling it the world’s first cybernetic hate crime. Exactly what happened and why is unclear and does not concern this post. Whether it was a cybernetic hate crime, rules-obsessed employees or a clash of personality and culture is fairly irrelevant. What is interesting is the ethics of documenting one’s environment, and how to deal with disparities in documentary power.
Cory Doctorow makes a simple but important point in the Guardian: censorship today is inseparable from surveillance. In modern media, preventing people from seeing proscribed information requires systems that monitor their activity. To implement copyright-protecting censorship in the UK, systems must be in place to track what people seek to access and compare it against a denial list, whatever medium is used.
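The coupling Doctorow describes can be made concrete: any filter must first observe every request before it can decide to block one, so the monitoring comes for free. A minimal sketch, with entirely hypothetical domain names standing in for a real denial list:

```python
from urllib.parse import urlparse

# Hypothetical denial list; the names are illustrative, not real sites.
DENIAL_LIST = {"blocked.example", "piracy.example"}

access_log = []  # the surveillance side effect: every lookup gets recorded

def filter_request(url):
    """Log the requested host, then allow or deny it against the denial list."""
    host = urlparse(url).hostname
    access_log.append(host)  # the filter cannot block without first seeing
    return "DENY" if host in DENIAL_LIST else "ALLOW"

print(filter_request("http://blocked.example/page"))  # DENY
print(filter_request("http://news.example/story"))    # ALLOW
print(access_log)  # both hosts were recorded, allowed or not
```

Note that `access_log` grows regardless of the verdict: the allowed request is watched just as closely as the denied one, which is precisely the point.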
The smith was working hard on making a new tool. A passer-by looked at his work and remarked that it looked sharp and dangerous. The smith nodded: it needed to be very sharp to do its work. The visitor wondered why there was no cross-guard to prevent the user’s hand from sliding onto the blade, and why the design made it easy to accidentally grip the blade instead of the handle. The smith explained that the tool was intended for people who said they knew how to use it well. “But what if they were overconfident, sold it to somebody else, or had a bad day? Surely some safety measures would be useful?” “No”, said the smith, “my customers did not ask for them. I could make them with a slight effort, but why bother?”
Would we say the smith was doing his job in an ethical manner?
Here are two other pieces of news: Oxford City Council has decided to make it mandatory for taxicabs in Oxford to have CCTV cameras and microphones recording conversations of the passengers. As expected, many people are outraged. The stated reason is to improve public safety, although the data supporting this decision doesn’t seem to be available. The surveillance footage will supposedly not be made available other than as evidence for crimes, and not stored for more than 28 days. Meanwhile in the US, there are hearings about the Stop Online Piracy Act (SOPA) and the PROTECT IP Act, laws intended to make it easier to block copyright infringement and counterfeiting. Beyond the worry that critics and the industries most affected by the laws are not getting access to the hearings, a more serious set of concerns is that they would make it easy to censor websites and block business on fairly loose grounds, with few safeguards against false accusations (something that already occurs regularly), little oversight, few remedies for the affected website, plus the fact that a domestic US law would apply internationally due to the peculiarities of the Internet and US legal definitions.