
Computer vision and emotional privacy

A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.

Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even with practice, the human participants' accuracy increased only to 55%.
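To make the pipeline concrete, here is a deliberately simplified sketch of the kind of pattern recognition involved: each clip is represented as a vector of facial-movement measurements and classified by similarity to labelled examples. The features, numbers, and nearest-centroid rule are illustrative assumptions, not the study's actual method.

```python
# Illustrative sketch (not the study's system): classify a clip as showing
# "real" or "faked" pain from facial-movement features (e.g. hypothetical
# per-clip action-unit intensities). All feature values are invented.

def centroid(rows):
    """Mean feature vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(sample, real_rows, faked_rows):
    """Nearest-centroid decision: return 'real' or 'faked'."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    c_real, c_faked = centroid(real_rows), centroid(faked_rows)
    return "real" if dist2(sample, c_real) < dist2(sample, c_faked) else "faked"

# Toy training data: three made-up features per clip.
real = [[0.9, 0.2, 0.7], [0.8, 0.3, 0.6]]
faked = [[0.4, 0.8, 0.2], [0.5, 0.7, 0.3]]

print(classify([0.85, 0.25, 0.65], real, faked))  # → real
```

A real system would extract facial action units frame by frame and train a proper classifier, but the shape of the task is the same: measured movements in, genuine-versus-faked label out.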

The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:

In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.

The possibility of using this technology to detect when someone’s emotional expressions are genuine or not raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues:

How to get positive surveillance – a few ideas

I recently published an article on the possible upsides of mass surveillance (somewhat in the vein of David Brin’s “transparent society”). To nobody’s great astonishment, it has attracted criticism! Some critics accuse me of ignoring the negative aspects of surveillance. But that was not the article’s point; a great deal has already been written on the negative aspects (Bruce Schneier and Cory Doctorow, for instance, have covered them extremely well). Others make the point that though these benefits may be conceivable in principle, I haven’t shown how they could be obtained in practice.

Again, that wasn’t the point of the article. But it’s a fair criticism – what can we do today to make better surveillance outcomes more likely? Since I didn’t have space to go through that in my article, here are a few suggestions:

Lying in the least untruthful manner: surveillance and trust

When I last blogged about the surveillance scandal in June, I argued that the core problem was the reasonable doubts we have about whether the oversight is functioning properly, and that the secrecy makes these doubts worse. Since then a long list of new revelations has arrived. To me, what matters is not so much whether foreign agencies get secretly paid to spy, doubts about internal procedures, or how deeply software can peer into human lives, but how these revelations give the lie to many earlier denials. In an essay well worth reading, Bruce Schneier points out that this pattern of deception severely undermines our trust in the authorities, and that this is an important social risk: democracies and market economies require us to trust politicians and companies to an appropriate extent.


How to deal with double-edged technology

By Brian D. Earp


BBC News reports that Harvard scientists have developed the world’s smallest flying robot. It’s about the size of a penny, and it moves faster than a human hand can swat. Of course, the inventors of this “diminutive flying vehicle” immediately lauded its potential for bringing good to the world:

1. “We could envision these robots being used for search-and-rescue operations to search for human survivors under collapsed buildings or [in] other hazardous environments.”

2. “They [could] be used for environmental monitoring, to be dispersed into a habitat to sense trace chemicals or other factors.”

3. They might even behave like many real insects and assist with the pollination of crops, “to function as the now-struggling honeybee populations do in supporting agriculture around the world.”

These all seem like pretty commendable uses of a new technology. Yet one can think of some “bad” uses too. The “search and rescue” version of this robot (for example) would presumably be fitted with a camera; and the prospect of a swarm of tiny, remote-controlled flying video recorders raises some obvious questions about spying and privacy. It also prompts one to wonder who will have access to these spy bugs (the U.S. Air Force has long been interested in building miniature espionage drones), and whether there will be effective regulatory strategies capable of tilting future usage more toward the search-and-rescue side of things, and away from the peep-and-record side.


On being private in public

We all know that we are under CCTV surveillance on many occasions each day, particularly when we are in public places. For the most part we accept that being – or potentially being – watched in public places is a reasonable price to pay for the security that 24-hour surveillance offers. However, we also have expectations about what is done with CCTV footage, when, and by whom. A recent discussion with a friend threw up some interesting questions about the nature of these expectations and their reasonableness.

My friend works in a bar where, unsurprisingly, there are several CCTV cameras. Everyone knows where these cameras are and that they are permanently in operation – there is not supposed to be any secrecy. Whilst the primary purpose of the cameras is to promote security, a member of the management team has begun to use them in a way that could be seen as ethically problematic: she logs on to view the footage in real time, remotely, at her home. In addition to watching the footage, the manager has also addressed points of staff discipline based on what she sees. Perhaps particularly troubling is that she has commented on the way a member of staff behaved when no one was around – when the member of staff thought that she was ‘alone’.

Enlightened surveillance?

New York City contemplates using aerial drones for surveillance purposes, while North Korea buys thousands of cameras to spy on its impoverished population. Britain has so many cameras that they have ceased to be newsworthy. The stories multiply – it is trivial to note that we are moving towards a surveillance society.

In an earlier post, I suggested surrendering on surveillance might be the least bad option – of all likely civil liberty encroachments, this seemed the least damaging and the hardest to resist. But that’s an overly defensive way of phrasing it – if ubiquitous surveillance and lack of privacy are the trends of the future, we shouldn’t just begrudgingly accept them, but demand that society gets the most possible out of them. In this post, I’m not going to suggest how to achieve enlightened surveillance (360-degree surveillance would be a small start, for instance), but just outline some of the positive good we could get from it. We all know the negatives; but what good could come from corporations, governments and neighbours being able to peer continually into your bedroom (and efficiently process that data)? In the ideal case, how could we make it work for us?

With great documentary power comes great responsibility

On July 1 professor Steve Mann from University of Toronto got into an altercation at a Paris McDonald’s, apparently because employees objected to his camera glasses. McDonald’s denies any wrongdoing, while professor Mann has posted his account online – complete with footage from his glasses. The event has caused a great deal of interest, with some calling it the world’s first cybernetic hate crime. Exactly what happened and why is unclear and does not concern this post. Whether it was a cybernetic hate crime, rules-obsessed employees or a clash of personality and culture is fairly irrelevant. What is interesting is the ethics of documenting one’s environment, and how to deal with disparities in documentary power.


The censor and the eavesdropper: the link between censorship and surveillance

Cory Doctorow makes a simple but important point in the Guardian: censorship today is inseparable from surveillance. In modern media, preventing people from seeing proscribed information requires systems that monitor their activity. To implement copyright-protecting censorship in the UK, systems must be in place to track what people seek to access and compare it against a blocklist, in whatever medium is used.
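The mechanism behind this point can be sketched in a few lines: any filter that denies requests against a blocklist must first inspect, and can therefore record, every request – including the ones it allows. The hostnames and blocklist here are invented for illustration.

```python
# Minimal sketch of censorship-implies-surveillance: the filter must look at
# every request before it can decide, so monitoring is built in by design.
# All domains below are made-up examples.

BLOCKLIST = {"piracy.example", "proscribed.example"}

def inspect_and_filter(url, log):
    """Log the requested host, then decide. Returns True if allowed."""
    host = url.split("//", 1)[-1].split("/", 1)[0]
    log.append(host)                 # inspection happens for every request
    return host not in BLOCKLIST     # the censorship decision comes after

log = []
print(inspect_and_filter("http://piracy.example/file", log))    # → False
print(inspect_and_filter("http://harmless.example/page", log))  # → True
print(log)  # → ['piracy.example', 'harmless.example']
```

Note that the log fills up with everyone's browsing, blocked or not – which is exactly the inseparability Doctorow is pointing at.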


Surrendering to big brother might be the least bad option

We’re probably approaching a point where blue-collar crime could be eradicated, one way or the other. But the way does matter: we could eradicate crime through ubiquitous surveillance, or through drug treatments/targeted lobotomies to remove the urges to criminality, or through effective early identification of potential criminals and preemptive measures against them, or through skilled large scale social manipulation of attitudes, or even through reducing all human interactions to tele-presence.

All these methods are unpleasant and undermine our current notions of democracy, but persistent fear of crime (despite the persistent reduction in actual crime) means that politicians will find it extraordinarily difficult not to implement one of these measures, were it to work. Humanity will likely find itself in a crime-free society; the question is how.

To my mind, ubiquitous surveillance is the least unpleasant of the possibilities – it’s non-discriminatory, doesn’t interfere with people’s inner motivations, doesn’t involve sinister manipulations of social norms or loss of human interactions. Assuming we can’t hold the line, that’s where I would want it to be broken.

But we might have more influence if we surrender early. Saying “we’ll allow surveillance, but fight you tooth and nail on the other methods” would make it much easier to ensure those other methods were not implemented. In exchange for cooperation, we could also push the surveillance state towards more positive implementations of the policy – perhaps achieving 360-degree transparency (we watch the rulers watching us) or treating recordings akin to electronic medical records, only allowing them to be viewed in specific circumstances.
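The medical-records analogy can be made concrete with a toy sketch: footage is released only for an approved purpose, and every access attempt – granted or refused – is itself recorded, so the watchers are watched. The purposes and requester names are hypothetical.

```python
# Toy sketch of records-style access control for surveillance footage:
# viewing is purpose-limited, and the audit log captures every attempt.
# ALLOWED_PURPOSES and the requesters below are invented examples.

ALLOWED_PURPOSES = {"criminal-investigation", "court-order"}

def request_footage(requester, purpose, audit_log):
    """Grant access only for an approved purpose; log the attempt either way."""
    granted = purpose in ALLOWED_PURPOSES
    audit_log.append((requester, purpose, granted))
    return granted

audit = []
print(request_footage("officer-a", "court-order", audit))  # → True
print(request_footage("manager-b", "curiosity", audit))    # → False
print(audit)
```

The interesting design property is the second print: the refused request still leaves a trace, which is what makes misuse of the archive visible after the fact.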

Cabs, censorship and cutting tools

The smith was working hard on making a new tool. A passer-by looked at his work and remarked that it looked sharp and dangerous. The smith nodded: it needed to be very sharp to do its work. The visitor wondered why there was no cross-guard to prevent the user’s hand from sliding onto the blade, and why the design made it easy to accidentally grip the blade instead of the handle. The smith explained that the tool was intended for people who said they knew how to use it well. “But what if they were overconfident, sold it to somebody else, or had a bad day? Surely some safety measures would be useful?” “No”, said the smith, “my customers did not ask for them. I could add them with slight effort, but why bother?”

Would we say the smith was doing his job in an ethical manner?

Here are two other pieces of news. Oxford City Council has decided to make it mandatory for taxicabs in Oxford to have CCTV cameras and microphones recording the conversations of passengers. As expected, many people are outraged. The stated reason is to improve public safety, although the data supporting this decision does not seem to be available. The surveillance footage will supposedly not be made available other than as evidence of crimes, and not stored for more than 28 days. Meanwhile in the US, there are hearings about the Stop Online Piracy Act (SOPA) and the PROTECT IP Act, laws intended to make it easier to block copyright infringement and counterfeiting. Besides concerns that critics and the industries most affected by the laws are not getting access to the hearings, a serious worry is that they would make it easy to censor websites and block business on fairly loose grounds, with few safeguards against false accusations (something that occurs regularly), little oversight, and few remedies for the affected website – plus the fact that a domestic US law would apply internationally due to the peculiarities of the Internet and US legal definitions.

