Cross Post: Privacy is a Collective Concern: When We Tell Companies About Ourselves, We Give Away Details About Others, Too.
BY CARISSA VÉLIZ
This article was originally published in New Statesman America

People often explain their attitude to data privacy in personal terms. Those who don’t care much about privacy might say that they have nothing to hide. Those who do worry about it might say that keeping their personal data safe protects them from being harmed by hackers or unscrupulous companies. Both positions assume that caring about and protecting one’s privacy is a personal matter. This is a common misunderstanding.
It’s easy to assume that because some data is “personal”, protecting it is a private matter. But privacy is both a personal and a collective affair, because data is rarely used on an individual basis. Continue reading
Cross Post: Think Twice Before Sending Facebook Your Nude Photos: The Shadow Brokers’ Disclosures Prove Privacy and Security Are Not a Zero-Sum Game
Written by Dr Carissa Véliz
This article first appeared in El Pais
Time and again, we have been sold the story that we need to give up privacy in exchange for security. According to former NSA security consultant Ed Giorgio, ‘Privacy and security are a zero-sum game’—meaning that for every increase in one, there is a decrease in the other. The go-to argument to justify mass surveillance, then, is that sacrificing our privacy is necessary for government agencies to be able to protect us from the bad guys. Continue reading
Hide your face?
A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in a given category or not. Basically, facial appearance is used to predict personality traits, types, or behaviours. The company claims already to have sold its technology to a homeland security agency to help identify terrorists. This does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.
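The general pattern Faception describes is ordinary supervised classification. A minimal sketch of that pattern is below; everything in it is hypothetical — the feature vectors stand in for whatever measurements might be extracted from a face image, and the nearest-centroid model is a stand-in for Faception's proprietary and undisclosed method:

```python
def nearest_centroid_fit(samples, labels):
    """Compute one mean feature vector (centroid) per class label."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))
```

Note what the sketch makes plain: such a classifier can only reproduce whatever correlations happen to exist in its labelled training set — including spurious ones — which is precisely why face-to-trait claims invite the "snake oil" charge.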
Global surveillance is not about privacy
It has now been almost two years since the Snowden revelations. It’s time for us to admit that this has little to do with privacy. Global surveillance is not global only because it targets people all over the world; it is carried out for, and against, global interests. Privacy, by contrast, is an individual right. It is simply the wrong level of description. This is not about your internet history or private phone calls, even if the media and Snowden wish it were.
Privacy is rarely valued as a fundamental right in itself. It matters insofar as its loss enables control, harming freedom, or insofar as its violation leads to the violation of some other fundamental right. But intelligence agencies’ capacity to carry out surveillance of their own citizens is far lower than their capacity to monitor foreigners. Any control this monitoring might entail will never operate at the individual level; governments cannot exert direct control over the individual citizens of foreign countries.
Framing this as an issue of individual privacy is a strategic move done against the interests of individuals. Continue reading
Computer vision and emotional privacy
A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.
Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer-vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even with practice, human accuracy rose only to 55%.
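The pipeline described above — measure facial movements, then run pattern recognition over their dynamics — can be sketched in miniature. This is a toy illustration only: the study's system measured facial action units from video, whereas here the single "feature" is the coefficient of variation of hypothetical inter-movement intervals, and the 0.2 threshold is invented for the example:

```python
from statistics import mean, stdev

def regularity(intervals):
    """Coefficient of variation of inter-movement intervals:
    low values mean suspiciously clockwork-regular dynamics."""
    return stdev(intervals) / mean(intervals)

def classify(intervals, threshold=0.2):
    """Label a movement sequence 'faked' when its dynamics are more
    regular (lower CV) than the threshold, 'genuine' otherwise."""
    return "faked" if regularity(intervals) < threshold else "genuine"
```

The real system learned its decision boundary from labelled examples rather than using a hand-set threshold, but the shape of the approach — a quantitative measure of movement dynamics feeding a classifier — is the same.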
The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:
In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.
The possibility of using this technology to detect when someone’s emotional expressions are genuine or not raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues: Continue reading
Lying in the least untruthful manner: surveillance and trust
When I last blogged about the surveillance scandal in June, I argued that the core problem was the reasonable doubts we have about whether the oversight is functioning properly, and that the secrecy makes these doubts worse. Since then a long list of new revelations has arrived. To me, what matters is not so much whether foreign agencies get secretly paid to spy, doubts about internal procedures, or how deeply software can peer into human lives, but how these revelations give the lie to many earlier denials. In an essay well worth reading, Bruce Schneier points out that this pattern of deception severely undermines our trust in the authorities, and that this is an important social risk: democracies and market economies require us to trust politicians and companies to an appropriate extent.
On being private in public
We all know that we are under CCTV surveillance on many occasions each day, particularly when we are in public places. For the most part we accept that being – or potentially being – watched in public places is a reasonable price to pay for the security that 24-hour surveillance offers. However, we also have expectations about what is done with CCTV footage, when, and by whom. A recent discussion with a friend threw up some interesting questions about the nature of these expectations and their reasonableness.
My friend works in a bar where, unsurprisingly, there are several CCTV cameras. Everyone knows where these cameras are and that they are permanently in operation – there is not supposed to be any secrecy. Whilst the primary purpose of the cameras is to promote security, a member of the management team has begun to use them in a way that could be seen as ethically problematic: she logs on to view the footage in real time, remotely, from her home. In addition to watching the footage, the manager has also addressed points of staff discipline based on what she sees. Perhaps particularly troubling is that she has commented on the way a member of staff behaved when no one was around – when the member of staff thought that she was ‘alone’. Continue reading
Enlightened surveillance?
New York City contemplates using aerial drones for surveillance purposes, while North Korea buys thousands of cameras to spy on its impoverished population. Britain has so many cameras that they have ceased to be newsworthy. The stories multiply – it is trivial to note that we are moving towards a surveillance society.
In an earlier post, I suggested that surrendering on surveillance might be the least bad option – of all likely encroachments on civil liberties, it seemed the least damaging and the hardest to resist. But that’s an overly defensive way of phrasing it – if ubiquitous surveillance and lack of privacy are the trends of the future, we shouldn’t just begrudgingly accept them, but demand that society get the most possible out of them. In this post, I’m not going to suggest how to achieve enlightened surveillance (360-degree surveillance would be a small start, for instance), but just outline some of the positive goods we could get from it. We all know the negatives; but what good could come from corporations, governments and neighbours being able to peer continually into your bedroom (and efficiently process that data)? In the ideal case, how could we make it work for us? Continue reading