
privacy

Global surveillance is not about privacy

It has now been almost two years since Snowden. It’s time for us to admit that this has little to do with privacy. Global surveillance is not global only because it targets people all over the world; it is carried out for and against global interests. Privacy, by contrast, is an individual right. It is simply the wrong level of description. This is not about your internet history or private phone calls, even if the media and Snowden wish it were.

Privacy is rarely treated as a fundamental right in itself. It is relevant insofar as its loss enables control, and thereby harms freedom, or insofar as its loss leads to the violation of some other fundamental right. But intelligence agencies’ capacity to carry out surveillance over their own citizens is far more limited than their capacity to monitor foreigners. Any control this monitoring might entail will never operate at the individual level; governments cannot exert direct control over individual citizens of foreign countries.


Framing this as an issue of individual privacy is a strategic move made against the interests of individuals.

Facebook’s new Terms of Service: Choosing between your privacy and your relationships

Facebook changed its privacy settings this January. For Europeans, the changes came into effect on January 30, 2015.

Apart from collecting data from your contacts, the information you provide, and everything you see and do on Facebook, the new data policy enables the Facebook app to use your GPS, Bluetooth, and WiFi signals to track your location at all times. Facebook may also collect information about payments you make (including billing, shipping, and contact details). Finally, the social media giant collects data from third-party partners, from other Facebook companies (such as Instagram and WhatsApp), and from websites and apps that use its services (websites that offer “Like” buttons and use Facebook Login).

The result? Facebook will now know where you live, work, and travel, what and where you shop, whom you are with, and roughly what your purchasing power is. It will know more about your habits, likes and dislikes, political inclinations, and concerns than anyone in your life, and, depending on how you use the Internet, it may come to know about such sensitive matters as medical conditions and sexual preferences.

To Facebook’s credit, its new terms of service, although ambiguous, are clearer than most terms of service one finds on the Internet. Despite the intrusiveness of the privacy policy, one may look benevolently on Facebook: if its terms are comparatively explicit and clear, if users know about them and give their consent, and if in turn the company provides a valuable free service to more than a billion users, why should the new privacy policy be frowned upon? After all, if people don’t like the new terms, they are not forced to use Facebook: they are free not to sign up, and current users can delete their accounts.

A closer look, however, might reveal the matter in a different light.

On the ‘right to be forgotten’

This week, a landmark ruling from the European Court of Justice held that a Directive of the European Parliament entailed that Internet search engines could, in some circumstances, be legally required (on request) to remove links to personal data that have become irrelevant or inadequate. The justification underlying this decision has been dubbed the ‘right to be forgotten’.

The ruling came in response to a case in which a Spanish gentleman (I was about to write his name but then realized that to do so would be against the spirit of the ruling) brought a complaint against Google. He objected to the fact that if people searched for his name in Google Search, the list of results displayed links to information about his house being repossessed in recovery of social security debts that he owed. The man requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that the data no longer appeared in the search results. His principal argument was that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant.

Computer vision and emotional privacy

A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.

Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even when the human participants practiced, their accuracy only increased to 55%.
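
For readers curious what such a system involves, here is a minimal sketch of the general approach, assuming per-clip facial-movement features (for example, summaries of action-unit intensities) have already been extracted. The data, dimensions, and the choice of a support-vector classifier are placeholders of mine, not the study’s published pipeline.

    # Minimal sketch only: separates genuine from faked pain expressions given
    # pre-extracted facial-movement features. Random stand-in data; the feature
    # extraction step (the hard part) is not shown.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_clips, n_features = 200, 40                      # hypothetical dataset size
    X = rng.normal(size=(n_clips, n_features))         # stand-in for per-clip facial-movement features
    y = rng.integers(0, 2, size=n_clips)               # 1 = genuine pain, 0 = faked

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)      # the pattern-recognition step
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))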

The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:

In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.

The possibility of using this technology to detect whether someone’s emotional expressions are genuine raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues.

How to get positive surveillance – a few ideas

I recently published an article on the possible upsides of mass surveillance (somewhat in the vein of David Brin’s “transparent society”). To nobody’s great astonishment, it has attracted criticism! Some critics accuse me of ignoring the negative aspects of surveillance. But that was not the article’s point; a lot has already been written on the negative aspects (Bruce Schneier and Cory Doctorow, for instance, have covered them extremely well). Others make the point that, though these benefits may be conceivable in principle, I haven’t shown how they could be obtained in practice.

Again, that wasn’t the point of the article. But it’s a fair criticism – what can we do today to make better surveillance outcomes more likely? Since I didn’t have space to go through that in my article, here are a few suggestions.

A reply to ‘Facebook: You are your ‘Likes”

Yesterday, Charles Foster discussed the recent study showing that Facebook ‘Likes’ can be plugged into an algorithm to predict things about people – things about their demographics, their habits and their personalities – that they didn’t explicitly disclose. Charles argued that, even though the individual ‘Likes’ were voluntarily published, to use an algorithm to generate further predictions would be unethical on the grounds that individuals have not consented to it and, consequently, that to go ahead and do it anyway is a violation of their privacy.
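
To make the kind of algorithm at issue concrete, here is a minimal sketch of how a trait predictor built on ‘Likes’ data might look, using random stand-in data. The dimensionality-reduction-plus-logistic-regression pipeline is an assumption of mine about the general approach, not the study’s published method.

    # Minimal sketch only: predicts one binary trait from a users x Likes matrix.
    # All data are random stand-ins; with real Likes data the rows would be users
    # and the columns pages they have Liked.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_users, n_likes = 500, 2000
    likes = (rng.random((n_users, n_likes)) < 0.02).astype(float)   # 1 = user Liked the page
    trait = rng.integers(0, 2, size=n_users)                        # e.g. a self-reported binary attribute

    model = make_pipeline(TruncatedSVD(n_components=50), LogisticRegression(max_iter=1000))
    print("cross-validated accuracy:", cross_val_score(model, likes, trait, cv=5).mean())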

I wish to make three points contesting his strong conclusion, instead offering a more qualified position: simply running the algorithm on publicly available ‘Likes’ data is not unethical, even if no consent has been given. Doing particular things based on the output of the algorithm, however, might be.

On being private in public

We all know that we are under CCTV surveillance on many occasions each day, particularly when we are in public places. For the most part we accept that being – or potentially being – watched in public places is a reasonable price to pay for the security that 24-hour surveillance offers. However, we also have expectations about what is done with CCTV footage, when, and by whom. A recent discussion with a friend threw up some interesting questions about the nature of these expectations and their reasonableness.

My friend works in a bar where, unsurprisingly, there are several CCTV cameras. Everyone knows where these cameras are and that they are permanently in operation – there is not supposed to be any secrecy. Whilst the primary purpose of the cameras is to promote security, a member of the management team has begun to use them in a way that could be seen as ethically problematic: she logs on to view the footage in real-time, remotely, at her home. In addition to watching the footage, the manager has also addressed points of staff discipline based on what she sees. Perhaps particularly troubling is that she has commented on the way a member of staff behaved when no one was around – when the member of staff thought that she was ‘alone’.

Asking the right questions: big data and civil rights

Alastair Croll has written a thought-provoking article, Big data is our generation’s civil rights issue, and we don’t know it. His basic argument is that the new economics of collecting and analyzing data has led to a change in how it is used. Once it was expensive to collect, so only data needed to answer particular questions was collected. Today it is cheap to collect, so it can be collected first and then analyzed – “we collect first and ask questions later”. This means that the questions asked can be very different from the questions the data seem to be about, and in many cases they can be problematic. Race, sexual orientation, health or political views – important for civil rights – can be inferred from apparently innocuous information provided for other purposes – names, soundtracks, word usage, purchases, and search queries.

The problem, as he notes, is that in order to handle this new situation we need to link what the data is with how it can be used. And this cannot be done just technologically; it requires societal norms and regulations. What kinds of ethics do we need to safeguard civil rights in a world of big data?

Croll states:

…governments need to balance reliance on data with checks and balances about how this reliance erodes privacy and creates civil and moral issues we haven’t thought through. It’s something that most of the electorate isn’t thinking about, and yet it affects every purchase they make.
This should be fun.
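
As a toy illustration of the purely technological part of what Croll is asking for – linking what a piece of data is with how it may be used – here is a minimal purpose-binding sketch. The record fields and purpose names are invented; the real work, as he says, lies in the norms and regulations that would decide what goes on each list.

    # Minimal sketch only: data carries the purposes it was collected for, and any
    # other use is refused at the point of access. Names and purposes are invented.
    from dataclasses import dataclass

    @dataclass
    class Record:
        value: str
        allowed_purposes: frozenset

    def use(record: Record, purpose: str) -> str:
        if purpose not in record.allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' not permitted for this data")
        return record.value

    playlist = Record("late-night jazz", frozenset({"recommendations"}))
    print(use(playlist, "recommendations"))        # the purpose it was collected for
    try:
        use(playlist, "credit_scoring")            # an inferred, unconsented use
    except PermissionError as err:
        print("blocked:", err)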


The censor and the eavesdropper: the link between censorship and surveillance

Cory Doctorow makes a simple but important point in the Guardian: censorship today is inseparable from surveillance. In modern media, preventing people from seeing proscribed information requires systems that monitor their activity. To implement copyright-protecting censorship in the UK, systems must be in place to track what people seek to access and compare it against a denial list, whatever the medium used.
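
A minimal sketch can make the coupling concrete: to decide whether to block a request, a filter has to see every request first, so the monitoring comes for free. The host names and denial list below are invented for illustration.

    # Minimal sketch only: a toy request filter. Blocking requires inspecting (and
    # here, recording) every URL a user tries to reach, whether or not it is blocked.
    DENIAL_LIST = {"blocked.example.org"}

    access_log = []   # the surveillance side-effect: every lookup is observed

    def handle_request(user, url):
        host = url.split("/")[2] if "://" in url else url.split("/")[0]
        access_log.append((user, url))             # monitoring happens before any decision
        return "blocked" if host in DENIAL_LIST else "allowed"

    print(handle_request("alice", "https://blocked.example.org/page"))
    print(handle_request("bob", "https://news.example.com/article"))
    print(access_log)                              # both requests were recorded, blocked or not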
