Kuwait is planning to build a complete DNA database covering not just citizens but all other residents and temporary visitors. The stated motivation is antiterrorism (the universal motivation!) and fighting crime. Many are outraged, from local lawyers to a UN human rights committee and the European Society of Human Genetics, and think it will not be very helpful against terrorism (how does having the DNA of a suicide bomber help after the fact?). Rather, there are reasons to worry about misuse in paternity testing (Kuwait has strict adultery laws) and in the politics of citizenship (which provides many benefits): citizenship is strictly limited to paternal descendants of the original Kuwaiti settlers, and there is significant discrimination against people with no recognized paternity, such as the Bidun minority. Plus, and this may be another strong motivation for many of the scientists protesting against the law, it might undermine public willingness to donate genomes to research databases, where they actually do some good. Obviously it might also deter visitors – would, for example, foreign heads of state accept leaving their genomes in the hands of another state? Not to mention the risk of discovering adultery in ruling families – there is a certain gamble in doing this.
Overall, it seems few outside the Kuwaiti government are cheering for the law. When I recently participated in a panel discussion on genetic privacy organised by the BSA at the Wellcome Collection, the question “Would anybody here accept mandatory genetic collection?” drew only one or two raised hands in a large audience. When would mandatory collection of genetic information make sense? Continue reading
Carissa Véliz on how our privacy is threatened when we use smartphones, computers, and the internet.
Smartphones are like spies in our pockets; we should cover the cameras and microphones of our laptops; it is difficult to opt out of services like Facebook that track us across the internet; IMSI-catchers can ‘vacuum’ data from our smartphones; data brokers may sell our internet profiles to criminals and/or future employers; and yes, we should protect people’s privacy even if they don’t care about it. Carissa Véliz (University of Oxford) warns us: we should act now, before it is too late. Privacy damages accumulate and, in many cases, are irreversible. We urgently need more regulation to protect our privacy.
The Panama Papers comprise a leak of 11.5 million files from Mossack Fonseca, the world’s fourth-biggest offshore law firm. The leak has tainted the reputations of many celebrities, and some public officials have been forced to resign, including Icelandic Prime Minister Sigmundur Davíð Gunnlaugsson and Spanish Industry Minister José Manuel Soria.
Ramón Fonseca, Director of Mossack Fonseca, complained that his firm was the victim of “an international campaign against privacy.” At a time when privacy does seem to be under attack on all fronts, it is worth asking whether the super-rich ought to be able to enjoy financial privacy with respect to their offshore accounts. Continue reading
Written By: Roy Gilbar, Netanya Academic College, Israel, and Charles Foster
In the recent case of ABC v St. George’s Healthcare NHS Trust and others [1: http://www.bailii.org/ew/cases/EWHC/QB/2015/1394.html], a High Court judge decided that:
(a) where the defendants (referred to here jointly as ‘X’) knew that Y, a prisoner, was suffering from Huntington’s Disease (‘HD’); and
(b) X knew that Y had refused permission to tell Y’s daughter, Z (the claimant), that he had HD (and accordingly that there was a 50% chance that Z had it, and that if Z had it there was, correspondingly, a 50% chance that the fetus she was then carrying would have HD),
X had no duty to tell Z that Y was suffering from HD. Z said that if she had known of Y’s condition, she would have had an abortion. Continue reading
Since it was revealed that Andreas Lubitz—the co-pilot thought to be responsible for deliberately crashing Germanwings Flight 9525 and killing 149 people—suffered from depression, a debate has ensued over whether Germany’s privacy laws regarding medical records should be less strict for professions that carry special responsibilities.
It has now been almost two years since the Snowden revelations. It’s time for us to admit that this has little to do with privacy. Global surveillance is not global only because it targets people all over the world; it is carried out for and against global interests. Privacy, by contrast, is an individual right. It is simply the wrong level of description. This is not about your internet history or private phone calls, even if the media and Snowden wish it were.
Privacy is rarely seen as a fundamental right. Privacy is relevant insofar as its loss enables control over us, harming our freedom, or insofar as it leads to the violation of a fundamental right. But the capacity of intelligence agencies to carry out surveillance of their own citizens is far lower than their capacity to monitor foreigners. Any control this monitoring might entail will never operate at the individual level; governments cannot exert direct control over individual citizens of foreign countries.
Framing this as an issue of individual privacy is a strategic move done against the interests of individuals. Continue reading
Facebook changed its privacy settings this January. For Europeans, the changes came into effect on January 30, 2015.
Apart from collecting data from your contacts, the information you provide, and everything you see and do on Facebook, the new data policy enables the Facebook app to use your GPS, Bluetooth, and WiFi signals to track your location at all times. Facebook may also collect information about payments you make (including billing, shipping, and contact details). Finally, the social media giant collects data from third-party partners, from other Facebook companies (like Instagram and WhatsApp), and from websites and apps that use its services (websites that offer “Like” buttons or use Facebook Login).
The result? Facebook will now know where you live, work, and travel; what and where you shop; whom you are with; and roughly what your purchasing power is. It will have more information about your habits, likes and dislikes, political inclinations, and concerns than anyone in your life, and, depending on how you use the Internet, it may come to know about such sensitive matters as medical conditions and sexual preferences.
A closer look, however, might reveal the matter in a different light. Continue reading
This week, a landmark ruling from the European Court of Justice held that a Directive of the European Parliament entailed that Internet search engines could, in some circumstances, be legally required (on request) to remove links to personal data that have become irrelevant or inadequate. The justification underlying this decision has been dubbed the ‘right to be forgotten’.
The ruling came in response to a case in which a Spanish gentleman (I was about to write his name but then realized that to do so would be against the spirit of the ruling) brought a complaint against Google. He objected to the fact that if people searched for his name in Google Search, the list of results displayed links to information about his house being repossessed in recovery of social security debts that he owed. The man requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that the data no longer appeared in the search results. His principal argument was that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant. Continue reading
A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.
Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even with practice, human accuracy increased only to 55%.
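To make the two-stage pipeline concrete – measure facial movements, then run pattern recognition on those measurements – here is a minimal sketch. Everything in it is invented for illustration: the study used automated facial action coding and a trained classifier on real video, whereas this toy version generates synthetic per-clip movement summaries for “genuine” and “faked” pain and classifies them with a simple nearest-centroid rule.

```python
# Toy illustration of the pipeline described above; all feature
# dimensions and numbers are hypothetical, not from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Pretend each clip is summarized by 4 facial-movement statistics
# (e.g. mean intensities of a few facial action units over time).
genuine = rng.normal([0.8, 0.3, 0.5, 0.2], 0.15, size=(n, 4))
faked = rng.normal([0.5, 0.6, 0.4, 0.5], 0.15, size=(n, 4))

# Split each class: first 150 clips for training, last 50 for testing.
train_g, test_g = genuine[:150], genuine[150:]
train_f, test_f = faked[:150], faked[150:]

# A minimal "pattern recognition" step: nearest-centroid classification.
centroid_g = train_g.mean(axis=0)
centroid_f = train_f.mean(axis=0)

def predict(x):
    # Label a clip genuine (1) if it lies closer to the genuine centroid.
    return int(np.linalg.norm(x - centroid_g) < np.linalg.norm(x - centroid_f))

clips = np.vstack([test_g, test_f])
labels = [1] * 50 + [0] * 50
accuracy = sum(predict(x) == y for x, y in zip(clips, labels)) / len(labels)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point is only that once facial movements are reduced to numbers, telling genuine from faked expressions becomes an ordinary classification problem – which is why a machine, unconstrained by the perceptual habits that mislead human observers, can exceed human accuracy.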
The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:
In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.
The possibility of using this technology to detect when someone’s emotional expressions are genuine or not raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues: Continue reading