Cross Post: Privacy is a Collective Concern: When We Tell Companies About Ourselves, We Give Away Details About Others, Too.
BY CARISSA VÉLIZ
This article was originally published in New Statesman America

When asked whether they protect the privacy of their data, people often answer in personal terms. Those who don’t care much about privacy might say that they have nothing to hide. Those who do worry about it might say that keeping their personal data safe protects them from being harmed by hackers or unscrupulous companies. Both positions assume that caring about and protecting one’s privacy is a personal matter. This is a common misunderstanding.
It’s easy to assume that because some data is “personal”, protecting it is a private matter. But privacy is both a personal and a collective affair, because data is rarely used on an individual basis.
A Sad Victory
I recently watched the documentary AlphaGo, directed by Greg Kohs. The film tells the story of the development of AlphaGo, a computer Go program built by DeepMind, and tracks the 2016 match between AlphaGo and 18-time world champion Lee Sedol.
Go is an ancient Chinese board game, once counted among the four essential arts of aristocratic Chinese scholars. The goal is to end the game having captured more territory than your opponent. What makes Go a particularly interesting game for AI to master is, first, its complexity. Compared to chess, Go has a larger board and many more alternatives to consider per move: the number of possible moves in a given position is about 20 in chess, but about 200 in Go, and the number of possible configurations of the board exceeds the number of atoms in the observable universe. Second, Go is a game in which intuition is believed to play a big role. When professionals are asked why they played a particular move, they often respond with something to the effect that ‘it felt right’. It is this intuitive quality that leads Go to sometimes be considered an art, and Go players artists. For a computer program to beat human Go players, then, it would have to mimic human intuition (or, more precisely, mimic the results of human intuition).
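The numbers above can be checked with a short back-of-the-envelope calculation. The sketch below (in Python; the branching factors of 20 and 200 are the rough figures cited in the paragraph, and 3^361 is a naive upper bound, since each of the 361 points on a 19×19 board is black, white, or empty — the true count of legal positions, about 2.1 × 10^170, is smaller but still dwarfs the ~10^80 atoms estimated in the observable universe):

```python
import math

BOARD_POINTS = 19 * 19             # 361 intersections on a standard Go board
upper_bound = 3 ** BOARD_POINTS    # naive upper bound on board configurations

digits = len(str(upper_bound))     # Python ints are arbitrary-precision
print(f"3^361 is roughly 10^{digits - 1}")          # ~10^172
print("Atoms in observable universe: ~10^80")

# With average branching factors of ~20 (chess) vs ~200 (Go),
# the number of distinct 10-ply lines differs by ten orders of magnitude.
for game, branching in [("chess", 20), ("Go", 200)]:
    lines = round(10 * math.log10(branching))
    print(f"{game}: ~10^{lines} lines at depth 10")
```

The point of the comparison is that brute-force search, which sufficed for chess, cannot exhaust Go's game tree, which is why AlphaGo instead relied on learned evaluation.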
Cross Post: What If Banks Were the Main Protectors of Customers’ Private Data?
Written by Carissa Véliz
Dr Carissa Véliz, Oxford Uehiro Centre research fellow, has recently published a provocative article in the Harvard Business Review:
The ability to collect and exploit consumers’ personal data has long been a source of competitive advantage in the digital economy. It is their control and use of this data that has enabled the likes of Google, Amazon, Alibaba, and Facebook to dominate online markets.
But consumers are increasingly concerned about the vulnerability that comes with surrendering data. A growing number of cyberattacks (the 2017 hacking of the credit reporting company Equifax being a case in point, not to mention the likely interference by Russian government-sponsored hackers in the 2016 US presidential election) have triggered something of a “techlash”.
Even without these scandals, sooner or later every netizen is likely to suffer a bad data experience: a stolen credit card number, a hacked account, or exposed personal details; embarrassment from an inappropriate ad appearing at work, or the realization that their favorite airline charges them more than it charges others for the same flight.
Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.
Written by Carissa Véliz
Crossposted from Slate.
At the moment, artificial intelligences may have perfect memories and be better at arithmetic than we are, but they are clueless. A few seconds of interaction with any digital assistant is enough to realize one is not in the presence of a very bright interlocutor. Among the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.
The mere exasperation of talking to a digital assistant can be enough to make one miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempt at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) That A.I. does not understand what a shopping list is, and what kinds of items are appropriate to such lists, is evidence of a much broader problem: they lack common sense.
The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Paramount among its uses is getting A.I. to “understand what’s harmful to people.”
Cross Post: Think Twice Before Sending Facebook Your Nude Photos: The Shadow Brokers’ Disclosures Prove Privacy and Security Are Not a Zero-Sum Game
Written by Dr Carissa Véliz
This article first appeared in El Pais
Time and again, we have been sold the story that we need to give up privacy in exchange for security. According to former NSA security consultant Ed Giorgio, ‘Privacy and security are a zero-sum game’—meaning that for every increase in one, there is a decrease in the other. The go-to argument to justify mass surveillance, then, is that sacrificing our privacy is necessary for government agencies to be able to protect us from the bad guys.
Cross Post: Why you might want to think twice about surrendering online privacy for the sake of convenience
Written by Carissa Véliz
DPhil Candidate in Philosophy, Uehiro Centre for Practical Ethics, University of Oxford
This article was originally published in The Conversation

It is inconvenient to guard one’s privacy, and the better one protects it, the more inconvenience one must endure. Enjoying privacy, at a minimum, demands installing software to block online tracking, using long and distinct passwords for each online service, remembering to turn off the WiFi and Bluetooth signals on your mobile phone when leaving the house, using cash, and so on.
The Panama Papers: How much financial privacy should the super rich be allowed to enjoy?
The Panama Papers comprise a leak of 11.5 million files from Mossack Fonseca, the world’s fourth biggest offshore law firm. The leak has tainted the reputations of many celebrities, and some public officials have been forced to resign, including Icelandic Prime Minister Sigmundur Davíð Gunnlaugsson and Spanish Industry Minister José Manuel Soria.
Ramón Fonseca, Director of Mossack Fonseca, complained that his firm was the victim of “an international campaign against privacy.” At a time when privacy does seem to be under attack on all fronts, it is worth asking whether the super rich ought to be able to enjoy financial privacy with respect to their offshore accounts.
A jobless world—dystopia or utopia?
There is no telling what machines might be able to do in the not-too-distant future. It is humbling to realise how often we have been wrong in the past when predicting the limits of machine capabilities.
We once thought that it would never be possible for a computer to beat a world champion in chess, a game considered the expression of the quintessence of human intelligence. We were proven wrong in 1997, when Deep Blue beat Garry Kasparov. Once we came to terms with the idea that computers might beat us at any intellectual game (including Jeopardy! and, more recently, Go), we thought that surely they would be unable to engage in activities that require common sense and physical coordination in response to disordered conditions, such as driving. Driverless cars are now a reality, with Google aiming to commercialise them by 2020.
Machines assist doctors in exploring treatment options, they score tests, plant and pick crops, trade stocks, store and retrieve our documents, process information, and play a crucial role in the manufacturing of almost every product we buy.
As machines become more capable, there are more incentives to replace human workers with computers and robots. Computers do not ask for a decent wage, they do not need rest or sleep, they do not need health benefits, they do not complain about how their superiors treat them, and they do not steal or laze away.
Some thoughts on reparations
Consider the following case. Imagine you inherit a fortune from your parents. With that money, you buy a luxurious house and you pay to get a good education, which later allows you to find a job where you earn a decent salary. Many years later, you find out that your parents made their fortune through a very bad act—say, defrauding someone. You also find out that the scammed person and his family lived an underprivileged life from that moment on.
What do you think you would need to do to fulfill your moral obligations?
If you want to do the most good, maybe you shouldn’t work for Wall Street
Suppose you are an altruistically minded person who is adamant about doing the most good you possibly can. If you are lucky enough to have a wide range of options, what career should you choose?
Two years ago, William MacAskill, President of 80,000 Hours, a non-profit organisation focused on “enabling people to make a bigger difference with their career,” suggested you steer clear of charity work and aim for Wall Street. He called this approach earning to give. A couple of days ago, MacAskill published a blog post in which he admits that heavily pushing the idea of earning to give was “a marketing strategy,” and that, although 80,000 Hours did believe that “at least a large proportion of people” should become high earners in order to donate more money, placing so much emphasis on this idea may have been mistaken. The 80,000 Hours page on earning to give now reads: “This page was last updated in 2012 and no-longer fully reflects our views.” MacAskill’s current view is that only a “small proportion” of people should strive to earn to give.