Carissa Véliz’s Posts

Cross Post: What If Banks Were the Main Protectors of Customers’ Private Data?

Written by Carissa Véliz

Dr Carissa Véliz, Oxford Uehiro Centre research fellow, has recently published a provocative article in the Harvard Business Review:

The ability to collect and exploit consumers’ personal data has long been a source of competitive advantage in the digital economy. It is their control and use of this data that has enabled the likes of Google, Amazon, Alibaba, and Facebook to dominate online markets.

But consumers are increasingly concerned about the vulnerability that comes with surrendering data. A growing number of cyberattacks — the 2017 hacking of the credit reporting agency Equifax being a case in point, not to mention the likely interference by Russian government-sponsored hackers in the 2016 US presidential election — have triggered something of a “techlash”.

Even without these scandals, it is likely that sooner or later every netizen will suffer a bad data experience: from their credit card number being stolen, to their account getting hacked, or their personal details being exposed; from the embarrassment of an inappropriate ad popping up at work, to realizing that their favorite airline charges them more than it charges others for the same flight.

See here for the full article, and to join in the conversation.

Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

Written by Carissa Véliz

Cross-posted from Slate. Click here to read the full article.

At the moment, artificial intelligences may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes only a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to make one miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) A.I. not understanding what a shopping list is, or what kinds of items are appropriate to such lists, is evidence of a much broader problem: they lack common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Of paramount importance among its uses is to get A.I. to “understand what’s harmful to people.”

This article was originally published on Slate. To read the full article and to join in the conversation, please follow this link.

Cross Post: Think Twice Before Sending Facebook Your Nude Photos: The Shadow Brokers’ Disclosures Prove Privacy and Security Are Not a Zero-Sum Game

Written by Dr Carissa Véliz

This article first appeared in El País

Time and again, we have been sold the story that we need to give up privacy in exchange for security. According to former NSA security consultant Ed Giorgio, ‘Privacy and security are a zero-sum game’—meaning that for every increase in one, there is a decrease in the other. The go-to argument to justify mass surveillance, then, is that sacrificing our privacy is necessary for government agencies to be able to protect us from the bad guys. Continue reading

Cross Post: Why you might want to think twice about surrendering online privacy for the sake of convenience

Written by Carissa Véliz

DPhil Candidate in Philosophy, Uehiro Centre for Practical Ethics, University of Oxford

This article was originally published in The Conversation

Just a click away once you tick this too-long-to-read privacy agreement. Shutterstock

It is inconvenient to guard one’s privacy, and the better one protects it, the more inconvenience one must endure. Enjoying privacy, at a minimum, demands installing software to block tracking online, using long and different passwords for online services, remembering to turn off the WiFi and Bluetooth signals on your mobile phone when leaving the house, using cash, and so on. Continue reading

The Panama Papers: How much financial privacy should the super rich be allowed to enjoy?

The Panama Papers comprise a leak of 11.5 million files from Mossack Fonseca, the world’s fourth biggest offshore law firm. The leak has tainted the reputations of many celebrities, and some public officials have been forced to resign, including Icelandic Prime Minister Sigmundur Davíð Gunnlaugsson and Spanish Industry Minister José Manuel Soria.

Ramón Fonseca, Director of Mossack Fonseca, complained that his firm was the victim of “an international campaign against privacy.” At a time when privacy does seem to be under attack on all fronts, it is relevant to ask whether the super rich ought to be able to enjoy financial privacy with respect to their offshore accounts. Continue reading

A jobless world—dystopia or utopia?

There is no telling what machines might be able to do in the not very distant future. It is humbling to realise how wrong we have been in the past at predicting the limits of machine capabilities.

We once thought that it would never be possible for a computer to beat a world champion in chess, a game thought to be the quintessential expression of human intelligence. We were proven wrong in 1997, when Deep Blue beat Garry Kasparov. Once we came to terms with the idea that computers might be able to beat us at any intellectual game (including Jeopardy!, and more recently, Go), we thought that surely they would be unable to engage in activities where we typically need to use common sense and coordination to respond physically to disordered conditions, as when we drive. Driverless cars are now a reality, with Google trying to commercialise them by 2020.

Machines assist doctors in exploring treatment options, they score tests, plant and pick crops, trade stocks, store and retrieve our documents, process information, and play a crucial role in the manufacturing of almost every product we buy.

As machines become more capable, there are more incentives to replace human workers with computers and robots. Computers do not ask for a decent wage, they do not need rest or sleep, they do not need health benefits, they do not complain about how their superiors treat them, and they do not steal or laze away.

Continue reading

Some thoughts on reparations

Consider the following case. Imagine you inherit a fortune from your parents. With that money, you buy a luxurious house and you pay to get a good education, which later allows you to find a job where you earn a decent salary. Many years later, you find out that your parents made their fortune through a very bad act—say, defrauding someone. You also find out that the scammed person and his family lived an underprivileged life from that moment on.

What do you think you would need to do to fulfill your moral obligations?

Continue reading

If you want to do the most good, maybe you shouldn’t work for Wall Street

Suppose you are an altruistically minded person who is adamant about doing the most good you possibly can. If you are lucky enough to have a wide range of options, what career should you choose?

Two years ago, William MacAskill, president of 80,000 Hours, a non-profit organisation focused on “enabling people to make a bigger difference with their career,” suggested you steer clear of charity work and aim for Wall Street. He called this approach earning to give. A couple of days ago, MacAskill published a blog post in which he admits that heavily pushing the idea of earning to give was “a marketing strategy,” and that, although 80,000 Hours did believe that “at least a large proportion of people” should become high-earners in order to donate more money, placing so much emphasis on this idea may have been mistaken. The 80,000 Hours page on earning to give now reads: “This page was last updated in 2012 and no-longer fully reflects our views.” MacAskill’s current view is that only a “small proportion” of people should strive to earn to give. Continue reading

What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?

Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people do not have any understanding of the way Google individually curates content for its users. Its algorithms are secret. For the past year, and as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs it has reviewed.

As was made apparent in a recent open letter from 80 academics urging Google for more transparency, the criteria being used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in which countries. The academics signing the letter point out that Google has been charged with the task of balancing privacy and access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.

Continue reading

Is privacy to blame for the Germanwings tragedy?

Since it was revealed that Andreas Lubitz—the co-pilot thought to be responsible for deliberately crashing Germanwings Flight 9525 and killing 149 people—suffered from depression, a debate has ensued over whether privacy laws regarding medical records in Germany should be less strict when it comes to professions that carry special responsibilities.

Continue reading
