A Sad Victory
I recently watched the documentary AlphaGo, directed by Greg Kohs. The film tells the story of the development of AlphaGo, a computer Go program built by DeepMind, and follows the match between AlphaGo and Lee Sedol, an 18-time world champion.
Go is an ancient Chinese board game, considered one of the four essential arts of aristocratic Chinese scholars. The goal is to end the game having captured more territory than your opponent. What makes Go a particularly interesting game for AI to master is, first, its complexity. Compared to chess, Go has a larger board and many more alternatives to consider per move: there are about 20 possible moves in a given chess position, but about 200 in a Go position, and the number of possible configurations of the board exceeds the number of atoms in the observable universe. Second, Go is a game in which intuition is believed to play a big role. When professionals are asked why they played a particular move, they will often say something to the effect that ‘it felt right’. It is this intuitive quality that explains why Go is sometimes considered an art, and Go players artists. For a computer program to beat human Go players, then, it would have to mimic human intuition (or, more precisely, mimic the results of human intuition).
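To get a sense of that scale, here is a rough back-of-the-envelope sketch in Python. The branching factors (about 20 for chess, about 200 for Go) are the figures quoted above; the typical game lengths of roughly 80 plies for chess and 150 for Go are assumptions added purely for illustration.

import math

# Crude game-tree size estimate: branching_factor ** plies.
# Branching factors come from the figures above; game lengths are assumed.
def tree_size_exponent(branching_factor, plies):
    """Return x such that branching_factor ** plies == 10 ** x."""
    return plies * math.log10(branching_factor)

print(f"chess: ~10^{tree_size_exponent(20, 80):.0f}")    # ~10^104
print(f"go:    ~10^{tree_size_exponent(200, 150):.0f}")  # ~10^345
print("atoms in the observable universe: ~10^80")

Even on these crude numbers, the game tree of Go dwarfs that of chess by hundreds of orders of magnitude, which helps explain why the search-heavy approach that worked for chess is hopeless for Go.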
The Panama Papers: How much financial privacy should the super-rich be allowed to enjoy?
The Panama Papers comprise a leak of 11.5 million files from Mossack Fonseca, the world’s fourth biggest offshore law firm. The leak has tainted the reputations of many celebrities, and some public officials have been forced to resign, including Icelandic Prime Minister Sigmundur Davíð Gunnlaugsson and Spanish Industry Minister José Manuel Soria.
Ramón Fonseca, Director of Mossack Fonseca, complained that his firm was the victim of “an international campaign against privacy.” At a time when privacy does seem to be under attack on all fronts, it is relevant to ask whether the super-rich ought to be able to enjoy financial privacy with respect to their offshore accounts.
A jobless world—dystopia or utopia?
There is no telling what machines might be able to do in the not-too-distant future. It is humbling to realise how wrong our past predictions about the limits of machine capabilities have been.
We once thought that a computer could never beat a world champion at chess, a game regarded as the quintessential expression of human intelligence. We were proven wrong in 1997, when Deep Blue beat Garry Kasparov. Once we came to terms with the idea that computers might be able to beat us at any intellectual game (including Jeopardy! and, more recently, Go), we thought that surely they would be unable to engage in activities that require common sense and physical coordination in response to messy, unpredictable conditions, as when we drive. Driverless cars are now a reality, with Google aiming to commercialise them by 2020.
Machines assist doctors in exploring treatment options; they score tests, plant and pick crops, trade stocks, store and retrieve our documents, process information, and play a crucial role in the manufacturing of almost every product we buy.
As machines become more capable, the incentives to replace human workers with computers and robots grow. Computers do not ask for a decent wage, they do not need rest or sleep, they do not need health benefits, they do not complain about how their superiors treat them, and they do not steal or slack off.
Some thoughts on reparations
Consider the following case. Imagine you inherit a fortune from your parents. With that money, you buy a luxurious house and pay for a good education, which later allows you to find a job with a decent salary. Many years later, you find out that your parents made their fortune through a very bad act, say, defrauding someone. You also find out that the defrauded person and his family lived an underprivileged life from that moment on.
What do you think you would need to do to fulfill your moral obligations?
If you want to do the most good, maybe you shouldn’t work for Wall Street
Suppose you are altruistically minded and adamant about doing the most good you possibly can. If you are lucky enough to have a wide range of options, what career should you choose?
Two years ago, William MacAskill, President of 80,000 Hours, a non-profit organisation focused on “enabling people to make a bigger difference with their career,” suggested you steer clear of charity work and aim for Wall Street. He called this approach earning to give. A couple of days ago, MacAskill published a blog post in which he admits that heavily pushing the idea of earning to give was “a marketing strategy,” and that, although 80,000 Hours did believe that “at least a large proportion of people” should become high earners in order to donate more money, placing so much emphasis on this idea may have been mistaken. The 80,000 Hours page on earning to give now reads: “This page was last updated in 2012 and no-longer fully reflects our views.” MacAskill’s current view is that only a “small proportion” of people should strive to earn to give.
What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?
Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people have no understanding of the way Google individually curates content for its users: its algorithms are secret. For the past year, as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs it has reviewed. As was made apparent in a recent open letter from 80 academics urging Google to be more transparent, the criteria used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in which countries. The academics signing the letter point out that Google has been charged with the task of balancing privacy and access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.
Is privacy to blame for the Germanwings tragedy?
Since it was revealed that Andreas Lubitz, the co-pilot thought to be responsible for deliberately crashing Germanwings Flight 9525 and killing 149 people, suffered from depression, a debate has ensued over whether privacy laws regarding medical records in Germany should be less strict for professions that carry special responsibilities.
On holding ethicists to higher moral standards and the value of moral inconsistency
A few weeks ago, Adela Cortina, one of the most important moral philosophers in Spain, was interviewed in the newspaper El País. “This should be the easiest interview in the world,” said the journalist by way of introduction. Adela Cortina asked why. “Because of your profession. Professors of Ethics never lie, right?” “People assume we are faultless, and when they talk to me they are always justifying themselves. What I work on is something academic, and then, when it comes to life, I try to be consistent with my convictions, but nobody is incorruptible,” she said.
Suppose I tell you that a professor from your local university did something morally reprehensible—cheated on his spouse, failed to pay taxes, or stole money from his department. Suppose that I then tell you this professor is a moral philosopher. Does this further fact make his actions all the more disappointing? I suspect most people think it does. Why is it that ethicists are commonly held to higher moral standards than the rest of the population? Should they be?
Facebook’s new Terms of Service: Choosing between your privacy and your relationships
Facebook changed its privacy settings this January. For Europeans, the changes came into effect on January 30, 2015.
Apart from collecting data from your contacts, the information you provide, and everything you see and do on Facebook, the new data policy enables the Facebook app to use your GPS, Bluetooth, and WiFi signals to track your location at all times. Facebook may also collect information about payments you make (including billing, shipping, and contact details). Finally, the social media giant collects data from third-party partners, from other Facebook companies (like Instagram and WhatsApp), and from websites and apps that use its services (websites that offer “Like” buttons and use Facebook Login).
The result? Facebook will now know where you live, work, and travel, what and where you shop, whom you are with, and roughly what your purchasing power is. It will have more information than anyone in your life about your habits, likes and dislikes, political inclinations, and concerns, and, depending on how you use the Internet, it might come to know about such sensitive matters as medical conditions and sexual preferences.
To Facebook’s credit, its new terms of service, although ambiguous, are clearer than most terms of service one finds on the Internet. Despite the intrusiveness of the privacy policy, one may look benevolently on Facebook: if its terms are comparatively explicit and clear, if users know about them and give their consent, and if in turn the company provides a valuable free service to more than a billion users, why should the new privacy policy be frowned upon? After all, if people don’t like the new terms, they are not forced to use Facebook: they are free not to sign up, or to delete their accounts if they are current users.
A closer look, however, might reveal the matter in a different light.
7 reasons not to feel bad about yourself when you have acted immorally
Feeling bad about oneself is a common response to realising that one has acted wrongly, or that one could have done something morally better. It is a reaction at least partly inspired by a cultural inheritance that Western civilisation has been carrying on its back for centuries. But contrary to appearances and folk belief, not only does our tendency to feel guilty fail to promote morality, it can also be an obstacle to moral behaviour.