Google is said to have dropped its famous “Don’t be evil” slogan. In fact, it is the holding company Alphabet that now merely asks employees to “do the right thing”. Regardless of what one thinks of Google’s actual behaviour and ethics, it seems the company got one thing right early on: it recognised that it was moving in a morally charged space.
Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages, scaled up thanks to the MapReduce algorithm, and uses algorithms to choose adverts, drive cars and select nuances of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
Can we make and use algorithms more ethically?
1. The fact that you disagree with the author’s conclusion is not a reason for advising against publication. Quite the contrary, in fact. You have been selected as a peer reviewer because of your eminence, which means (let’s face it) your conservatism. Accordingly, if you think the conclusion is wrong, it is far more likely to generate interest and debate than if you agree with it.
2. A very long review will simply indicate to the editors that you’ve got too much time on your hands. And if you have, that probably indicates that you’re not publishing enough yourself. Accordingly, excessive length indicates that you’re not appropriately qualified. Continue reading
What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?
Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people have no understanding of the way Google individually curates content for its users. Its algorithms are secret. For the past year, and as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs it has reviewed. As was made apparent in a recent open letter from 80 academics urging Google to provide more transparency, the criteria being used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, and in what countries. The academics signing the letter point out that Google has been charged with the task of balancing privacy and access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.
This essay, by Oxford graduate student Callum Hackett, is one of the six shortlisted essays in the graduate category of the inaugural Oxford Uehiro Prize in Practical Ethics.
‘Giving Ourselves Away: online communication alters the self and society’
Invention is a fertile source of new ethical problems because creating new tools creates questions about how they might be used for better or worse. However, while every invention has its unique uses, the questions we must ask of them are often the same. For example, the harnessing of water and steam in the Industrial Revolution raised the same concern that robotics raises in contemporary manufacturing: how mechanization affects the economic empowerment of the working class. Naturally, there are fewer underlying ethical problems than there are inventions that cluster around them, but here I wish to explore the possibility that the mass adoption of the internet has brought with it a new problem with which we are just starting to engage. Specifically, while the internet poses a series of difficult questions, I will consider the implications of certain characteristics of online communication for the self, society and politics. Continue reading
Last Thursday’s Special Ethics Seminar at St Cross College was booked out very quickly, and the audience’s high expectations were fully justified. Rebecca Roache returned from Royal Holloway to Oxford to give a fascinating lecture on the nature and ethics of swearing. Roache has two initial questions: ‘Is there anything wrong with this fucking question?’, and ‘Is this one any f***ing better?’. (Her answers turn out to be, essentially, ‘No’ to both.) Continue reading
Facebook changed its privacy settings this January. For Europeans, the changes came into effect on January 30, 2015.
Apart from collecting data from your contacts, the information you provide, and everything you see and do on Facebook, the new data policy enables the Facebook app to use your GPS, Bluetooth, and WiFi signals to track your location at all times. Facebook may also collect information about payments you make (including billing, shipping, and contact details). Finally, the social media giant collects data from third-party partners, other Facebook companies (like Instagram and WhatsApp), and from websites and apps that use its services (websites that offer “Like” buttons and use Facebook Login).
The result? Facebook will now know where you live, work, and travel, what and where you shop, whom you are with, and roughly what your purchasing power is. It will have more information than anyone in your life about your habits, likes and dislikes, political inclinations, concerns, and, depending on the kind of use you make of the Internet, it might come to know about such sensitive issues as medical conditions and sexual preferences.
A closer look, however, might reveal the matter in a different light. Continue reading
The Samaritans have launched a controversial new app that alerts Twitter users when someone they ‘follow’ on the site tweets something that may indicate suicidal thoughts.
To use the app, named ‘Samaritan Radar’, Twitter members must visit the Samaritans’ website and choose to activate the app on their device. Having entered one’s Twitter details on the site to authorize the app, Samaritan Radar then scans the Twitter users that one ‘follows’, and uses an algorithm to identify phrases in tweets that suggest that the tweeter may be distressed. For example, the algorithm might identify tweets that involve phrases like “help me”, “I feel so alone” or “nobody cares about me”. If such a tweet is identified, an email is sent to the user who signed up to Samaritan Radar asking whether the tweet should be a cause for concern; if so, the app then offers advice on what to do next. Continue reading
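The Samaritans have not published Samaritan Radar’s actual algorithm, but the phrase-matching described above can be sketched in a few lines. The phrase list, function names, and matching rule below are illustrative assumptions, not the app’s real implementation:

```python
# Hypothetical sketch of the kind of phrase matching a tool like
# Samaritan Radar might perform. The real app's algorithm is not
# public; the phrases and names here are illustrative only.

DISTRESS_PHRASES = [
    "help me",
    "i feel so alone",
    "nobody cares about me",
]

def flag_tweet(text: str) -> bool:
    """Return True if the tweet contains a phrase suggesting distress."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in DISTRESS_PHRASES)

def scan_timeline(tweets):
    """Return the tweets from a followed user's timeline that would
    trigger an alert email to the subscriber."""
    return [tweet for tweet in tweets if flag_tweet(tweet)]
```

Simple substring matching of this kind will inevitably flag innocuous tweets too (“help me pick a film tonight”), which illustrates why automated detection of distress from short texts is hard and why such tools attract controversy.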