Information Ethics

What’s the moral difference between ad blocking and piracy?

On 16 September Marco Arment, developer of Tumblr, Instapaper and Overcast, released a new iPhone and iPad app called Peace. It quickly shot to the top of the paid app charts, but Arment began to have moral qualms about the app and its unexpected success, and two days after its release he pulled it from the App Store.

Why the qualms? For the full story, check out episode 136 of Arment’s excellent Accidental Tech Podcast and this blog post, but here’s my potted account: Peace is an ad blocker. It allows users to view webpages without advertisements. Similar software has been available for Macs and PCs for years (I use it to block some ads on my laptop), but Apple has only just made ad blockers possible on mobile devices, and Peace was one of a bunch of new apps to take advantage of this possibility. Although ad blockers help web surfers to avoid the considerable annoyance (and aesthetic unpleasantness) of webpage ads, they also come at a cost to content providers, potentially reducing their advertising revenue. According to Arment, the ethics of ad blocking is ‘complicated’, and although he still believes ad blockers should exist, and continues to use them, he thinks their downsides are serious enough that he wasn’t comfortable with being at the forefront of the ad blocking movement himself.
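
For the curious, the mechanism Apple has introduced is a content-blocking extension: the extension hands Safari a declarative list of trigger/action rules, which Safari applies before a page loads. Here is a minimal sketch of what such a rule set looks like, with an illustrative filter pattern rather than anything taken from Peace itself:

```python
# A minimal sketch of an iOS content-blocker rule set. Safari content
# blockers consume a JSON list of trigger/action rules; the filter
# pattern below is illustrative, not Peace's actual rule list.
import json

rules = [
    {
        "trigger": {"url-filter": ".*doubleclick\\.net.*"},  # match ad-serving URLs
        "action": {"type": "block"},  # stop the request before it loads
    }
]
print(json.dumps(rules, indent=2))
```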

In explaining his reasons for withdrawing the app, Arment drew a parallel between ad blocking and piracy. He doesn’t claim that the analogy is perfect (in fact, he explicitly disavows this), and nor does he take it to be a knock-down objection to ad blocking (presumably he believes that piracy is also morally complicated). But he does think there’s something to the comparison.

Like Arment, I think there are considerable moral similarities between ad blocking and piracy. But, also like Arment, I find ad blocking, intuitively, to be somewhat less morally problematic. This raises an obvious question: what’s the moral difference?


Don’t write evil algorithms

Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the new holding company, Alphabet, that merely wants employees to “do the right thing”. Regardless of what one thinks about Google’s actual behaviour and ethics, the company seems to have got one thing right early on: it recognised that it was moving in a morally charged space.

Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages; it scaled up thanks to MapReduce algorithms; and it uses algorithms for choosing adverts, driving cars and selecting shades of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
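
To make the basic idea concrete, here is a minimal sketch of the power iteration at the heart of PageRank. The damping factor and toy link graph are illustrative assumptions; Google’s production ranking is, of course, vastly more elaborate:

```python
# Toy PageRank: rank pages by the stationary distribution of a random
# surfer who follows links with probability d and otherwise jumps to a
# random page. Illustrative only.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:  # share this page's rank among its link targets
                for target in outgoing:
                    new_rank[target] += d * rank[page] / len(outgoing)
            else:  # dangling page: spread its rank over all pages
                for p in pages:
                    new_rank[p] += d * rank[page] / n
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c" ranks highest: both other pages link to it
```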

Can we make and use algorithms more ethically?


A Code of Conduct for Peer Reviewers in the Humanities and Social Sciences

1. The fact that you disagree with the author’s conclusion is not a reason for advising against publication. Quite the contrary, in fact. You have been selected as a peer reviewer because of your eminence, which means (let’s face it) your conservatism. Accordingly, if you think the conclusion is wrong, it is far more likely to generate interest and debate than if you agree with it.

2. A very long review will simply indicate to the editors that you’ve got too much time on your hands. And if you have, that probably indicates that you’re not publishing enough yourself. Accordingly, excessive length indicates that you’re not appropriately qualified.

What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?

Google has become a service that one can hardly do without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people have no understanding of the way Google individually curates content for its users. Its algorithms are secret. For the past year, as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs it has reviewed.

As was made apparent in a recent open letter in which 80 academics urged Google to be more transparent, the criteria being used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in which countries. The academics signing the letter point out that Google has been charged with the task of balancing privacy against access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.


Oxford Uehiro Prize in Practical Ethics: Giving Ourselves Away, by Callum Hackett

This essay, by Oxford graduate student Callum Hackett, is one of the six shortlisted essays in the graduate category of the inaugural Oxford Uehiro Prize in Practical Ethics.

‘Giving Ourselves Away: online communication alters the self and society’

Invention is a fertile source of new ethical problems, because creating new tools raises questions about how they might be used for better or worse. However, while every invention has its unique uses, the questions we must ask of them are often the same. For example, the harnessing of water and steam in the Industrial Revolution raised the same concern as robotics does in contemporary manufacturing: how mechanization affects the economic empowerment of the working class. Naturally, there are fewer underlying ethical problems than there are inventions clustering around them, but here I wish to explore the possibility that the mass adoption of the internet has brought with it a new problem with which we are just starting to engage. Specifically, while the internet poses a series of difficult questions, I will consider the implications of certain characteristics of online communication for the self, society and politics.

Humans are un-made by social media

‘Technology has made life different, but not necessarily more stressful’, says a recent article in the New York Times, summarising the findings of a study by researchers at the Pew Research Center and Rutgers University. It is often thought that frequent internet and social media use increases stress. Digital unplugging, along with losing weight and quitting smoking, is seen as a healthy thing to do. But, says the article, we needn’t worry so much. Frequent internet and social media users don’t have higher stress levels than less frequent users; indeed, women who frequently use Twitter, email and photo-sharing apps (women use these media to share life events more than men, who tend to be less self-disclosing online) scored 21% lower on the stress scale than women who did not.

I suggest that, far from being reassuring, these results are very sinister indeed. They indicate that internet technology (or at least something that has happened to humans at the same time as internet technology has been happening to them) has effected a tectonic transformation in the human constitution. The outsourcing, digitalization and trivialization of our relationships should make us stressed. If it doesn’t, something seriously bad has happened. The stress response enables us to react appropriately to threats. Switch it off, and we’re in danger. Only a damaged immune response fails to kick in when there are bacteria around. A tiger confined in a tiny concrete pen has lost a lot of its tigerishness if it doesn’t pace frustratedly up and down, its cortisol levels through the roof.

On Swearing (lecture by Rebecca Roache)

Last Thursday’s Special Ethics Seminar at St Cross College was booked out very quickly, and the audience’s high expectations were fully justified. Rebecca Roache returned from Royal Holloway to Oxford to give a fascinating lecture on the nature and ethics of swearing. Roache has two initial questions: ‘Is there anything wrong with this fucking question?’, and ‘Is this one any f***ing better?’. (Her answers turn out to be, essentially, ‘No’ to both.)

Facebook’s new Terms of Service: Choosing between your privacy and your relationships

Facebook changed its privacy settings this January. For Europeans, the changes came into effect on January 30, 2015.

Apart from collecting data from your contacts, from the information you provide, and from everything you see and do on Facebook, the new data policy enables the Facebook app to use your GPS, Bluetooth, and WiFi signals to track your location at all times. Facebook may also collect information about payments you make (including billing, shipping, and contact details). Finally, the social media giant collects data from third-party partners, from other Facebook companies (like Instagram and WhatsApp), and from websites and apps that use its services (websites that offer “Like” buttons and use Facebook Login).

The result? Facebook will now know where you live, work, and travel, what and where you shop, whom you are with, and roughly what your purchasing power is. It will have more information than anyone in your life about your habits, likes and dislikes, political inclinations, concerns, and, depending on the kind of use you make of the Internet, it might come to know about such sensitive issues as medical conditions and sexual preferences.

To Facebook’s credit, its new terms of service, although ambiguous, are clearer than most terms of service one finds on the Internet. Despite the intrusiveness of the privacy policy, one may look benevolently on Facebook: if its terms are comparatively explicit and clear, if users know about them and give their consent, and if in turn the company provides a valuable free service to more than a billion users, why should the new privacy policy be frowned upon? After all, if people don’t like the new terms, they are not forced to use Facebook: they are free not to sign up or, if they are current users, to delete their accounts.

A closer look, however, might reveal the matter in a different light.

Limiting the damage from cultures in collision

A Man in Black has a readable Twitter essay about the role of chan culture in Gamergate, and about how the concepts of identity and debate inside a largish subculture can lead to an amazing uproar when they clash with outside cultures.

A brief recap: the Gamergate controversy was (and is) a fierce culture war that originated in the video gaming community in August 2014 but soon ensnared feminists, journalists, webcomics, discussion sites, political pundits, Intel… essentially anybody who touched this tar-baby of a controversy, whether they understood it or not. It has everything: media critique, feminism, sexism, racism, sealioning, cyberbullying, doxing, death threats, wrecked careers: you name it. From an outside perspective it has been a train wreck that is hard to look away from. Rarely has a debate flared up so quickly, involved so many, and generated so much vituperation. If this is the future of broad debates, our civilization is doomed.

This post is not so much about the actual content of the controversy as about the point made by A Man in Black: one contributing factor to the disaster has been that a fairly large online subculture has radically divergent standards of debate and identity, and when it came into contact with the larger world, chaos erupted. How should we handle this?

Twitter, Apps, and Depression

The Samaritans have launched a controversial new app that alerts Twitter users when someone they ‘follow’ on the site tweets something that may indicate suicidal thoughts.

To use the app, named ‘Samaritans Radar’, Twitter members must visit the Samaritans’ website and choose to activate the app on their device. Once a user has entered their Twitter details on the site to authorize the app, Samaritans Radar scans the Twitter users they ‘follow’, and uses an algorithm to identify phrases in tweets that suggest the tweeter may be distressed. For example, the algorithm might identify tweets containing phrases like “help me”, “I feel so alone” or “nobody cares about me”. If such a tweet is identified, an email is sent to the user who signed up to Samaritans Radar asking whether the tweet should be a cause for concern; if so, the app will then offer advice on what to do next.
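
The Samaritans have not published their matching algorithm, but the description above suggests something like simple phrase matching over a timeline. Here is a purely illustrative sketch under that assumption; the phrase list and function names are mine, not the app’s actual code:

```python
# Illustrative phrase-based flagging, as described above. The phrase
# list and matching logic are assumptions, not Samaritans' actual code.

DISTRESS_PHRASES = ["help me", "i feel so alone", "nobody cares about me"]

def flag_tweet(text):
    """Return the distress phrases found in a tweet, if any."""
    lowered = text.lower()
    return [phrase for phrase in DISTRESS_PHRASES if phrase in lowered]

def scan_timeline(tweets):
    """Yield (tweet, matches) pairs for tweets that may signal distress."""
    for tweet in tweets:
        matches = flag_tweet(tweet)
        if matches:
            yield tweet, matches

for tweet, matches in scan_timeline([
    "Great gig last night!",
    "Nobody cares about me any more",
]):
    print(f"Possible cause for concern: {tweet!r} (matched {matches})")
```

A real system would also have to contend with negation, sarcasm and quotation, which is precisely where false positives, and the ethical worries about such scanning, begin.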