Virtually reality? The value of virtual activities and remote interaction

By Hannah Maslen    

The Oxford Martin School recently held a two-day symposium on virtual reality and immersive technologies. The aim was to examine a range of technologies, from online games to telepresence via a robot avatar, and to consider the ways in which such technologies might affect our personal lives and our interactions with others.

These sorts of technologies reignite traditional philosophical debates concerning the value of different experiences – could a virtual trip to Rome ever be as valuable (objectively or subjectively) as a real trip to Rome? – and conceptual questions about whether certain virtual activities, say, ‘having a party’ or ‘attending a concert’, can ever really be the activity that the virtual environment is designed to simulate. The prospect of robotic telepresence presents particular ethical challenges pertaining to moral responsibility for action at a distance and ethical norms governing virtual acts.

In what follows, I introduce and discuss the concern that virtual experiences and activities are to some extent deficient in value, especially where this relates to the formation and maintenance of close personal relationships. Continue reading

What’s the moral difference between ad blocking and piracy?

On 16 September Marco Arment, developer of Tumblr, Instapaper and Overcast, released a new iPhone and iPad app called Peace. It quickly shot to the top of the paid app charts, but Arment began to have moral qualms about the app and its unexpected success, and two days after its release he pulled it from the App Store.

Why the qualms? For the full story, check out episode 136 of Arment’s excellent Accidental Tech Podcast and this blog post, but here’s my potted account: Peace is an ad blocker. It allows users to view webpages without advertisements. Similar software has been available for Macs and PCs for years (I use it to block some ads on my laptop), but Apple has only just made ad blockers possible on mobile devices, and Peace was one of a bunch of new apps to take advantage of this possibility. Although ad blockers help web surfers to avoid the considerable annoyance (and aesthetic unpleasantness) of webpage ads, they also come at a cost to content providers, potentially reducing their advertising revenue. According to Arment, the ethics of ad blocking is ‘complicated’, and although he still believes ad blockers should exist, and continues to use them, he thinks their downsides are serious enough that he wasn’t comfortable with being at the forefront of the ad blocking movement himself.
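For readers wondering what an ad blocker actually does at a technical level, the rough idea is that the browser consults a blocklist before loading each resource on a page, and silently drops anything that matches a known ad-serving pattern. The sketch below is a hypothetical illustration of that matching step in Python; the domains and rules are invented, not taken from Peace or any real blocklist, and Apple's content-blocker API instead takes declarative rules, though the underlying idea of matching requests against a blocklist is the same.

```python
from urllib.parse import urlparse

# Hypothetical blocklist: real blockers ship curated lists with thousands of entries.
BLOCKED_DOMAINS = {"ads.example.com", "tracker.example.net"}

def should_block(request_url: str) -> bool:
    """Return True if a page resource should be dropped rather than loaded."""
    host = urlparse(request_url).hostname or ""
    # Match the blocked domain itself and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(should_block("https://ads.example.com/banner.js"))  # True: the ad never loads
print(should_block("https://example.org/article.html"))   # False: content loads as usual
```

The cost to content providers follows directly from this mechanism: every dropped request is an ad impression that is never counted and never paid for.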

In explaining his reasons for withdrawing the app, Arment drew a parallel between ad blocking and piracy. He doesn’t claim that the analogy is perfect (in fact, he explicitly disavows this), and nor does he take it to be a knock-down objection to ad-blocking (presumably he believes that piracy is also morally complicated). But he does think there’s something to the comparison.

Like Arment, I think there are considerable moral similarities between ad blocking and piracy. But, again like Arment, I find ad blocking, intuitively, to be somewhat less morally problematic. This raises an obvious question: what’s the moral difference?

Continue reading

Don’t write evil algorithms

Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely wants employees to “do the right thing”. Regardless of what one thinks about the actual behaviour and ethics of Google, it seems that it got one thing right early on: a recognition that it was moving in a morally charged space.

Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages; it scaled up thanks to MapReduce; and it uses algorithms for choosing adverts, driving cars and selecting shades of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
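To make the idea concrete, here is a minimal sketch of PageRank in Python: a page’s rank is, roughly, the probability that a ‘random surfer’ who keeps following links ends up on that page. The toy link graph, damping factor and iteration count below are illustrative assumptions, not anything Google actually uses, and production ranking involves far more than this.

```python
# Minimal PageRank by power iteration on a toy link graph (illustrative only).

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "home": ["news", "about"],
    "news": ["home"],
    "about": ["home", "news"],
}
print(pagerank(toy_web))  # pages that are linked to more often end up with higher rank
```

Even in this toy form, the moral point is visible: small choices in the algorithm (the damping factor, how dangling pages are treated) change which pages surface first.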

Can we make and use algorithms more ethically?

Continue reading

ASMR and Absurdity

by Hannah Maslen and Rebecca Roache

In the past five years or so, a new phenomenon has emerged on the internet. ASMR videos allow you to spend around 40 minutes watching someone carefully unpack and repack a box, listening to a detailed demonstration of ten different notebooks, or observing the careful folding of several napkins. If you think this is something that almost nobody would want to do, think again: a search on the term ‘ASMR’ on YouTube returns over 1.4 million videos, the most popular of which has been viewed 11.7 million times.

What is ASMR?

Autonomous sensory meridian response, or ASMR, is the pseudo-scientific name of a phenomenon that, according to thousands of anecdotal reports, various news reports, and a recently published academic survey, loads of people experience. ASMR refers to a pleasant tingling sensation in response to certain visual and/or auditory stimuli. Common triggers include the kind of close personal attention you get when someone cuts your hair, certain sounds like tapping or brushing, and perhaps most bizarrely of all, observing someone doing something trivial very carefully and diligently.

Continue reading

Usable ethics: user design and ethics

by Anders Sandberg and Ben Levinstein

Over the past week we have been consumed by the intense, final work phase just before the deadline of a big, complex report. The profanity density has been high, mostly aimed at Google, Microsoft and Apple. Not all of it was deserved, but it brought home the point that designing software carries moral implications. Continue reading

What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?

Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people do not have any understanding of the way Google individually curates content for its users. Its algorithms are secret. For the past year, and as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs it has reviewed. As was made apparent in a recent open letter from 80 academics urging Google to be more transparent, the criteria being used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in what countries. The academics signing the letter point out that Google has been charged with the task of balancing privacy and access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.

Continue reading

Speculating about technology in ethics

Many important discussions in practical ethics necessarily involve a degree of speculation about technology: the identification and analysis of ethical, social and legal issues is most usefully done in advance, to make sure that ethically-informed policy decisions do not lag behind technological development. Correspondingly, a move towards so-called ‘anticipatory ethics’ is often lauded as commendably vigilant, and to a certain extent this is justified. But, obviously, there are limits to how much ethicists – and even scientists, engineers and other innovators – can know about the actual characteristics of a freshly emerging or potential technology – precisely what mechanisms it will employ, what benefits it will confer and what risks it will pose, amongst other things. Quite simply, the less that is known about the technology, the more speculation has to occur.

In practical ethics discussions, we often find phrases such as ‘In the future there could be a technology that…’ or ‘We can imagine an extension of this technology so that…’, and ethical analysis is then carried out in relation to such prognoses. Sometimes these discussions are conducted with a slight discomfort at the extent to which features of the technological examples are imagined or extrapolated beyond current development – discomfort relating to the ability of ethicists to predict correctly the precise way technology will develop, and corresponding reservation about the value of any conclusions that emerge from discussion of, as yet, merely hypothetical innovation. A degree of hesitation in relation to very far-reaching speculation indeed seems justified. Continue reading

Humans are un-made by social media

‘Technology has made life different, but not necessarily more stressful’, says a recent article in the New York Times, summarising the findings of a study by researchers at the Pew Research Center and Rutgers University. It is often thought that frequent internet and social media use increases stress. Digital unplugging, along with losing weight and quitting smoking, is seen as a healthy thing to do. But, said the article, we needn’t worry so much. Frequent internet and social media users don’t have higher stress levels than less frequent users, and indeed women who frequently use Twitter, email and photo-sharing apps (and who use these media for life-event sharing more than men – who tend to be less self-disclosing online) scored 21% lower on the stress scale than women who did not.

I suggest that, far from being reassuring, these results are very sinister indeed. They indicate that internet technology (or at least something that has happened to humans at the same time as internet technology has been happening to them) has effected a tectonic transformation in the human constitution. The outsourcing, digitalization and trivialization of our relationships should make us stressed. If it doesn’t, something seriously bad has happened. The stress response enables us to react appropriately to threats. Switch it off, and we’re in danger. Only a damaged immune response fails to kick in when there are bacteria around. A tiger confined in a tiny concrete pen has lost a lot of its tigerishness if it doesn’t pace frustratedly up and down, its cortisol levels through the roof. Continue reading

Should we criminalise robotic rape and robotic child sexual abuse? Maybe

Guest Post by John Danaher (@JohnDanaher)

This article is being cross-posted at Philosophical Disquisitions

I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is likely to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail. Continue reading

Limiting the damage from cultures in collision

A Man in Black has a readable Twitter essay about the role of chan culture in Gamergate, and how the concepts of identity and debate inside a largish subculture can lead to an amazing uproar when they clash with outside cultures.

A brief recap: the Gamergate controversy was/is a fierce culture war originating in the video gaming community in August 2014 but soon ensnaring feminists, journalists, webcomics, discussion sites, political pundits, Intel… – essentially anybody touching this tar-baby of a controversy, regardless of whether they understood it or not. It has everything: media critique, feminism, sexism, racism, sealioning, cyberbullying, doxing, death threats, wrecked careers: you name it. From an outside perspective it has been a train wreck that is hard to look away from. Rarely has a debate flared up so quickly, involved so many, and generated so much vituperation. If this is the future of broad debates, our civilization is doomed.

This post is not so much about the actual content of the controversy as about the point made by A Man in Black: one contributing factor to the disaster has been that a fairly large online subculture has radically divergent standards of debate and identity, and when it came into contact with the larger world, chaos erupted. How should we handle this? Continue reading