The Clickbait Candidate

By James Williams (@WilliamsJames_)
Note: This is a cross-post with Quillette magazine.

While ‘interrobang’ sounds like a technique Donald Trump might add to the Guantanamo Bay playbook, it in fact refers to a punctuation mark: a disused mashup of interrogation and exclamation that indicates shock, surprise, excitement, or disbelief. It looks like this: ‽ (a rectangle means your font doesn’t support the symbol). In view of how challenging it seems for anyone to articulate the fundamental weirdness of Trump’s proximity to the office of President of the United States, I propose that we resuscitate the interrobang, because our normal orthographic tools clearly are not up to the task.

Yet even more interrobang-able than the prospect of a Trump presidency is the fact that those opposing his candidacy seem to have almost no understanding of the media dynamics that have enabled it to rise and thrive. Trump is perhaps the most straightforward embodiment of the dynamics of the so-called ‘attention economy’—the pervasive, all-out war over our attention in which all of our media have now been conscripted—that the world has yet seen. He is one of the geniuses of our time in the art of attentional manipulation.

If we ever hope to have a societal conversation about the design ethics of the attention economy—especially the ways in which it incentivizes technology design to push certain buttons in our brains that are incompatible with the assumptions of democracy—now would be the time.

Continue reading

The Chinese pleasure room: ethics of technologically mediated interaction

The author of the webcomic Left Over Soup has proposed a sexual equivalent (or parody?) of Searle’s Chinese Room argument, posing some interesting questions about what sex, consent, and relationships mean when interaction is technologically mediated:

Continue reading

Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs to a given category. In essence, facial appearance is used to predict personality traits, types, or behaviors. The company claims to have already sold its technology to a homeland security agency to help identify terrorists. This does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.
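
To make concrete what “generating classifiers” involves, here is a minimal sketch of the kind of supervised pipeline such a system implies, written in Python with scikit-learn. Everything in it (the random placeholder descriptors, the binary labels, the choice of logistic regression) is my own illustrative assumption, not Faception’s actual method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data standing in for real inputs (my assumption, not Faception's):
# each row is a numeric face descriptor, each label marks category membership.
rng = np.random.default_rng(0)
X = rng.random((1000, 128))       # hypothetical 128-dim face descriptors
y = rng.integers(0, 2, 1000)      # hypothetical binary category labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On random features the score hovers around chance (~0.5), which is the
# point: a fitted classifier proves nothing without honest validation.
print("held-out accuracy:", clf.score(X_test, y_test))
```

The snake-oil worry is visible even in this toy: nothing in the pipeline stops you from training, and selling, a classifier whose accuracy is no better than a coin flip.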

Continue reading

Guest Post: Does Humanity Want Computers Making Moral Decisions?

Albert Barqué-Duran
Department of Psychology

A runaway trolley is approaching a fork in the tracks. If the trolley is allowed to run on its current track, a work crew of five will be killed. If the driver steers the trolley down the other branch, a lone worker will be killed. If you were driving this trolley, what would you do? What would a computer or robot driving this trolley do? Autonomous systems are coming whether people like it or not. Will they be ethical? Will they be good? And what do we mean by “good”?
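
As a toy illustration of how bluntly such a decision could be mechanized, here is a sketch of my own (not from the post) of a naive act-utilitarian rule:

```python
def steer(deaths_if_stay: int, deaths_if_switch: int) -> str:
    """Naive act-utilitarian rule: choose whichever branch kills fewer people."""
    return "switch" if deaths_if_switch < deaths_if_stay else "stay"

# The classic case: five workers ahead, one on the side track.
print(steer(deaths_if_stay=5, deaths_if_switch=1))  # -> switch
```

The triviality of the code is itself part of the philosophical point: everything contested about the trolley problem is hidden in the prior decision to count only deaths.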

Many agree that artificial moral agents are necessary and inevitable. Others say that the idea of artificial moral agents intensifies their distress with cutting-edge technology. There is something paradoxical in the idea that one could relieve the anxiety created by sophisticated technology with even more sophisticated technology. A tension exists between the fascination with technology and the anxiety it provokes. This anxiety could be explained by (1) the usual futurist fears about technology on a trajectory beyond human control, and (2) worries about what this technology might reveal about human beings themselves. The question is not what technology will be like in the future, but rather what we will be like, and what we are becoming, as we forge increasingly intimate relationships with our machines. What will be the human consequences of attempting to mechanize moral decision-making?

Continue reading

Virtually reality? The value of virtual activities and remote interaction

By Hannah Maslen    

The Oxford Martin School recently held a two-day symposium on virtual reality and immersive technologies. The aim was to examine a range of technologies, from online games to telepresence via a robot avatar, to consider the ways in which such technologies might affect our personal lives and our interactions with others.

These sorts of technologies reignite traditional philosophical debates concerning the value of different experiences – could a virtual trip to Rome ever be as valuable (objectively or subjectively) as a real trip to Rome? – and conceptual questions about whether certain virtual activities, say, ‘having a party’ or ‘attending a concert’, can ever really be the activity that the virtual environment is designed to simulate. The prospect of robotic telepresence presents particular ethical challenges pertaining to moral responsibility for action at a distance and ethical norms governing virtual acts.

In what follows, I introduce and discuss the concern that virtual experiences and activities are to some extent deficient in value, especially where this relates to the formation and maintenance of close personal relationships.

Continue reading

What’s the moral difference between ad blocking and piracy?

On 16 September Marco Arment, developer of Tumblr, Instapaper and Overcast, released a new iPhone and iPad app called Peace. It quickly shot to the top of the paid-app charts, but Arment began to have moral qualms about the app and its unexpected success, and two days after its release he pulled it from the App Store.

Why the qualms? For the full story, check out episode 136 of Arment’s excellent Accidental Tech Podcast and this blog post, but here’s my potted account: Peace is an ad blocker. It allows users to view webpages without advertisements. Similar software has been available for Macs and PCs for years (I use it to block some ads on my laptop), but Apple has only just made ad blockers possible on mobile devices, and Peace was one of a bunch of new apps to take advantage of this possibility. Although ad blockers help web surfers to avoid the considerable annoyance (and aesthetic unpleasantness) of webpage ads, they also come at a cost to content providers, potentially reducing their advertising revenue. According to Arment, the ethics of ad blocking is ‘complicated’, and although he still believes ad blockers should exist, and continues to use them, he thinks their downsides are serious enough that he wasn’t comfortable with being at the forefront of the ad blocking movement himself.
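
Mechanically, most ad blockers do something quite simple: match each outgoing request against a blocklist of known ad and tracking hosts, and drop the matches. A minimal sketch in Python (the hostnames are illustrative placeholders, not a real blocklist):

```python
from urllib.parse import urlparse

# Illustrative placeholder hosts, not an actual blocklist.
AD_HOSTS = {"ads.example.com", "tracker.example.net"}

def should_block(url: str) -> bool:
    """Return True if the request targets a known ad/tracking host."""
    return urlparse(url).hostname in AD_HOSTS

print(should_block("https://ads.example.com/banner.js"))    # True
print(should_block("https://news.example.org/story.html"))  # False
```

Real blockers, including the content-blocker extensions Apple introduced, express such rules declaratively rather than in imperative code, but the revenue question is the same: blocked request, no ad impression, no payment to the publisher.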

In explaining his reasons for withdrawing the app, Arment drew a parallel between ad blocking and piracy. He doesn’t claim that the analogy is perfect (in fact, he explicitly disavows this), and nor does he take it to be a knock-down objection to ad-blocking (presumably he believes that piracy is also morally complicated). But he does think there’s something to the comparison.

Like Arment, I think there are considerable moral similarities between ad blocking and piracy. But, also like Arment, I find ad blocking intuitively somewhat less morally problematic. This raises an obvious question: what’s the moral difference?

Continue reading

Don’t write evil algorithms

Google is said to have dropped the famous “Don’t be evil” slogan; actually, it is the holding company Alphabet that merely asks employees to “do the right thing”. Regardless of what one thinks about Google’s actual behaviour and ethics, the company seems to have got one thing right early on: a recognition that it was moving in a morally charged space.

Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages; it scaled up thanks to the MapReduce framework; and it uses algorithms for choosing adverts, driving cars, and selecting shades of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
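
For readers who have not met it, PageRank scores a page by the ranks of the pages linking to it, computed by power iteration over the link graph. A minimal sketch (my own simplified version, with a crude fix for dangling pages, not Google’s production code):

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9):
    """Power-iteration PageRank. adjacency[i, j] = 1 if page j links to page i."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=0)
    out_degree[out_degree == 0] = 1     # crude fix for dangling pages
    M = adjacency / out_degree          # each page splits its vote evenly
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * (M @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank

# Three-page toy web: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(pagerank(A))
```

Even this toy makes the moral point concrete: the damping constant and the treatment of dangling pages are design choices, and every such choice shifts which pages the world gets to see.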

Can we make and use algorithms more ethically?

Continue reading

ASMR and Absurdity

by Hannah Maslen and Rebecca Roache

In the past five years or so, a new phenomenon has emerged on the internet. ASMR videos allow you to spend around 40 minutes watching someone carefully unpack and repack a box, listening to a detailed demonstration of ten different notebooks, or observing the careful folding of several napkins. If you think this is something that almost nobody would want to do, think again: a search for the term ‘ASMR’ on YouTube returns over 1.4 million videos, the most popular of which has been viewed 11.7 million times.

What is ASMR?

Autonomous sensory meridian response, or ASMR, is the pseudo-scientific name of a phenomenon that, according to thousands of anecdotal reports, various news reports, and a recently published academic survey, loads of people experience. ASMR refers to a pleasant tingling sensation in response to certain visual and/or auditory stimuli. Common triggers include the kind of close personal attention you get when someone cuts your hair, certain sounds like tapping or brushing, and perhaps most bizarrely of all, observing someone doing something trivial very carefully and diligently.

Continue reading

Usable ethics: user design and ethics

by Anders Sandberg and Ben Levinstein

Over the past week we have been consumed by the intense final work phase just before the deadline of a big, complex report. The profanity density has been high, mostly aimed at Google, Microsoft and Apple. Not all of it was deserved, but it brought home the point that designing software carries moral implications.

Continue reading

What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?

Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people do not have any understanding of the way Google individually curates content for its users. Its algorithms are secret. For the past year, as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs that it has reviewed. As was made apparent in a recent open letter from 80 academics urging Google to be more transparent, the criteria being used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in which countries. The academics signing the letter point out how Google has been charged with the task of balancing privacy and access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.

Continue reading

