Usable ethics: user design and ethics
by Anders Sandberg and Ben Levinstein
Over the past week we have been consumed by the intense, final work phase just before the deadline of a big, complex report. The profanity-density has been high, mostly aimed at Google, Microsoft and Apple. Not all of it was deserved, but it brought home the issue that designing software carries moral implications. Continue reading
What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?
Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people have little understanding of how Google individually curates content for its users. Its algorithms are secret. For the past year, as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs it has reviewed. As was made apparent in a recent open letter from 80 academics urging Google to be more transparent, the criteria used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in which countries. The academics signing the letter point out that Google has been charged with the task of balancing privacy against access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.
Speculating about technology in ethics
Many important discussions in practical ethics necessarily involve a degree of speculation about technology: the identification and analysis of ethical, social and legal issues is most usefully done in advance, to make sure that ethically informed policy decisions do not lag behind technological development. Correspondingly, a move towards so-called ‘anticipatory ethics’ is often lauded as commendably vigilant, and to a certain extent this is justified. But, obviously, there are limits to how much ethicists – and even scientists, engineers and other innovators – can know about the actual characteristics of a freshly emerging or potential technology: precisely what mechanisms it will employ, what benefits it will confer and what risks it will pose, amongst other things. Quite simply, the less that is known about a technology, the more speculation is required.
In practical ethics discussions, we often find phrases such as ‘In the future there could be a technology that…’ or ‘We can imagine an extension of this technology so that…’, and ethical analysis is then carried out in relation to such prognoses. Sometimes these discussions are conducted with a slight discomfort at the extent to which features of the technological examples are imagined or extrapolated beyond current development: discomfort about ethicists’ ability to predict correctly how technology will develop, and corresponding reservations about the value of any conclusions that emerge from discussing, as yet, merely hypothetical innovations. A degree of hesitation about very far-reaching speculation does indeed seem justified. Continue reading
Should we criminalise robotic rape and robotic child sexual abuse? Maybe
Guest Post by John Danaher (@JohnDanaher)
This article is being cross-posted at Philosophical Disquisitions
I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is likely to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail. Continue reading
Twitter, Apps, and Depression
The Samaritans have launched a controversial new app that alerts Twitter users when someone they ‘follow’ on the site tweets something that may indicate suicidal thoughts.
To use the app, named ‘Samaritan Radar’, Twitter members must visit the Samaritans’ website and choose to activate the app on their device. Once one has entered one’s Twitter details on the site to authorise the app, Samaritan Radar scans the Twitter users that one ‘follows’, and uses an algorithm to identify phrases in tweets that suggest the tweeter may be distressed. For example, the algorithm might identify tweets containing phrases like “help me”, “I feel so alone” or “nobody cares about me”. If such a tweet is identified, an email will be sent to the user who signed up to Samaritan Radar asking whether the tweet should be a cause for concern; if so, the app will then offer advice on what to do next. Continue reading
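To make the flagging mechanism described in that excerpt concrete, here is a minimal, purely illustrative sketch of phrase-based matching in Python. The example phrases are taken from the paragraph above; the phrase list, function names and simple substring rule are assumptions for illustration only, not the Samaritans' actual, unpublished algorithm.

```python
# Purely illustrative sketch: flag tweets containing phrases associated with distress.
# The phrase list and matching rule are hypothetical; Samaritan Radar's real
# algorithm has not been published.

DISTRESS_PHRASES = [
    "help me",
    "i feel so alone",
    "nobody cares about me",
]


def flag_tweet(text: str) -> bool:
    """Return True if the tweet contains any of the watched phrases."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in DISTRESS_PHRASES)


def scan_timeline(tweets: list[str]) -> list[str]:
    """Return the subset of tweets that would trigger an alert email."""
    return [t for t in tweets if flag_tweet(t)]


if __name__ == "__main__":
    sample = [
        "Lovely weather today",
        "Nobody cares about me anymore",
    ]
    print(scan_timeline(sample))  # -> ['Nobody cares about me anymore']
```

Even this toy version shows why such matching is ethically fraught: a bare substring test cannot distinguish genuine distress from irony, song lyrics or quotation, so any real system would trade off false alarms against missed cases.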
On the ‘right to be forgotten’
This week, a landmark ruling from the European Court of Justice held that a Directive of the European Parliament entailed that Internet search engines could, in some circumstances, be legally required (on request) to remove links to personal data that have become irrelevant or inadequate. The justification underlying this decision has been dubbed the ‘right to be forgotten’.
The ruling came in response to a case in which a Spanish gentleman (I was about to write his name but then realized that to do so would be against the spirit of the ruling) brought a complaint against Google. He objected to the fact that if people searched for his name in Google Search, the list of results displayed links to information about his house being repossessed in recovery of social security debts that he owed. The man requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that the data no longer appeared in the search results. His principal argument was that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant. Continue reading
“Whoa though, does it ever burn” – Why the consumer market for brain stimulation devices will be a good thing, as long as it is regulated
In many places around the world, people are connecting electrodes to their heads to electrically stimulate their brains. Their intention is often to boost various aspects of mental performance for skill development, gaming or just to see what happens. With the emergence of a more accessible market for glossy, well-branded brain stimulation devices, it is likely that more and more people will consider trying them out.
Transcranial direct current stimulation (tDCS) is a brain stimulation technique that involves passing a small electrical current between two or more electrodes positioned on the left and right sides of the scalp. The current excites the neurons, increasing their spontaneous activity. Although the first whole-unit devices are being marketed primarily at gamers, there is a well-established DIY tDCS community whose members have been using the principles of tDCS to experiment with home-built devices, which they use for purposes ranging from self-treatment of depression to improvement of memory, alertness, motor skills and reaction times.
Until now, non-clinical tDCS has been the preserve of those willing to invest time and nerve into researching which components to buy, how to attach wires to batteries and electrodes to wires, and how best to avoid burnt scalps, headaches, visual disturbances and even passing out. The tDCS Reddit forum currently has 3,763 subscribed readers who swap stories about best techniques, bad experiences and apparent successes. Many seem to rely on other posters to answer technical questions and to provide reassurance about which side effects are ‘normal’. Worryingly, the answers they receive are often conflicting. Continue reading