
Web/Tech

What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?

Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people have no understanding of the way Google individually curates content for its users. Its algorithms are secret. For the past year, as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs it has reviewed. As was made apparent in a recent open letter in which 80 academics urged Google to be more transparent, the criteria being used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in which countries. The academics signing the letter point out that Google has been charged with the task of balancing privacy and access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.


Speculating about technology in ethics

Many important discussions in practical ethics necessarily involve a degree of speculation about technology: the identification and analysis of ethical, social and legal issues is most usefully done in advance, to make sure that ethically informed policy decisions do not lag behind technological development. Correspondingly, a move towards so-called ‘anticipatory ethics’ is often lauded as commendably vigilant, and to a certain extent this is justified. But, obviously, there are limits to how much ethicists – and even scientists, engineers and other innovators – can know about the actual characteristics of a freshly emerging or potential technology – precisely what mechanisms it will employ, what benefits it will confer and what risks it will pose, amongst other things. Quite simply, the less that is known about the technology, the more speculation has to occur.

In practical ethics discussions, we often find phrases such as ‘In the future there could be a technology that…’ or ‘We can imagine an extension of this technology so that…’, and ethical analysis is then carried out in relation to such prognoses. Sometimes these discussions are conducted with a slight discomfort at the extent to which features of the technological examples are imagined or extrapolated beyond current development – discomfort relating to the ability of ethicists to predict correctly the precise way technology will develop, and corresponding reservation about the value of any conclusions that emerge from discussion of, as yet, merely hypothetical innovation. A degree of hesitation in relation to very far-reaching speculation indeed seems justified.

Humans are un-made by social media

‘Technology has made life different, but not necessarily more stressful’, says a recent article in the New York Times, summarising the findings of a study by researchers at the Pew Research Center and Rutgers University. It is often thought that frequent internet and social media use increases stress. Digital unplugging, along with losing weight and quitting smoking, is seen as a healthy thing to do. But, said the article, we needn’t worry so much. Frequent internet and social media users don’t have higher stress levels than less frequent users, and indeed women who frequently use Twitter, email and photo-sharing apps (and who use these media for life-event sharing more than men – who tend to be less self-disclosing online) scored 21% lower on the stress scale than women who did not.

I suggest that, far from being reassuring, these results are very sinister indeed. They indicate that internet technology (or at least something that has happened to humans at the same time as internet technology has been happening to them) has effected a tectonic transformation in the human constitution. The outsourcing, digitalization and trivialization of our relationships should make us stressed. If it doesn’t, something seriously bad has happened. The stress response enables us to react appropriately to threats. Switch it off, and we’re in danger. Only a damaged immune response fails to kick in when there are bacteria around. A tiger confined in a tiny concrete pen has lost a lot of its tigerishness if it doesn’t pace frustratedly up and down, its cortisol levels through the roof.

Should we criminalise robotic rape and robotic child sexual abuse? Maybe

Guest Post by John Danaher (@JohnDanaher)

This article is being cross-posted at Philosophical Disquisitions

I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is likely to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail.

Limiting the damage from cultures in collision

A Man in Black has a readable Twitter essay about the role of chan culture in Gamergate, and how the concepts of identity and debate inside a largish subculture can lead to an amazing uproar when they clash with outside cultures.

A brief recap: the Gamergate controversy was/is a fierce culture war originating in the video gaming community in August 2014 but soon ensnaring feminists, journalists, webcomics, discussion sites, political pundits, Intel… – essentially anybody touching this tar-baby of a controversy, whether they understood it or not. It has everything: media critique, feminism, sexism, racism, sealioning, cyberbullying, doxing, death threats, wrecked careers: you name it. From an outside perspective it has been a train wreck that is hard to look away from. Rarely has a debate flared up so quickly, involved so many, and generated so much vituperation. If this is the future of broad debates, our civilization is doomed.

This post is not so much about the actual content of the controversy as about the point made by A Man in Black: one contributing factor to the disaster has been that a fairly large online subculture has radically divergent standards of debate and identity, and when it came into contact with the larger world, chaos erupted. How should we handle this?

Twitter, Apps, and Depression

The Samaritans have launched a controversial new app that alerts Twitter users when someone they ‘follow’ on the site tweets something that may indicate suicidal thoughts.

To use the app, named ‘Samaritans Radar’, Twitter members must visit the Samaritans’ website and choose to activate the app on their device. Once one has entered one’s Twitter details on the site to authorize the app, Samaritans Radar scans the Twitter users that one ‘follows’, and uses an algorithm to identify phrases in tweets that suggest that the tweeter may be distressed. For example, the algorithm might identify tweets that involve phrases like “help me”, “I feel so alone” or “nobody cares about me”. If such a tweet is identified, an email is sent to the user who signed up to Samaritans Radar asking whether the tweet should be a cause for concern; if so, the app will then offer advice on what to do next.
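The Samaritans have not published the algorithm, but the description above suggests something along the lines of a phrase-matching scan over recent tweets. Purely as an illustration, here is a minimal Python sketch of that kind of scan; the phrase list, function name and example tweets are assumptions for the sake of the example, not the app’s actual workings.

```python
# Illustrative sketch of a phrase-matching scan over tweets.
# The phrase list and matching logic are assumptions; the real app's
# algorithm has not been made public.

DISTRESS_PHRASES = ["help me", "i feel so alone", "nobody cares about me"]

def flag_tweets(tweets):
    """Return the tweets containing any phrase associated with distress."""
    flagged = []
    for tweet in tweets:
        text = tweet.lower()
        if any(phrase in text for phrase in DISTRESS_PHRASES):
            flagged.append(tweet)
    return flagged

# Example use: in the real app, a match would trigger an email to the
# subscriber rather than a printed message.
recent = ["Great match today!", "I feel so alone tonight"]
for tweet in flag_tweets(recent):
    print(f"Possible cause for concern: {tweet!r}")
```

Even on this charitable reading, simple phrase matching of this kind is bound to produce both false alarms and missed cases, which is part of what makes the app controversial.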

On the ‘right to be forgotten’

This week, a landmark ruling from the European Court of Justice held that a Directive of the European Parliament entailed that Internet search engines could, in some circumstances, be legally required (on request) to remove links to personal data that have become irrelevant or inadequate. The justification underlying this decision has been dubbed the ‘right to be forgotten’.

The ruling came in response to a case in which a Spanish gentleman (I was about to write his name but then realized that to do so would be against the spirit of the ruling) brought a complaint against Google. He objected to the fact that if people searched for his name in Google Search, the list of results displayed links to information about his house being repossessed in recovery of social security debts that he owed. The man requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that the data no longer appeared in the search results. His principal argument was that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant.

“Whoa though, does it ever burn” – Why the consumer market for brain stimulation devices will be a good thing, as long as it is regulated

In many places around the world, there are people connecting electrodes to their heads to electrically stimulate their brains. Their intention is often to boost various aspects of mental performance for skill development, for gaming, or just to see what happens. With the emergence of a more accessible market for glossy, well-branded brain stimulation devices, it is likely that more and more people will consider trying them out.

Transcranial direct current stimulation (tDCS) is a brain stimulation technique which involves passing a small electrical current between two or more electrodes positioned on the left and right side of the scalp. The current modulates the excitability of the underlying neurons, typically increasing their spontaneous activity beneath the positive electrode. Although the first whole-unit devices are being marketed primarily at gamers, there is a well-established DIY tDCS community, members of which have been using the principles of tDCS to experiment with home-built devices, which they use for purposes ranging from self-treatment of depression to improvement of memory, alertness, motor skills and reaction times.

Until now, non-clinical tDCS has been the preserve of those willing to invest time and nerve in researching which components to buy, how to attach wires to batteries and electrodes to wires, and how best to avoid burnt scalps, headaches, visual disturbances and even passing out. The tDCS Reddit forum currently has 3,763 subscribed readers who swap stories about best techniques, bad experiences and apparent successes. Many seem to rely on other posters to answer technical questions and to seek reassurance about which side effects are ‘normal’. Worryingly, the answers they receive are often conflicting.
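To give a concrete sense of the component-level reasoning that DIY builders trade on these forums, here is a minimal, purely illustrative Python sketch of the Ohm’s-law arithmetic behind current limiting. The battery voltage, target current and scalp resistance are assumed nominal values, not measurements, and this is an illustration of why such calculations matter rather than build guidance.

```python
# Illustrative only: the Ohm's-law arithmetic behind current limiting in
# DIY tDCS circuits. All values below are assumed nominal figures.

def series_resistor_for_target(voltage, target_current, load_resistance):
    """Series resistance (ohms) needed so that at most target_current (amps)
    flows through the load, using I = V / R_total."""
    total_needed = voltage / target_current
    return max(0.0, total_needed - load_resistance)

# Assumed values: two 9 V batteries in series, a typical 2 mA target current,
# and a nominal 5 kilo-ohm electrode-to-electrode resistance through the scalp.
resistor = series_resistor_for_target(voltage=18.0,
                                      target_current=0.002,
                                      load_resistance=5000.0)
print(f"Series resistor needed: {resistor:.0f} ohms")

# If electrode contact improves and scalp resistance falls, the current rises,
# which is one reason commercial devices regulate current rather than voltage.
```

The point of the sketch is that the “safe” answer depends on quantities (such as scalp resistance) that vary from person to person and session to session, which is exactly the sort of detail on which forum advice tends to conflict.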