Web/Tech

Twitter, Apps, and Depression

The Samaritans have launched a controversial new app that alerts Twitter users when someone they ‘follow’ on the site tweets something that may indicate suicidal thoughts.

To use the app, named ‘Samaritan Radar’, Twitter members must visit the Samaritans’ website and choose to activate the app on their device. Once a user has entered their Twitter details on the site to authorize the app, Samaritan Radar scans the accounts that the user ‘follows’, and uses an algorithm to identify phrases in tweets suggesting that the tweeter may be distressed. For example, the algorithm might flag tweets containing phrases like “help me”, “I feel so alone” or “nobody cares about me”. If such a tweet is identified, an email is sent to the user who signed up to Samaritan Radar asking whether the tweet should be a cause for concern; if so, the app then offers advice on what to do next.
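The Samaritans have not published the app’s algorithm, but the behaviour described above amounts to phrase matching over a timeline. A rough illustration (the phrase list and data format here are assumptions for the sketch, not the app’s actual logic):

```python
# Illustrative only: a naive phrase-matching scan of the kind described
# above. The phrase list and data format are assumptions; the Samaritans'
# actual algorithm has not been published.
DISTRESS_PHRASES = ["help me", "i feel so alone", "nobody cares about me"]

def flag_tweet(text: str) -> bool:
    """Return True if the tweet contains a phrase suggesting distress."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in DISTRESS_PHRASES)

def scan_timeline(tweets: list[str]) -> list[str]:
    """Return the tweets that would trigger an alert email."""
    return [t for t in tweets if flag_tweet(t)]

print(scan_timeline(["off to the gym", "nobody cares about me any more"]))
# -> ['nobody cares about me any more']
```

Matching this naive would inevitably flag song lyrics, jokes and sarcasm as well as genuine distress, which is presumably why the alert email asks the recipient to judge whether the tweet is a real cause for concern. Continue reading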

When Cupid fires arrows double-blind: implicit informed agreement for online research?

A while ago Facebook made the news for experimenting on its users, leading to a fair bit of grumbling. Now the dating site OkCupid has proudly outed itself: We Experiment On Human Beings! Unethical or not?

Continue reading

On the ‘right to be forgotten’

This week, a landmark ruling from the European Court of Justice held that a Directive of the European Parliament entailed that Internet search engines could, in some circumstances, be legally required (on request) to remove links to personal data that have become irrelevant or inadequate. The justification underlying this decision has been dubbed the ‘right to be forgotten’.

The ruling came in response to a case in which a Spanish gentleman (I was about to write his name but then realized that to do so would be against the spirit of the ruling) brought a complaint against Google. He objected to the fact that if people searched for his name in Google Search, the list of results displayed links to information about his house being repossessed in recovery of social security debts that he owed. The man requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that the data no longer appeared in the search results. His principal argument was that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant. Continue reading

“Whoa though, does it ever burn” – Why the consumer market for brain stimulation devices will be a good thing, as long as it is regulated

In many places around the world, people are connecting electrodes to their heads to electrically stimulate their brains. Their aim is often to boost some aspect of mental performance, whether for skill development, for gaming, or just to see what happens. With the emergence of a more accessible market for glossy, well-branded brain stimulation devices, it is likely that more and more people will consider trying them out.

Transcranial direct current stimulation (tDCS) is a brain stimulation technique that involves passing a small electrical current between two or more electrodes positioned on the scalp. The current modulates the excitability of the underlying neurons, typically increasing their spontaneous activity under the positive electrode (the anode). Although the first whole-unit devices are being marketed primarily at gamers, there is a well-established DIY tDCS community, whose members have been using the principles of tDCS to experiment with home-built devices for purposes ranging from self-treatment of depression to improvement of memory, alertness, motor skills and reaction times.
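One reason the DIY scene worries observers is that safety turns on easily botched parameters. What matters at the skin is not just the current but the current density: the current divided by the electrode’s contact area. A small illustrative calculation (not medical guidance; the figures are typical values from published tDCS protocols, not any particular device’s specification):

```python
# Illustrative arithmetic only, not medical guidance. Published tDCS
# protocols typically pass 1-2 mA through sponge electrodes of 25-35 cm^2.
def current_density(current_ma: float, electrode_area_cm2: float) -> float:
    """Current density at the skin, in mA/cm^2."""
    return current_ma / electrode_area_cm2

print(round(current_density(2.0, 35.0), 3))  # 0.057 mA/cm^2: a common protocol
print(round(current_density(2.0, 3.0), 3))   # 0.667 mA/cm^2: same current, small electrode
```

A home builder who shrinks the electrodes without reducing the current concentrates that current more than tenfold at the scalp, exactly the kind of detail behind the burns alluded to in the post’s title.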

Until now, non-clinical tDCS has been the preserve of those willing to invest time and nerve into researching which components to buy, how to attach wires to batteries and electrodes to wires, and how best to avoid burnt scalps, headaches, visual disturbances and even passing out. The tDCS Reddit forum currently has 3,763 subscribed readers who swap stories about best techniques, bad experiences and apparent successes. Many seem to be relying on other posters to answer technical questions and to seek reassurance about which side effects are ‘normal’. Worryingly, the answers they receive are often conflicting. Continue reading

Computer vision and emotional privacy

A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.

Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even with practice, human accuracy increased only to 55%.
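In outline, the system the study describes is a standard supervised learning pipeline: extract facial-movement features from each clip, then train a classifier to separate genuine from faked expressions. The sketch below shows only the shape of that pipeline; the features and labels are random placeholders, not the study’s data, and its exact feature set and model may differ:

```python
# Pipeline shape only: per-clip facial-movement features feeding a standard
# classifier. X and y are random placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 20))    # 48 clips, 20 facial-movement statistics each
y = rng.integers(0, 2, size=48)  # 1 = genuine pain, 0 = faked

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on noise; the study reports 0.85
```

The classifier itself is unremarkable; the striking result is that automatically measured facial movements carry a signal that human observers, even with practice, largely miss.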

The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:

In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.

The possibility of using this technology to detect when someone’s emotional expressions are genuine or not raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues: Continue reading

Innovation’s low-hanging fruits: on the demand or supply sides?

Cross-posted at Less Wrong.

This is an addendum to a previous post, which argued that we may be underestimating the impact of innovation because we have so much of it. I noted that we underestimated the innovative aspect of the CD because many other technologies partially overlapped with it, such as television, radio, cinema, the iPod, the Walkman, the landline phone, the mobile phone, the laptop, the VCR and TiVo. Without these overlapping technologies, we would see the CD’s true potential and rate it more highly as an innovation. Many different technologies can substitute for one another.

But this argument brings out a salient point: if so many innovations overlap or potentially overlap, then there must be many more innovations than purposes for innovations. Tyler Cowen made the interesting point that the internet isn’t as innovative as the flushing toilet (or indeed the television). He certainly has a point: imagine society without toilets, or without YouTube; which would be more tolerable (or more survivable)? Continue reading

The innovation tree, overshadowed in the innovation forest

Cross-posted at Less Wrong.

Many have pronounced the era of innovation dead, peace be to its soul. From Tyler Cowen’s decree that we’ve picked all the low-hanging fruit of innovation, through Robert Gordon’s idea that further innovation growth is threatened by “six headwinds”, to Garry Kasparov’s and Peter Thiel’s theory that risk aversion has stifled innovation, there is no lack of predictions about the end of discovery.

I don’t propose to address the issue with something as practical and useful as actual data. Instead, staying true to my philosophical environment, I propose a thought experiment that hopefully may shed some light. The core idea is that we might be underestimating the impact of innovation because we have so much of it.

Imagine that technological innovation had for some reason stopped around 1945 – with one exception: the CD and the CD player/burner. Fast forward a few decades, and visualise society. We can imagine a society completely dominated by the CD. We’d have all the usual uses for the CD – music, audio recordings and the like – of course, but also much more. Continue reading

Beyond 23andMe’s Shutdown: The Role of the FDA in the Future of Direct-to-Consumer Genetic Testing

Kyle Edwards, Uehiro Centre for Practical Ethics and The Ethox Centre, University of Oxford

Caroline Huang, The Ethox Centre, University of Oxford

An article based on this blog post has now been published in the May – June 2014 Hastings Center Report: http://onlinelibrary.wiley.com/doi/10.1002/hast.310/full. Please check out our more developed thoughts on this topic there!

Secret snakes biting their own tails: secrecy and surveillance

To most people interested in surveillance, the latest revelations that the US government has been carrying out widespread monitoring of its citizens (and the rest of the world), possibly through back-doors into major company services, are merely a chance to smugly say “I told you so”. The technology and legal trends have been clear for a long time. That intelligence agencies share information (allowing them to get around pesky limits on looking at their own citizens) is another yawn.

That does not mean the revelations are unimportant: we are at an important choice-point regarding how to handle mass surveillance. But the battle is not security versus freedom; it is secrecy versus openness.

Continue reading

Strict-ish liability? An experiment in the law as algorithm

Some researchers in the US recently conducted an ‘experiment in the law as algorithm’. (One of the researchers involved with the project was interviewed by Ars Technica, here.) At first glance, this seems like quite a simple undertaking for someone with knowledge of a particular law and mathematical proficiency: laws are clearly defined rules, which can be broken in clearly defined ways. This is most true of strict liability offences, which require no proof of a mental element of the offence (the mens rea). An individual can commit a strict liability offence even if she had no knowledge that her act was criminal and no intention to commit the crime. All that is required under strict liability statutes is that the act itself (the actus reus) is voluntary. Essentially: if you did it, you’re liable – it doesn’t matter why or how. So, for strict liability offences such as speeding, it would seem straightforward enough to create an algorithm that compares actual driving speed with the legal speed limit and adjudicates liability accordingly.

This possibility of law as algorithm is what the US researchers aimed to test with their experiment. They imagined the future possibility of automated law enforcement, especially for simple laws like those governing driving. To conduct the experiment, they assigned a group of 52 programmers the task of automating the enforcement of driving speed limits. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over an hour-long commute. The programmers, working independently, each wrote a program that computed the number of speed limit violations and issued mock traffic tickets.
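Even under assumed data formats (a stream of timestamped speed samples and a single limit; the study’s actual formats are not given here), a minimal sketch of the task exposes the judgment calls on which 52 independent programs can diverge: how much tolerance to allow, and when one violation ends and the next begins.

```python
# A minimal sketch of the assigned task, under assumed data formats.
# The tolerance and the definition of a "separate" violation are design
# choices, not part of the statute: different implementations will diverge.
def count_violations(samples, limit_mph, tolerance_mph=0.0):
    """Count distinct episodes in which speed exceeds limit + tolerance.

    samples: iterable of (timestamp_seconds, speed_mph) pairs.
    """
    violations = 0
    speeding = False
    for _, speed in samples:
        over = speed > limit_mph + tolerance_mph
        if over and not speeding:
            violations += 1  # a new violation episode begins
        speeding = over
    return violations

trace = [(0, 28.0), (1, 31.5), (2, 33.0), (3, 29.0), (4, 30.5)]
print(count_violations(trace, limit_mph=30))                   # 2 tickets, strict
print(count_violations(trace, limit_mph=30, tolerance_mph=2))  # 1 ticket with a buffer
```

If even a strict liability offence leaves this much to the implementer’s discretion, the hard part of law as algorithm is clearly not the arithmetic. Continue reading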
