by David Edmonds – twitter @DavidEdmonds100
Since my last blog post, there has been a decision within the BBC “to start to move” to calling ‘Burma’ ‘Myanmar’.
Burma has always been an interest of mine because it was the big story in the first few weeks when I began in journalism. Aung San Suu Kyi’s husband (now deceased) lived in Oxford and when the demonstrations broke out in Burma in September 1988 I would deliver news wires to him: in those pre-internet days he had virtually no other means of finding out what was going on. Continue reading
The Prime Minister has declared that Internet service providers should by default block access to pornography, and that some “horrific” internet search terms should be “blacklisted” on the major search engines, bringing up no search results. The main motivation of the speech appears to be that access to pornography is “corroding childhood” by letting children inadvertently see images or visit websites their parents do not want them to see. There is no shortage of critics: anti-censorship groups, anti-surveillance groups, technology groups and people concerned with actual harm-reduction. There are two central problems: defining pornography, and finding its harms. Continue reading
How do you want to die? Quickly, painlessly, peacefully lying in your own bed?
Most people say that. But then, people seem to cling to their lives, even if that could mean a less peaceful end. When asked whether they would want physicians to perform certain interventions, such as CPR (cardiopulmonary resuscitation) or mechanical ventilation (a ‘breathing machine’), to prolong their lives, people say ‘yes’.
Interestingly, a study discussed in a Radiolab podcast from earlier this year reveals that, unlike lay people, physicians do not want the life-saving interventions they perform on their patients performed on themselves. Continue reading
Direct-to-consumer genetic testing is growing rapidly; 23andMe has hired Andy Page to help the company scale – especially since it aims at having one million members by the end of the year (since its launch, 23andMe has tested over 180,000 people around the world). While most ethics discussion about personal genomics has focused on the impact on individuals (is the risk of misunderstanding or bad news so great that people need to be forced to go via medical gatekeepers or genetics counsellors? is there a risk of ‘genomization’ of everyday health? and so on), the sheer number of tested people and their ability to compare results can give rise to interesting new ethical problems, as a friend found out.
Follow Rebecca on Twitter
Scientific discoveries about how our behaviour is causally influenced often prompt the question of whether we have free will (for a general discussion, see here). This month, for example, the psychologist and criminologist Adrian Raine has been promoting his new book, The Anatomy of Violence, in which he argues that there are neuroscientific explanations of the behaviour of violent criminals. He argues that these explanations might be taken into account during sentencing, since they show that such criminals cannot control their violent behaviour to the same extent that (relatively) non-violent people can, and therefore that these criminals have reduced moral responsibility for their crimes. Our criminal justice system, along with our conceptions of praise and blame, and moral responsibility more generally, all presuppose that we have free will. If science can reveal it to be an illusion, some of the most fundamental features of our society are undermined.
The questions of exactly what free will is, and whether and how it can accommodate scientific discoveries about the causes of our behaviour, are primarily theoretical philosophical questions. Questions of theoretical philosophy—for example, those relating to metaphysics, epistemology, and philosophy of mind and language—are rarely viewed as highly relevant to people’s day-to-day lives (unlike questions of practical philosophy, such as those relating to ethics and morality). However, it turns out that the beliefs that people hold about free will are relevant. In the last five years, empirical evidence has linked reduced belief in free will with an increased willingness to cheat,1 increased aggression and reduced helpfulness,2 and reduced job performance.3 Even the way that the brain prepares for action differs depending on whether or not one believes in free will.4 If the results of these studies apply at a societal level, we should be very concerned about promoting the view that we do not have free will. But what can we do about it? Continue reading
Yesterday, Charles Foster discussed the recent study showing that Facebook ‘Likes’ can be plugged into an algorithm to predict things about people – things about their demographics, their habits and their personalities – that they didn’t explicitly disclose. Charles argued that, even though the individual ‘Likes’ were voluntarily published, to use an algorithm to generate further predictions would be unethical on the grounds that individuals have not consented to it and, consequently, that to go ahead and do it anyway is a violation of their privacy.
I wish to make three points contesting his strong conclusion, instead offering a more qualified position: simply running the algorithm on publicly available ‘Likes’ data is not unethical, even if no consent has been given. Doing particular things based on the output of the algorithm, however, might be. Continue reading
By Charles Foster
When you click ‘Like’ on Facebook, you’re giving away a lot more than you might think. Your ‘Likes’ can be assembled by an algorithm into a terrifyingly accurate portrait.
Here are the chances of an accurate prediction:

Single v in a relationship: 67%
Parents still together when you were 21: 60%
Cigarette smoking: 73%
Alcohol drinking: 70%
Drug-using: 65%
Caucasian v African American: 95%
Christianity v Islam: 82%
Democrat v Republican: 85%
Male homosexuality: 88%
Female homosexuality: 75%
Gender: 93%

Continue reading
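The underlying idea is straightforward: treat each ‘Like’ as a binary feature and fit a classifier that maps Like patterns to a trait. The following is a hypothetical minimal sketch of that idea using plain logistic regression trained by gradient descent; the actual study's pipeline, features and data were far larger and more sophisticated, and the page names and trait here are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(likes, trait, lr=0.5, epochs=2000):
    """Fit one weight per page, plus a bias, by stochastic gradient descent."""
    w = [0.0] * len(likes[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(likes, trait):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss for this user
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_proba(w, b, x):
    """Estimated probability that a user with Like pattern x has the trait."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Rows: users; columns: whether each user Liked (hypothetical) pages A, B, C.
likes = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1]]
# A self-disclosed binary trait (e.g. smoker vs non-smoker) as training labels.
trait = [1, 1, 0, 0, 1, 0]

w, b = train(likes, trait)
# Predict for a new user who never disclosed the trait but Liked pages A and C:
prob = predict_proba(w, b, [1, 0, 1])
```

The ethically salient point the sketch makes concrete: the trait is never read off the profile directly; it is inferred from correlations in other people's voluntarily disclosed data.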
Sabrina Stewart is a student at Dartmouth College who is visiting the Uehiro Centre this term.
Newspaper health sections yield many headlines and subsequent articles that do not accurately reflect the research publication being reported. One article, “Boozing after a heart attack could help you live longer, research reveals”, discusses the finding that drinking after a heart attack is beneficial. The headline is at best misleading, and at worst deceptive: the article fails to report the specific frequency of consumption required to derive the stated benefits, the fact that the benefits would depend on the severity of the myocardial infarction, and that any benefit would be lost through intermittent binge drinking. The publication was significant because it was a large-scale study that complemented previous findings, and could therefore be expected to have an effect on people’s health decisions.
This article was taken from the Metro, a free newspaper distributed in London and the South-East of England and targeted at commuters. The self-reported estimated readership is just under two million people. If this figure is accurate, the Metro has the third largest newspaper audience in the United Kingdom, after the Sun and the Daily Mail. The capacity to influence such a significant audience comes with responsibility.
There are various Codes of Practice governing the actions of researchers and doctors to ensure unbiased and truthful information is provided to patients and clinical trial participants in order to obtain informed consent. Why is health reporting not subject to the same strict regulation when it carries similar implications for shaping people’s choices regarding their well-being?
Andrew Hessel, Marc Goodman and Steven Kotler sketch in an article in The Atlantic a not-too-far future in which the combination of cheap bioengineering, synthetic biology and crowdsourced problem solving allows not just personalised medicine, but also personalised biowarfare. They dramatize it by showing how this could be used to attack the US president, but that is mostly for effect: this kind of technology could in principle be targeted at anyone or any group, as long as there existed someone with a reason to use it and the resources to pay for it. The Secret Service appears to be aware of the problem and does its best to wipe away traces of the President, but it is hard to imagine this being perfect, working for old DNA left behind years ago, or being applied to all potential targets. In fact, it looks like the US government is keen on collecting not just biometric data, but DNA from foreign potentates. They might be friends right now, but who knows in ten years…