Press Release: British Medical Journal Head to Head: Should athletes be allowed to use performance enhancing drugs?
Stories about illegal doping in sport are a regular occurrence. On bmj.com today, experts debate whether athletes should be allowed to use performance enhancing drugs.
Professor of ethics Julian Savulescu, from the University of Oxford, argues that rather than banning performance enhancing drugs we should regulate their use.
He points out that, since Ben Johnson in 1988, only 10 men have ever run under 9.8 seconds – and only two (including Usain Bolt) are currently untainted by doping. “The zero tolerance ban on doping has failed,” he says. “It is time for a different approach.”
He argues that regulation could improve safety and says “we should assess each substance on an individual basis” and “set enforceable, fair, and safe physiological limits.”
He acknowledges that, if a substance came to dominate or corrupt performance, there would be good reason to ban it. But, he says, if a substance allows safer, faster recovery from training or injury, “then it does not corrupt sport or remove essential human contribution.”
And he dismisses the argument that allowing elite athletes to take drugs under medical supervision will encourage children and amateurs to imitate their heroes, pointing out that amateur doping “is already happening in an unsupervised manner.”
“Over time the rules of the sport have evolved,” he says. “They must evolve as humans and their technology evolve and the rules begin to create more problems than they solve. It is time to rethink the absolute ban and instead to pick limits that are safe and enforceable.”
But hospital doctors Leon Creaney and Anna Vondy believe this would lead to escalating use, and they call for tougher enforcement.
“The argument against doping in sport is moral, not medical,” they write.
“Athletes who wanted to live a healthy existence would be pushed out altogether. Soon, the only competition that would matter would be the one to develop the most powerful drugs, and athletic opponents would enter into an exchange of ever escalating doses to stay ahead of each other.”
They warn that, in some nations, “we might see a return of the state sponsored doping programmes of the 70s and 80s” and say without the anti-doping programme “the use of performance enhancing drugs would expand exponentially and filter deeper into our society.”
Legitimising performance enhancing drugs in elite and professional sport would also change the message sport sends to society, they add. “Would a bioengineered athlete be able to inspire in the same way?”
They dismiss the argument that because we will never be able to catch every cheat, we should give up trying, saying the answer is “to make the anti-doping system more effective.”
If testing were ubiquitous, they say, “it would be virtually impossible to evade detection, and the equilibrium would be reset in favour of not cheating.”
And if a first offence led to a lifetime ban, “the risks involved would become much greater, such that fewer people would take the gamble of getting caught in the first place,” they conclude.
Professor Julian Savulescu, Uehiro Chair in Practical Ethics & Director, Oxford Centre for Neuroethics, University of Oxford, UK
Tel: +44 (0)1865 286 888
Leon Creaney, Trauma and Orthopaedics, University Hospital Birmingham, UK
There has been a recent storm over the DPP’s decision not to prosecute two doctors in relation to their referral of two women for abortion. The cases were widely represented as cases of abortion on grounds of gender. They came to light in the course of an undercover investigation by the Daily Telegraph into practice in English abortion clinics.
The DPP has published detailed reasons for his decision. They are well worth reading.
An abortion is only lawful if two medical practitioners are of the opinion, held in good faith, that one of the lawful grounds for abortion is made out. One of the grounds (overwhelmingly the commonest, and the one said to be relevant in both of the cases considered by the DPP), is that ‘the pregnancy has not exceeded its 24th week and that the continuance of the pregnancy would involve risk, greater than if the pregnancy were terminated, of injury to the physical or mental health of the pregnant woman or any existing children of her family.’: Abortion Act 1967, s. 1(1)(a).
The Act does not say anywhere that the gender of the fetus is a relevant criterion. But it plainly could be. Take two examples:
So the US government is likely being shut down, which will suspend the work of many government agencies, including the Centers for Disease Control and Prevention (CDC). But, fair citizens, I reassure you: in its wisdom, the US Congress has decided that the military’s salaries will be excluded from the shutdown.
With all due respect to military personnel, this is ludicrous. The US military is by far the world’s largest; there is little likelihood of any major war (the last great power war ended in 1953), and no sign of minor wars starting either. Suspended salaries may be bad for morale and long-term retention, but they aren’t going to compromise US military power.
Contrast this with the CDC’s work. The world’s deadliest war was the Second World War, with 60 million dead over a period of years (no other war comes close). The Spanish flu killed 50–100 million on its own, in a single year. Smallpox couldn’t match that yearly rate, but did polish off 300–500 million of us during the 20th century. Bog-standard flu kills between a quarter and half a million people every year, and, if we wanted to go back further, the Black Death wiped out at least a third of the population of Europe. And let’s not forget HIV, with its 30 million deaths to date.
No need to belabour the point… Actually, there is: infectious diseases are the greatest killers in human history, bar none. If any point needs belabouring, that’s it. And a shutdown would have an immediate negative impact on public health: for instance, the CDC would halt its influenza monitoring program. Now, of course, this year’s flu may not turn out to be pandemic – we can but hope, because that’s all we can do now. And if another SARS starts somewhere in the United States, it will be a real disaster.
We’re closing our eyes and hoping that the greatest killer in human history will be considerate enough to not strike while we sort out our politics.
Last week, the Daily Mail reported on Dr Anna Smajdor’s paper in which she argues that compassion ‘is not a necessary component’ of healthcare. This claim contrasts interestingly with Jeremy Hunt’s recent proposal that all student nurses should have to prove that they are capable of caring by spending a year on wards carrying out basic tasks. This proposal, along with the suggestion that pay be linked to levels of kindness, would, according to Hunt, go some way towards improving the standard of NHS care. The motivating idea behind Hunt’s proposals is that lack of compassion amongst NHS staff is partly responsible for poor care and, in some cases, for cultivating a ‘culture of cruelty’.
So is compassion a necessary component of healthcare? Is an adequate standard of care necessarily unattainable when compassion amongst staff is absent? In considering these questions I do not intend to embark on a detailed critique of Dr Smajdor’s paper. Instead, I will begin from her main ideas and use them to motivate a general discussion of the role of compassion in healthcare. According to the report, Dr Smajdor argues for two main claims: 1) that compassion is not a necessary component of healthcare – that acceptable standards can be attained without it – and 2) that compassion can actually be dangerous for healthcare workers, possibly resulting in impaired standards of care.
By Luke Davies
Luke can now be followed on Twitter.
Anders Breivik, the 34-year-old Norwegian man responsible for the deaths of 77 people and the wounding of 232 more in an attack in 2011, has been enrolled in political science modules at the University of Oslo. The attack Breivik carried out, which happened on 22 July 2011, was motivated by a fear of the “Islamisation” of Europe and was meant to defend Norway from immigration and multiculturalism. Despite an initial assessment to the contrary, Breivik was held to be sane at the time of the attack, and therefore fit to stand trial. He was sentenced to 21 years in jail.
While Breivik didn’t meet the formal requirements for entry into a degree-granting program, the university was clear from the start that it would assess his application only on its merits.
Chad Dixon, an Indiana man, was recently sentenced to 8 months in jail for teaching people how to beat polygraph tests. The sticking point seems to be that polygraphs are used by the US federal authorities for screening applicants and detecting crimes, so if people could get past them they could do all sorts of nefarious things. But the reliability of polygraph tests is highly dubious, and false positives may have stalled many careers. So of course the UK is considering making polygraph testing compulsory for sex offenders – something the blogger Neurobonkers described as a return to trial by ordeal. Is it unethical to teach people to circumvent these tests?
As the US and other nations gear up for war in Syria, the alleged use of chemical weapons by the Assad regime against civilians has received great, perhaps inordinate attention. A little over a year ago, US President Barack Obama called the use of chemical weapons a “red line”, though he was vague about what would happen if that line were crossed. And while there were previous allegations of chemical weapons attacks, the most recent accusations, concerning an attack in a Damascus suburb that killed hundreds, seem to have been taken more seriously and will likely be used as a casus belli for air strikes against Assad’s forces in Syria. Yet some have argued that this focus on chemical weapons use is rather inconsistent. Dominic Tierney at the Atlantic sarcastically comments, “Blowing your people up with high explosives is allowable, as is shooting them, or torturing them. But woe betide the Syrian regime if it even thinks about using chemical weapons!” And Paul Whitefield at the LA Times inquires, “Why is it worse for children to be killed by a chemical weapon than blown apart by an artillery shell?” These writers have a point. But, while it may not be entirely consistent, I will argue that the greater concern over the use of chemical weapons compared with conventional weapons is justified.
by Luke Davies
The upcoming Winter Olympics in Sochi has been in the news a lot recently. The controversy, as you will already know, is a result of the introduction of another law discriminating against the LGBT community in Russia: Article 6.21 of the Code of the Russian Federation, the so-called “gay propaganda” law. This law will allow the government to fine anyone who spreads propaganda about “non-traditional sexual relations” to minors. (The meaning of both “propaganda” and “non-traditional sexual relations” is left quite ambiguous.) Given the insistence of Sports Minister Vitaly Mutko that competing athletes and visiting spectators must obey the laws of the country, there has been some disagreement about what to do. Different levels of concern are being given priority in the media, some more pertinent from an ethical perspective than others.
Here’s a spoiler: The trivial concerns have to do with the politics of the Olympic Games themselves; the real concern is with the harm to people’s lives in Russia.
Here is the sequence of events. 1. Richard Dawkins tweets that all the world’s Muslims have fewer Nobel Prizes than Trinity College, Cambridge. 2. Cue a Twitter onslaught accusing Professor Dawkins of racism. 3. Richard Dawkins writes that a fact can’t be racist.
Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.
Artificial Intelligence and Human Decision-making.
Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make, and how. However, the range of decisions that can be computerised is increasing, and since many operational decisions have moral consequences, they can be considered to have a moral component.
One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.
While there have been strong arguments raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by the human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this in a provocative way:
“Our record – we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are [at] doing [it]. And the real ethical question would be: can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)
For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).
Soon we will have systems that enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s recommendation – but are proven to be more accurate on average. To return to combat: we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, but achieve a military objective more efficiently with less loss of life. Human judgement added to the loop may prove to be an impairment.
At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He asked the same pair of questions substituting “machine” for “a person”.
Regarding the first pair of questions: we all do this kind of moral outsourcing to a certain extent, allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people who have a roughly similar world view to mine, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.
We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.
So Why Shouldn’t I Trust The Machine?
So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”
It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: making a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as well as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.
So why not trust the machine?
Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by factors as trivial as fatigue or when we last ate. For all that, our inconsistencies are relatively predictable and have bounds. Every bias we know about can be taken into account and corrected for, to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than me one time out of ten, there’s a limit to how bad their wrong decision will be.
This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, but can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war, pushing the price of a book about flies to $23 million. Or the 2010 stock market flash crash. It gets even harder to keep track when evolutionary algorithms and other “learning” methods are used. Using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS game using unorthodox, non-intuitive fleet designs. And a fun YouTube video shows a greedy algorithm playing Super Mario figuring out how to make use of several hitherto-unknown game glitches to win (see 10:47).
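The book-pricing incident is worth dwelling on, because each rule on its own is perfectly sensible. The sketch below is not the sellers’ actual code – the multipliers are illustrative, close to those reported at the time: one seller slightly undercuts its rival, the other marks the rival’s price up (say, to cover sourcing costs). Because the two factors multiply to more than 1, the price grows without bound.

```python
# Minimal sketch of a two-seller pricing feedback loop.
# Multipliers are illustrative, not the sellers' real parameters.

def run_pricing_war(start_price, days):
    """Each day both sellers reprice against the other's last price."""
    price_a = price_b = start_price
    for _ in range(days):
        price_a = 0.9983 * price_b   # seller A: slightly undercut B
        price_b = 1.2706 * price_a   # seller B: mark up A's price
    return price_a, price_b

price_a, price_b = run_pricing_war(start_price=35.0, days=40)
print(f"After 40 days: A = ${price_a:,.2f}, B = ${price_b:,.2f}")
```

Each round multiplies the price by roughly 1.27 × 0.998 ≈ 1.27, so after a few weeks of daily repricing a $35 book costs hundreds of thousands of dollars – even though no single step in either rule looks obviously wrong.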
Why should this concern us? As the decision-making processes become more complicated and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – provided the results turn out well the vast majority of the time. The upshot is that we simply have to “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.
Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.