Autonomy

Would Legal Assisted Suicide be the Final Triumph of Market Capitalism?

Tomorrow in the House of Lords, Lord Falconer’s bill on assisted dying will be debated. The bill would allow those who are terminally ill and likely to die within six months to request life-ending drugs from their doctor, to use as and when they see fit.

As might have been expected, there has been a great deal of discussion of the bill, but most of the arguments presented so far are not new, and the same will probably be true tomorrow. But there is one I haven’t seen before, put forward recently by Giles Fraser: that assisted suicide is the ‘final triumph of market capitalism’. Continue reading

Female genital mutilation (FGM) and male circumcision: time to confront the double standard

By Brian D. Earp


This month, the Guardian launched a campaign in conjunction with Change.org (the petition is here) to end “female genital mutilation” (FGM) in the UK—see Dominic Wilkinson’s recent analysis on this blog. I support this campaign and I believe that FGM is impermissible. Indeed, I think that all children, whether female, intersex, or male, should be protected from having parts of their genitals removed unless there is a pressing medical indication; I think this is so regardless of the cultural or religious affiliations of the child’s parents; and I have given some arguments for this view here, here, here, here, and here. But note that some commentators are loath to accept so broadly applied an ethical principle: to discuss FGM in the same breath as male circumcision, they think, is to “trivialize” the former and to cause all manner of moral confusion.

Consider these recent tweets by Michael Shermer, the prominent American “skeptic” and promoter of science and rationalism:

[Tweets embedded in the original post]

This sort of view appears to be common. One frequent claim is that FGM is analogous to “castration” or a “total penectomy,” such that any sort of comparison between it and male circumcision is entirely inappropriate (see this paper for further discussion). Some other common arguments are these:

1. Female genital mutilation and male circumcision are totally different: FGM is necessarily barbaric and crippling (“always torture,” according to Tanya Gold), whereas male circumcision is no big deal.

2. Male circumcision is a “minor” intervention that might even confer health benefits, whereas FGM is a drastic intervention with no health benefits, one that only causes harm.

3. The “prime motive” for FGM is to control women’s sexuality (cf. Shermer in the tweets above); it is inherently sexist and discriminatory and is an expression of male power and domination. Male circumcision, by contrast, has nothing to do with controlling male sexuality – it’s “just a snip” and in any case “men don’t complain.”

4. FGM eliminates the enjoyment of sex, whereas male circumcision has no meaningful effects on sexual sensation or satisfaction.

5. It is perfectly reasonable to oppose all forms of female genital cutting while at the same time accepting or even endorsing infant male circumcision.

Yet almost every one of these claims is untrue, or is severely misleading at best. Such views derive from a superficial understanding of both FGM and male circumcision; and they are inconsistent with the latest critical scholarship concerning these and related practices. Their constant repetition in popular discourse, therefore—including by those like Shermer with a large and loyal audience base—is unhelpful to advancing moral debate.

Continue reading

Announcement: “Brave New Love” in AJOB:Neuroscience – peer commentaries due October 7

Dear Practical Ethics readers,

The paper, “Brave new love: the threat of high-tech ‘conversion’ therapy and the bio-oppression of sexual minorities” by Brian D. Earp, Anders Sandberg, and Julian Savulescu, has been accepted for publication in the American Journal of Bioethics: Neuroscience. Proposals for open peer commentaries are due this Monday, October 7th.

The article may be accessed here, or at the following link: http://editorial.bioethics.net. Be sure to select AJOB:Neuroscience from the drop-down menu of journals. Here is an abstract of the argument:

============================

Abstract: Our understanding of the neurochemical bases of human love and attachment, as well as of the genetic, epigenetic, hormonal, and experiential factors that conspire to shape an individual’s sexual orientation, is increasing exponentially. This research raises the vexing possibility that we may one day be equipped to modify such variables directly, allowing for the creation of “high-tech” conversion therapies or other suspect interventions. In this paper, we discuss the ethics surrounding such a possibility, and call for the development of legal and procedural safeguards for protecting vulnerable children from the application of such technology. We also consider the more difficult case of voluntary, adult “conversion” and argue that in rare cases, such attempts might be permissible under strict conditions.

============================

Open Peer Commentary articles are typically between 500 and 1,500 words and contain no more than 10 references. A guide to writing an Open Peer Commentary is available under “Instructions and Forms” in the Resources section at http://editorial.bioethics.net. AJOB:Neuroscience asks that by Monday, October 7, 2013, you submit a short summary of your proposed Open Peer Commentary (no more than 1–2 paragraphs). Please submit your proposal online via the AJOB:Neuroscience Editorial site, following the instructions provided there. They ask that you do not prepare a full commentary yet. Once they have evaluated your proposal, they will contact you via email to let you know whether or not they were able to include you on the final list of those to be asked to submit an Open Peer Commentary.

You will then have until Friday, October 25, 2013 to submit your full Open Peer Commentary.

 

Teenage annihilation on an Aegean boat

An Old Bore writes:

Last week I got the boat from Athens to Hydra. The crossing takes about 2½ hours and runs along the coast of the Argolid.

The sun shone, the dolphins leapt, the retsina flowed, the bouzoukis trembled, and we watched the sun rise over the Peloponnese. It was wonderful. At least it was for me.

Basking on the upper deck, playing Russian roulette with malignant melanoma, were four girls, all aged around 15. They saw nothing. They stretched out on bean bags, their eyes shut throughout the voyage. They heard nothing other than what was being pumped into their ears from their iPods. They would no doubt describe themselves as friends, but they didn’t utter a word to each other. They shared nothing at all apart from their fashion sense and, no doubt, some of the music. The dolphins leapt unremarked upon. We might, so far as the girls were concerned, have been cruising past Manchester rather than Mycenae. Continue reading

Let’s Talk About Death: Millennials and Advance Directives

Sarah Riad, College of Nursing and Health Sciences, University of Massachusetts Boston

Melissa Hickey, School of Nursing, Avila University 

Kyle Edwards, Uehiro Centre for Practical Ethics, University of Oxford

As advances in medical technology have greatly increased our ability to extend life, the conversation on end-of-life care ethics has become exceedingly complex. With greater options both to end life early and to extend it artificially, advance directives have arisen in an effort to preserve patient autonomy in situations in which the patient becomes incapable of making a medical decision. However, most people—especially young adults—do not think to plan for such moments of incapacity and the possibility of an untimely death. With a youthful sense of invincibility comes a lack of foresight that prevents us from confronting these issues. The reality is that unexpected events happen. When they do, it is often very difficult to imagine what a person would have wanted and to make medical decisions accordingly on his or her behalf. In this post, we suggest both a transition from action-based to value-based advance directives and an interactive website that would make the contemplation of these issues and the construction of a value-based advance directive appealing to and accessible for Millennials, the 20-somethings of today. Continue reading

Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.

Artificial Intelligence and Human Decision-making.

Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make and how. However, the range of decisions that can be computerised is increasing, and since many operational decisions have moral consequences, they can be considered to have a moral component.

One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.

While there have been strong arguments raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by the human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this in a provocative way:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)

For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).

Soon we will have systems that will enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s own, but are proven to be more accurate on average. To return to combat, we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, but achieve a military objective more efficiently and with less loss of life. Human judgement added to the loop may prove to be an impairment.

Moral Outsourcing

At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He asked the same pair of questions substituting “machine” for “a person”.

Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people who have a roughly similar world view to mine, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.

We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.

So Why Shouldn’t I Trust The Machine?

So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”

It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: making a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as well as or better than a human can 99.99% of the time – unclouded by bias, and with vastly more information available to it.

So why not trust the machine?

Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by factors as trivial as fatigue or how recently we have eaten. For all that, our inconsistencies are relatively predictable, and have bounds. Every bias we know about can be taken into account, and corrected for to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than me 1 time out of 10, there is a limit to how bad their wrong decision will be.

This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, but can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war, pushing the price of a book about flies to $23 million. Or the 2010 stock market flash crash. Things become even harder to keep track of when evolutionary algorithms and other “learning” methods are used. Using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS game with unorthodox, non-intuitive fleet designs. This fun YouTube video shows a Super Mario-playing greedy algorithm figuring out how to make use of several hitherto-unknown game glitches to win (see 10:47).
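
To make the runaway-feedback point concrete, here is a minimal sketch of how two individually sensible re-pricing rules can drive a price to absurd heights. The starting prices and multipliers are illustrative assumptions, chosen to be roughly in the spirit of the reported incident rather than taken from it:

```python
# Minimal sketch: two naive re-pricing bots reacting to each other.
# The multipliers below are illustrative, not the exact values from the
# reported Amazon incident, but the runaway dynamic is the same.

def run_pricing_war(price_a=17.99, price_b=18.99, days=30):
    """Simulate two sellers who each reprice once a day based on the other."""
    for day in range(1, days + 1):
        price_a = round(0.998 * price_b, 2)   # seller A slightly undercuts B
        price_b = round(1.27 * price_a, 2)    # seller B prices well above A
        print(f"day {day:2d}: A = ${price_a:,.2f}  B = ${price_b:,.2f}")
    return price_a, price_b

if __name__ == "__main__":
    run_pricing_war()
```

With these made-up multipliers the higher price grows by roughly 27% per iteration, so it passes the million mark within a couple of months of simulated days; neither rule is unreasonable in isolation, but the feedback loop between them is.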

Why should this concern us? As the decision-making processes become more complicated, and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – provided the results turn out well the vast majority of the time. The upshot is that we have to just “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.

Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
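
A toy expected-harm calculation makes the point; the numbers below are invented purely to illustrate the argument, not drawn from any real system:

```python
# Illustrative arithmetic only: invented numbers, not data.
# Compare a frequently-wrong but bounded decision-maker with a
# rarely-wrong but potentially unbounded one.

human_error_rate = 0.10        # wrong 1 time in 10
human_worst_case_harm = 5.0    # bounded badness of a human mistake (arbitrary units)

machine_error_rate = 0.0001    # wrong 1 time in 10,000
machine_worst_case_harm = 10_000.0  # a single bizarre failure can be huge

expected_human_harm = human_error_rate * human_worst_case_harm        # 0.5
expected_machine_harm = machine_error_rate * machine_worst_case_harm  # 1.0

print(f"expected harm per decision, human:   {expected_human_harm}")
print(f"expected harm per decision, machine: {expected_machine_harm}")
# With these made-up magnitudes, the 0.01% failure mode outweighs the
# 10% failure mode: the tail, not the error rate, drives the comparison.
```

The point is not the specific figures but the shape of the comparison: once the worst case is effectively unbounded, a very low error rate no longer settles the question.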

1. Transcript available here.
For further discussion on Brooks’s talk, see Foreign Policy Magazine articles here and here.

Casinos should say: ‘Enough. Go home.’

Over about 14 months, Harry Kakavas lost $20.5 million in a casino in Melbourne. It could have been worse. He put about $1.5 billion on the table. He sued the casino. It knew or should have known, he said, that he was a pathological gambler. It shouldn’t have continued to take his money. It should have protected him from himself. Nonsense, said the High Court of Australia.

Here’s why:

Even if, contrary to the findings of the primary judge, the appellant did suffer from a psychological impairment, the issue here is whether, in all the circumstances of the relationship between the appellant and Crown, it was sufficiently evident to Crown that the appellant was so beset by that difficulty that he was unable to make worthwhile decisions in his own interests while gambling at Crown’s casino. On the findings of fact made by the primary judge as to the course of dealings between the parties, the appellant did not show that his gambling losses were the product of the exploitation of a disability, special to the appellant, which was evident to Crown.

Equitable intervention to deprive a party of the benefit of its bargain on the basis that it was procured by unfair exploitation of the weakness of the other party requires proof of a predatory state of mind. Heedlessness of, or indifference to, the best interests of the other party is not sufficient for this purpose. The principle is not engaged by mere inadvertence, or even indifference, to the circumstances of the other party to an arm’s length commercial transaction. Inadvertence, or indifference, falls short of the victimisation or exploitation with which the principle is concerned. (paras 160–161 of the judgment)

So it all turned on findings of fact (it wasn’t ‘sufficiently evident’ that his losses were the result of a disability, and if they were, they weren’t the product of a disability ‘special to the appellant.’)

That last criterion is interesting. The court seems to be implying that everyone who puts themselves in the position of losing large amounts of money in a casino is necessarily not quite right in the head. To establish liability you need a degree of vulnerability over and above that possessed by the ordinary punter. By accepting the trial judge’s finding that Kakavas did not suffer from a ‘psychological impairment’, the court was presumably saying: ‘Right: so Kakavas is weak and easily exploited: but that’s true of everyone who walks through the door, buys some chips and sits down at the table. That sort of weakness is within the general bell curve of human flabbiness. But Kakavas wasn’t particularly, dramatically, visibly weak.’ Continue reading

Cry havoc and let slip the robots of war?

Stop killer robots now, UN asks: the UN special rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, has delivered a report about Lethal Autonomous Robots arguing that there should be a moratorium on the development of autonomous killing machines, at least until we can figure out the ethical and legal issues. He notes that LARs raise far-reaching concerns about the protection of life during war and peace, including whether they can comply with humanitarian and human rights law, how to devise legal accountability, and “because robots should not have the power of life and death over human beings.”

Many of these issues have been discussed on this blog and elsewhere, but the report is a nice comprehensive review of a number of issues brought up by the new technology. And while the machines do not yet have fully autonomous capabilities, the distance to them is chillingly short: dismissing the issue as science fiction is myopic, especially given the slowness of actually reaching legal agreements. However, does it make sense to say that robots should not have the power of life and death over human beings?

Continue reading

Cultural bias and the evaluation of medical evidence: An update on the AAP

By Brian D. Earp

Since my article on the American Academy of Pediatrics’ recent change in policy regarding infant male circumcision was posted back in August of 2012, some interesting developments have come about. Two major critiques of the AAP documents were published in leading international journals, one in the Journal of Medical Ethics, and a second in the AAP’s very own Pediatrics. In the second of these, 38 distinguished pediatricians, pediatric surgeons, urologists, medical ethicists, and heads of hospital boards and children’s health societies throughout Europe and Canada argued that there is “Cultural Bias in the AAP’s 2012 Technical Report and Policy Statement on Male Circumcision.”

The AAP took the time to respond to this possibility in a formal reply, also published in Pediatrics earlier this year. Rather than thoughtfully addressing the specific charge of cultural bias, however, the AAP elected to boomerang the criticism, implying that their critics were themselves biased, only against circumcision. To address this interesting allegation, I have updated my original blog post. Interested readers can click here to see my analysis.

Finally, please note that articles from the Journal of Medical Ethics special issue on circumcision are (at long last) beginning to appear online. The print issue will follow shortly. Also be sure to see this recent critique of the AAP in a thoughtful book by JME contributor and medical historian Dr. Robert Darby, entitled: “The Sorcerer’s Apprentice: Why Can’t the US Stop Circumcising Boys?”

– BDE 
