Female genital mutilation (FGM) and male circumcision: time to confront the double standard
This month, the Guardian launched a campaign in conjunction with Change.org (the petition is here) to end “female genital mutilation” (FGM) in the UK—see Dominic Wilkinson’s recent analysis on this blog. I support this campaign and I believe that FGM is impermissible. Indeed, I think that all children, whether female, intersex, or male, should be protected from having parts of their genitals removed unless there is a pressing medical indication; I think this is so regardless of the cultural or religious affiliations of the child’s parents; and I have given some arguments for this view here, here, here, here, and here. But note that some commentators are loath to accept so broadly applied an ethical principle: to discuss FGM in the same breath as male circumcision, they think, is to “trivialize” the former and to cause all manner of moral confusion.
Consider these recent tweets by Michael Shermer, the prominent American “skeptic” and promoter of science and rationalism:
This sort of view appears to be common. One frequent claim is that FGM is analogous to “castration” or a “total penectomy,” such that any sort of comparison between it and male circumcision is entirely inappropriate (see this paper for further discussion). Some other common arguments are these:
Female genital mutilation and male circumcision are totally different:

- FGM is necessarily barbaric and crippling (“always torture,” according to Tanya Gold), whereas male circumcision is no big deal.
- Male circumcision is a “minor” intervention that might even confer health benefits, whereas FGM is a drastic intervention with no health benefits, and only causes harm.
- The “prime motive” for FGM is to control women’s sexuality (cf. Shermer in the tweets above); it is inherently sexist and discriminatory and is an expression of male power and domination. Male circumcision, by contrast, has nothing to do with controlling male sexuality – it’s “just a snip” and in any case “men don’t complain.”
- FGM eliminates the enjoyment of sex, whereas male circumcision has no meaningful effects on sexual sensation or satisfaction.
- It is perfectly reasonable to oppose all forms of female genital cutting while at the same time accepting or even endorsing infant male circumcision.
Yet almost every one of these claims is untrue, or is severely misleading at best. Such views derive from a superficial understanding of both FGM and male circumcision; and they are inconsistent with the latest critical scholarship concerning these and related practices. Their constant repetition in popular discourse, therefore—including by those like Shermer with a large and loyal audience base—is unhelpful to advancing moral debate.
In a particularly eye-catching pull quote in the November issue of The Atlantic, journalist and scholar Robert Wright claims, “The world’s gravest conflicts are not over ethical principles or disputed values but over disputed facts.”
The essay, called “Why We Fight – And Can We Stop?” in the print version and “Why Can’t We All Just Get Along? The Uncertain Biological Basis of Morality” in the online version, reviews new research by psychologists Joshua Greene and Paul Bloom on the biological foundations of our moral impulses. Focusing mainly on Greene’s newest book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Wright details Greene’s proposed solution to the rampant group conflict we see both domestically and internationally. Suggesting that we are evolutionarily wired to cooperate or ‘get along’ with members of groups to which we belong, Greene identifies the key cause of fighting as different groups’ “incompatible visions of what a moral society should be.” And his answer is to strive for a ‘metamorality’ – a universally shared moral perspective (he suggests utilitarianism) that would create a global in-group, thus facilitating cooperation.
Announcement: “Brave New Love” – peer commentaries due October 7
Dear Practical Ethics readers,
The paper, “Brave new love: the threat of high-tech ‘conversion’ therapy and the bio-oppression of sexual minorities” by Brian D. Earp, Anders Sandberg, and Julian Savulescu has been accepted for publication in the American Journal of Bioethics: Neuroscience. Proposals for open peer commentaries are due this Monday October 7th.
The article may be accessed here, or at the following link: http://editorial.bioethics.net. Be sure to select AJOB:Neuroscience from the drop-down menu of journals. Here is an abstract of the argument:
Abstract: Our understanding of the neurochemical bases of human love and attachment, as well as of the genetic, epigenetic, hormonal, and experiential factors that conspire to shape an individual’s sexual orientation, is increasing exponentially. This research raises the vexing possibility that we may one day be equipped to modify such variables directly, allowing for the creation of “high-tech” conversion therapies or other suspect interventions. In this paper, we discuss the ethics surrounding such a possibility, and call for the development of legal and procedural safeguards for protecting vulnerable children from the application of such technology. We also consider the more difficult case of voluntary, adult “conversion” and argue that in rare cases, such attempts might be permissible under strict conditions.
Open Peer Commentary articles are typically between 500 and 1500 words and contain no more than 10 references. A guide to writing an Open Peer Commentary is available under the Resources section “Instructions and Forms” at http://editorial.bioethics.net. AJOB:Neuroscience asks that by Monday, October 7, 2013 you submit a short summary of your proposed Open Peer Commentary (no more than 1-2 paragraphs). Please submit your proposal online via the AJOB:Neuroscience Editorial site, following the instructions provided there. They ask that you not prepare a full commentary yet. Once they have evaluated your proposal, they will contact you via email to let you know whether or not they were able to include you on the final list of those to be asked to submit an Open Peer Commentary.
You will then have until Friday, October 25, 2013 to submit your full Open Peer Commentary.
Twitter, paywalls, and access to scholarship — are license agreements too restrictive?
I think I may have done something unethical today. But I’m not quite sure, dear reader, so I’m enlisting your energy to help me think things through. Here’s the short story:
Someone posted a link to an interesting-looking article by Caroline Williams at New Scientist – on the “myth” that we should live and eat like cavemen in order to match our lifestyle to that of our evolutionary ancestors, and thereby maximize health. Now, I assume that when you click on the link I just gave you (unless you’re a New Scientist subscriber), you get a short little blurb from the beginning of the article and then–of course–it dissolves into an ellipsis as soon as things start to get interesting:
Our bodies didn’t evolve for lying on a sofa watching TV and eating chips and ice cream. They evolved for running around hunting game and gathering fruit and vegetables. So, the myth goes, we’d all be a lot healthier if we lived and ate more like our ancestors. This “evolutionary discordance hypothesis” was first put forward in 1985 by medic S. Boyd Eaton and anthropologist Melvin Konner …
Holy crap! The “evolutionary discordance hypothesis” is a myth? I hope not, because I’ve been using some similar ideas in a lot of my arguments about neuroenhancement recently. So I thought I should really plunge forward and read the rest of the article. Unfortunately, I don’t have a subscription to New Scientist, and when I logged into my Oxford VPN-thingy, I discovered that Oxford doesn’t have access either. Weird. What was I to do?
Since I typically have at least one eye glued to my Twitter account, it occurred to me that I could send a quick tweet around to check if anyone had the PDF and would be willing to send it to me in an email. The majority of my “followers” are fellow academics, and I’ve seen this strategy play out before — usually when someone’s institutional log-in isn’t working, or when a key article is behind a pay-wall at one of those big “bundling” publishers that everyone seems to hold in such low regard. Another tack would be to dash off an email to a couple of colleagues of mine, and I could “CC” the five or six others who seem likeliest to be New Scientist subscribers. In any case, I went for the tweet.
Sure enough, an hour or so later, a chemist friend of mine sent me a message to “check my email” and there was the PDF of the “caveman” article, just waiting to be devoured. I read it. It turns out that the “evolutionary discordance hypothesis” is basically safe and sound, although it may need some tweaking and updates. Phew. On to other things.
But then something interesting happened! Whoever it is that manages the New Scientist Twitter account suddenly shows up in my Twitter feed with a couple of carefully-worded replies to my earlier PDF-seeking hail-mary:
Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.
Artificial Intelligence and Human Decision-making.
Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these, to date, are operational decisions – such as algorithms on the financial markets deciding which trades to make and how. However, the range of such decisions that can be computerised is increasing, and since many operational decisions have moral consequences, they could be considered to have a moral component.
One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.
While there have been strong arguments raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by the human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (a former Pentagon official, now a Georgetown Law professor) put this in a provocative way:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)
For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).
Soon we will have systems that will enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s own – but are proven to be more accurate on average. To return to combat: we will have robotic systems that can devise and implement strategies that are non-intuitive to humans and involve using lethal force, but that achieve a military objective more efficiently, with less loss of life. Human judgement added to the loop may prove to be an impairment.
At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He asked the same pair of questions substituting “machine” for “a person”.
Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people whose worldview is roughly similar to mine, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.
We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.
So Why Shouldn’t I Trust The Machine?
So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”
It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: making a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as good as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.
So why not trust the machine?
Human decision-making is riddled with biases and inconsistencies, and can be impacted heavily by something as small as fatigue, or when we last ate. For all that, our inconsistencies are relatively predictable, and have bounds. Every bias we know about can be taken into account and corrected for to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than me 1 time out of 10, there’s a limit to how bad their wrong decision can be.
This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, but can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war, pushing the price of a book about flies to $23 million. Or the 2010 stock market flash crash. Things get even harder to keep track of when evolutionary algorithms and other “learning” methods are used. Using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS game with unorthodox, non-intuitive fleet designs. This fun YouTube video shows a Super Mario-playing greedy algorithm figuring out how to make use of several hitherto-unknown game glitches to win (see 10:47).
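The fly-book incident is easy to sketch in code. The toy simulation below assumes two sellers whose algorithms each set their price as a fixed multiple of the other’s, with no sanity cap (the multipliers are approximations of those reported at the time; the point is the feedback loop, not the exact figures):

```python
# Toy reconstruction of the "$23 million book about flies" pricing war.
# Seller A slightly undercuts seller B; seller B prices well above A
# (relying on its better reputation). Each rule is locally sensible,
# but composed they multiply the price by ~1.27 per cycle.

def pricing_war(price_a=40.0, price_b=40.0, cap=23_000_000):
    days = 0
    while max(price_a, price_b) < cap:
        price_a = 0.9983 * price_b      # undercut the rival by ~0.2%
        price_b = 1.270589 * price_a    # price ~27% above the rival
        days += 1
    return days, price_b

days, final = pricing_war()
print(f"Price passes $23M after {days} iterations: ${final:,.2f}")
```

Neither rule looks dangerous in isolation; the divergence only appears once they interact, which is exactly the kind of “unusual circumstance” that is hard to spot-test in advance.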
Why should this concern us? As the decision-making processes become more complicated, and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – so long as the results turn out well the vast majority of the time. The upshot is that we have to just “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.
Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
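The arithmetic behind this worry is worth making explicit: a lower error *rate* does not guarantee a lower *expected* loss if the failures are unbounded. The numbers in the sketch below are illustrative assumptions, not measurements:

```python
# Back-of-envelope comparison: frequent-but-bounded human errors vs.
# rare-but-dramatic machine errors. All magnitudes are made up for
# illustration; only the structure of the argument matters.

def expected_loss(error_rate, loss_per_error):
    return error_rate * loss_per_error

human   = expected_loss(error_rate=1e-3, loss_per_error=10)      # wrong 0.1% of the time, bounded damage
machine = expected_loss(error_rate=1e-4, loss_per_error=10_000)  # wrong 0.01% of the time, unbounded damage

print(f"human:   {human}")    # 0.01
print(f"machine: {machine}")  # 1.0
```

With these (made-up) magnitudes the machine errs a tenth as often yet is a hundred times more costly in expectation – which is one way of cashing out why the 0.01% failures deserve careful scrutiny.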
Cultural bias and the evaluation of medical evidence: An update on the AAP
Since my article on the American Academy of Pediatrics’ recent change in policy regarding infant male circumcision was posted back in August of 2012, some interesting developments have come about. Two major critiques of the AAP documents were published in leading international journals, one in the Journal of Medical Ethics, and a second in the AAP’s very own Pediatrics. In the second of these, 38 distinguished pediatricians, pediatric surgeons, urologists, medical ethicists, and heads of hospital boards and children’s health societies throughout Europe and Canada argued that there is “Cultural Bias in the AAP’s 2012 Technical Report and Policy Statement on Male Circumcision.”
The AAP took the time to respond to this possibility in a formal reply, also published in Pediatrics earlier this year. Rather than thoughtfully addressing the specific charge of cultural bias, however, the AAP elected to boomerang the criticism, implying that their critics were themselves biased, only against circumcision. To address this interesting allegation, I have updated my original blog post. Interested readers can click here to see my analysis.
Finally, please note that articles from the Journal of Medical Ethics special issue on circumcision are (at long last) beginning to appear online. The print issue will follow shortly. Also be sure to see this recent critique of the AAP in a thoughtful book by JME contributor and medical historian Dr. Robert Darby, entitled: “The Sorcerer’s Apprentice: Why Can’t the US Stop Circumcising Boys?”
This is a brief note to alert the readers of Practical Ethics that research by myself, Anders Sandberg, and Julian Savulescu on the potential therapeutic uses of “love drugs” and “anti-love drugs” has recently been featured in an interview for the national Canadian broadcast program, “Q” with Jian Ghomeshi (airing on National Public Radio in the United States).
Readers may also be interested in checking out a new website, “Love in the Age of Enhancement” which collects the various academic essays, magazine articles, and media coverage of these arguments concerning the neuroenhancement of human relationships.
The first two weeks of 2013 were marked by a flurry of news articles considering “the new science” of pedophilia. Alan Zarembo’s article for the Los Angeles Times focused on the increasing consensus among researchers that pedophilia is a biological predisposition similar to heterosexuality or homosexuality. Rachel Aviv’s piece for The New Yorker shed light upon the practice of ‘civil commitment’ in the US, a process by which inmates may be kept in jail past their release date if a panel decides that they are at risk of molesting a child (even if there is no evidence that they have in the past). The Guardian’s Jon Henley quoted sources suggesting that perhaps some pedophilic relationships aren’t all that harmful after all. And Rush Limbaugh chimed in, comparing the ‘normalization’ of pedophilia to the historical increase in the acceptance of homosexuality, suggesting that recognizing pedophilia as a sexual orientation would be tantamount to condoning child molestation.
So what does it all mean? While most people I talked to in the wake of these stories (I include myself) were fascinated by the novel scientific evidence and the compelling profiles of self-described pedophiles presented in these articles, we all seemed to have a difficult time wrapping our minds around the ethical considerations at play. Why does it matter for our moral appraisal of pedophiles whether pedophilia is innate or acquired? Is it wrong to imprison someone for a terrible crime that they have not yet committed but are at a “high risk” of committing in the future? And if we say that we can’t “blame” pedophiles for their attraction to children because it is not their “fault” – they were “born this way” – is it problematic to condemn individuals for acting upon these (and other harmful) desires if it can be shown that poor impulse control is similarly genetically predisposed? While I don’t get around to fully answering most of these questions in the following post, my aim is to tease out the highly interrelated issues underlying these questions with the goal of working towards a framework by which the moral landscape of pedophilia can be understood.