
Being a Good Person by Deceit?

By Nadira Faulmüller & Lucius Caviola

Recently, Peter Singer, Paul Bloom and Dan Ariely discussed topics surrounding the psychology of morality. Peter emphasized the importance of helping people in need by donating money to poverty-fighting charities. That’s easier said than done: humans don’t seem to have a strong innate desire to help distant strangers. So the question arises of how we can motivate people to donate considerable amounts to charity. Peter suggested that appropriate social norms could be established; as Dan pointed out, people’s behaviour needs to be observable by others in order to make them more moral – only then will they be motivated to help strangers on the other side of the world. Is this true? Do people only behave prosocially because they feel socially pressured into doing so?

Continue reading

Female genital mutilation (FGM) and male circumcision: time to confront the double standard

By Brian D. Earp

Follow Brian on Twitter by clicking here.

This month, the Guardian launched a campaign in conjunction with Change.org (the petition is here) to end “female genital mutilation” (FGM) in the UK—see Dominic Wilkinson’s recent analysis on this blog. I support this campaign and I believe that FGM is impermissible. Indeed, I think that all children, whether female, intersex, or male, should be protected from having parts of their genitals removed unless there is a pressing medical indication; I think this is so regardless of the cultural or religious affiliations of the child’s parents; and I have given some arguments for this view here, here, here, here, and here. But note that some commentators are loath to accept so broadly applied an ethical principle: to discuss FGM in the same breath as male circumcision, they think, is to “trivialize” the former and to cause all manner of moral confusion.

Consider these recent tweets by Michael Shermer, the prominent American “skeptic” and promoter of science and rationalism:

This sort of view appears to be common. One frequent claim is that FGM is analogous to “castration” or a “total penectomy,” such that any sort of comparison between it and male circumcision is entirely inappropriate (see this paper for further discussion). Some other common arguments are these:

1. Female genital mutilation and male circumcision are totally different. FGM is necessarily barbaric and crippling (“always torture,” according to Tanya Gold), whereas male circumcision is no big deal.

2. Male circumcision is a “minor” intervention that might even confer health benefits, whereas FGM is a drastic intervention with no health benefits, and only causes harm.

3. The “prime motive” for FGM is to control women’s sexuality (cf. Shermer in the tweets above); it is inherently sexist and discriminatory and is an expression of male power and domination. Male circumcision, by contrast, has nothing to do with controlling male sexuality – it’s “just a snip” and in any case “men don’t complain.”

4. FGM eliminates the enjoyment of sex, whereas male circumcision has no meaningful effects on sexual sensation or satisfaction.

5. It is perfectly reasonable to oppose all forms of female genital cutting while at the same time accepting or even endorsing infant male circumcision.

Yet almost every one of these claims is untrue, or is severely misleading at best. Such views derive from a superficial understanding of both FGM and male circumcision; and they are inconsistent with the latest critical scholarship concerning these and related practices. Their constant repetition in popular discourse, therefore—including by those like Shermer with a large and loyal audience base—is unhelpful to advancing moral debate.

Continue reading

Emergence’s devil haunts the moral enhancer’s kingdom come

It is 2025. Society has increasingly realised the importance of breaking evolution’s chains and enhancing the human condition. Large grants are awarded for building sci-fi-like laboratories to search for and create the ultimate moral enhancer. After just a few years, humanity believes it has made one of its greatest breakthroughs: a pill that will rid our morality of all its faults. Without any side-effects, it vastly increases our ability to cooperate and to think rationally about moral issues, while also enhancing our empathy and compassion for the whole of humanity. By shifting individuals’ social value orientation towards cooperation, this pill will allow us to build safe, efficient and peaceful societies. It will usher in a pro-social paradise on earth, the moral enhancer’s kingdom come.

I believe we had better think twice before rushing into this pro-social paradise on the cheap. Not because we would lose “the X factor”, not because it would violate autonomy, and not because such a drug would cause us to exit our own species. Even if all those objections are refuted, even if the drug has no side-effects, even if each and every human being, by some miracle, willingly takes the drug without any coercion whatsoever – even then, I contend, we could still be in trouble.

Continue reading

What Fuels the Fighting: Disagreement over Facts or Values?

In a particularly eye-catching pull quote in the November issue of The Atlantic, journalist and scholar Robert Wright claims, “The world’s gravest conflicts are not over ethical principles or disputed values but over disputed facts.”[1]

The essay, called “Why We Fight – And Can We Stop?” in the print version and “Why Can’t We All Just Get Along? The Uncertain Biological Basis of Morality” in the online version, reviews new research by psychologists Joshua Greene and Paul Bloom on the biological foundations of our moral impulses. Focusing mainly on Greene’s newest book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Wright details Greene’s proposed solution to the rampant group conflict we see both domestically and internationally. Suggesting that we are evolutionarily wired to cooperate or ‘get along’ with members of groups to which we belong, Greene identifies the key cause of fighting as different groups’ “incompatible visions of what a moral society should be.”[2] His answer is to strive for a ‘metamorality’ – a universally shared moral perspective (he suggests utilitarianism) that would create a global in-group, thus facilitating cooperation.

Continue reading

Announcement: “Brave New Love” in AJOB:Neuroscience – peer commentaries due October 7

Dear Practical Ethics readers,

The paper, “Brave new love: the threat of high-tech ‘conversion’ therapy and the bio-oppression of sexual minorities” by Brian D. Earp, Anders Sandberg, and Julian Savulescu, has been accepted for publication in the American Journal of Bioethics: Neuroscience. Proposals for open peer commentaries are due this Monday, October 7th.

The article may be accessed here, or at the following link: http://editorial.bioethics.net. Be sure to select AJOB:Neuroscience from the drop-down menu of journals. Here is an abstract of the argument:

============================

Abstract: Our understanding of the neurochemical bases of human love and attachment, as well as of the genetic, epigenetic, hormonal, and experiential factors that conspire to shape an individual’s sexual orientation, is increasing exponentially. This research raises the vexing possibility that we may one day be equipped to modify such variables directly, allowing for the creation of “high-tech” conversion therapies or other suspect interventions. In this paper, we discuss the ethics surrounding such a possibility, and call for the development of legal and procedural safeguards for protecting vulnerable children from the application of such technology. We also consider the more difficult case of voluntary, adult “conversion” and argue that in rare cases, such attempts might be permissible under strict conditions.

============================

Open Peer Commentary articles are typically between 500 and 1,500 words and contain no more than 10 references. A guide to writing an Open Peer Commentary is available under the Resources section “Instructions and Forms” at http://editorial.bioethics.net. AJOB:Neuroscience asks that by Monday, October 7, 2013 you submit a short summary of your proposed Open Peer Commentary (no more than 1-2 paragraphs). Please submit your proposal online via the AJOB:Neuroscience Editorial site, following the instructions provided there. They ask that you do not prepare a full commentary yet. Once they have evaluated your proposal, they will contact you via email to let you know whether or not they were able to include you on the final list of those to be asked to submit an Open Peer Commentary.

You will then have until Friday, October 25, 2013 to submit your full Open Peer Commentary.

 

Twitter, paywalls, and access to scholarship — are license agreements too restrictive?

By Brian D. Earp

Follow Brian on Twitter by clicking here.

I think I may have done something unethical today. But I’m not quite sure, dear reader, so I’m enlisting your energy to help me think things through. Here’s the short story:

Someone posted a link to an interesting-looking article by Caroline Williams at New Scientist – on the “myth” that we should live and eat like cavemen in order to match our lifestyle to that of our evolutionary ancestors, and thereby maximize health. Now, I assume that when you click on the link I just gave you (unless you’re a New Scientist subscriber), you get a short little blurb from the beginning of the article and then – of course – it dissolves into an ellipsis as soon as things start to get interesting:

Our bodies didn’t evolve for lying on a sofa watching TV and eating chips and ice cream. They evolved for running around hunting game and gathering fruit and vegetables. So, the myth goes, we’d all be a lot healthier if we lived and ate more like our ancestors. This “evolutionary discordance hypothesis” was first put forward in 1985 by medic S. Boyd Eaton and anthropologist Melvin Konner …

Holy crap! The “evolutionary discordance hypothesis” is a myth? I hope not, because I’ve been using some similar ideas in a lot of my arguments about neuroenhancement recently. So I thought I should really plunge forward and read the rest of the article. Unfortunately, I don’t have a subscription to New Scientist, and when I logged into my Oxford VPN-thingy, I discovered that Oxford doesn’t have access either. Weird. What was I to do?

Since I typically have at least one eye glued to my Twitter account, it occurred to me that I could send a quick tweet around to check if anyone had the PDF and would be willing to send it to me in an email. The majority of my “followers” are fellow academics, and I’ve seen this strategy play out before — usually when someone’s institutional log-in isn’t working, or when a key article is behind a pay-wall at one of those big “bundling” publishers that everyone seems to hold in such low regard. Another tack would be to dash off an email to a couple of colleagues of mine, and I could “CC” the five or six others who seem likeliest to be New Scientist subscribers. In any case, I went for the tweet.

Sure enough, an hour or so later, a chemist friend of mine sent me a message to “check my email” and there was the PDF of the “caveman” article, just waiting to be devoured. I read it. It turns out that the “evolutionary discordance hypothesis” is basically safe and sound, although it may need some tweaking and updates. Phew. On to other things.

But then something interesting happened! Whoever it is that manages the New Scientist Twitter account showed up in my Twitter feed with a couple of carefully worded replies to my earlier PDF-seeking hail-mary:

Continue reading

Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.

Artificial Intelligence and Human Decision-making.

Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make and how. However, the range of decisions that can be computerised is increasing, and as many operational decisions have moral consequences, they can be considered to have a moral component.

One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.

While strong arguments have been raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this in a provocative way:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)

For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).

Soon we will have systems that enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s own – yet are proven to be more accurate on average. To return to combat: we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, but that achieve a military objective more efficiently and with less loss of life. Human judgement added to the loop may prove to be an impairment.

Moral Outsourcing

At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He then asked the same pair of questions, substituting “a machine” for “another person”.

Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people whose world view is roughly similar to mine, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.

We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.

So Why Shouldn’t I Trust The Machine?

So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”

It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: make a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as well as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.

So why not trust the machine?

Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by something as small as fatigue or when we last ate. For all that, our inconsistencies are relatively predictable and have bounds. Every bias we know about can be taken into account and corrected for, to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than I am one time out of ten, there is a limit to how bad their wrong decisions can be.

This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could be.
Simple algorithms should be extremely predictable, yet they can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war, pushing the price of a book about flies to $23 million. Or the 2010 stock market flash crash. It becomes even harder to keep track when evolutionary algorithms and other “learning” methods are used: using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS game with unorthodox, non-intuitive fleet designs. This fun YouTube video shows a Super Mario-playing greedy algorithm figuring out how to exploit several hitherto-unknown game glitches to win (see 10:47).
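To make the pricing-war mechanism concrete, here is a minimal, purely illustrative Python sketch – not the actual sellers’ code; the multipliers and starting prices are hypothetical – of how two individually reasonable relative-pricing rules can push a price up without bound once they start reacting to each other:

```python
# Hypothetical illustration of an algorithmic pricing war.
# Each rule is sensible on its own; the runaway behaviour only
# appears when the two rules react to each other.

def run_pricing_war(price_a=20.0, price_b=20.0, days=60):
    for day in range(1, days + 1):
        price_a = 0.998 * price_b   # Seller A: slightly undercut Seller B
        price_b = 1.270 * price_a   # Seller B: price above A, relying on a better seller rating
        if day % 10 == 0:
            print(f"day {day:2d}: A = ${price_a:,.2f}  B = ${price_b:,.2f}")

run_pricing_war()
# The combined daily factor is roughly 1.27, so prices grow exponentially
# and pass the million-dollar mark within a couple of months.
```

The point is not the particular numbers but that neither rule looks “insane” in isolation; the bizarre outcome lives in the interaction.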

Why should this concern us? As decision-making processes become more complicated and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them, provided the results turn out well the vast majority of the time. The upshot is that we have to simply “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.

Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
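One way to see why the rare failures matter is a back-of-the-envelope expected-harm comparison. The numbers below are entirely hypothetical and chosen only for illustration: a frequent but bounded human error rate against a very rare machine error whose severity may or may not be bounded.

```python
# Toy expected-harm comparison (all numbers hypothetical).

def expected_harm(error_rate, harm_per_error):
    return error_rate * harm_per_error

human        = expected_harm(error_rate=0.10,   harm_per_error=10)       # frequent, bounded mistakes
machine_mild = expected_harm(error_rate=0.0001, harm_per_error=10)       # rare mistakes, same bound
machine_wild = expected_harm(error_rate=0.0001, harm_per_error=100_000)  # rare but dramatic mistakes

print(f"human decision-maker:        {human}")         # 1.0
print(f"machine, bounded failures:   {machine_mild}")  # 0.001
print(f"machine, unbounded failures: {machine_wild}")  # 10.0
# If the machine's rare failures stay within human-sized bounds, it wins easily.
# If a single failure can be 10,000 times worse, the 0.01% tail outweighs the
# 99.99% of decisions it gets right.
```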

1. Transcript available here.
For further discussion on Brooks’s talk, see Foreign Policy Magazine articles here and here.

Cultural bias and the evaluation of medical evidence: An update on the AAP

By Brian D. Earp

Follow Brian on Twitter by clicking here.

Since my article on the American Academy of Pediatrics’ recent change in policy regarding infant male circumcision was posted back in August of 2012, some interesting developments have come about. Two major critiques of the AAP documents were published in leading international journals, one in the Journal of Medical Ethics, and a second in the AAP’s very own Pediatrics. In the second of these, 38 distinguished pediatricians, pediatric surgeons, urologists, medical ethicists, and heads of hospital boards and children’s health societies throughout Europe and Canada argued that there is “Cultural Bias in the AAP’s 2012 Technical Report and Policy Statement on Male Circumcision.”

The AAP took the time to respond to this charge in a formal reply, also published in Pediatrics earlier this year. Rather than thoughtfully addressing the specific charge of cultural bias, however, the AAP elected to boomerang the criticism, implying that its critics were themselves biased – against circumcision. To address this interesting allegation, I have updated my original blog post. Interested readers can click here to see my analysis.

Finally, please note that articles from the Journal of Medical Ethics special issue on circumcision are (at long last) beginning to appear online. The print issue will follow shortly. Also be sure to see this recent critique of the AAP in a thoughtful book by JME contributor and medical historian Dr. Robert Darby, entitled: “The Sorcerer’s Apprentice: Why Can’t the US Stop Circumcising Boys?”

– BDE 

How to deal with double-edged technology

By Brian D. Earp

 World’s smallest drone? Or how to deal with double-edged technology 

BBC News reports that Harvard scientists have developed the world’s smallest flying robot. It’s about the size of a penny, and it moves faster than a human hand can swat. Of course, the inventors of this “diminutive flying vehicle” immediately lauded its potential for bringing good to the world:

1. “We could envision these robots being used for search-and-rescue operations to search for human survivors under collapsed buildings or [in] other hazardous environments.”

2. “They [could] be used for environmental monitoring, to be dispersed into a habitat to sense trace chemicals or other factors.”

3. They might even behave like many real insects and assist with the pollination of crops, “to function as the now-struggling honeybee populations do in supporting agriculture around the world.”

These all seem like pretty commendable uses of a new technology. Yet one can think of some “bad” uses too. The “search and rescue” version of this robot (for example) would presumably be fitted with a camera; and the prospect of a swarm of tiny, remote-controlled flying video recorders raises some obvious questions about spying and privacy. It also prompts one to wonder who will have access to these spy bugs (the U.S. Air Force has long been interested in building miniature espionage drones), and whether there will be effective regulatory strategies capable of tilting future usage more toward the search-and-rescue side of things, and away from the peep-and-record side.

Continue reading
