What Fuels the Fighting: Disagreement over Facts or Values?

In a particularly eye-catching pull quote in the November issue of The Atlantic, journalist and scholar Robert Wright claims, “The world’s gravest conflicts are not over ethical principles or disputed values but over disputed facts.”[1]

The essay, called “Why We Fight – And Can We Stop?” in the print version and “Why Can’t We All Just Get Along? The Uncertain Biological Basis of Morality” in the online version, reviews new research by psychologists Joshua Greene and Paul Bloom on the biological foundations of our moral impulses. Focusing mainly on Greene’s newest book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, Wright details Greene’s proposed solution to the rampant group conflict we see both domestically and internationally. Suggesting that we are evolutionarily wired to cooperate or ‘get along’ with members of groups to which we belong, Greene identifies the key cause of fighting as different groups’ “incompatible visions of what a moral society should be.”[2] And his answer is to strive for a ‘metamorality’ – a universally shared moral perspective (he suggests utilitarianism) that would create a global in-group thus facilitating cooperation.

Announcement: “Brave New Love” in AJOB:Neuroscience – peer commentaries due October 7

Announcement: “Brave New Love” – peer commentaries due October 7

Dear Practical Ethics readers,

The paper, “Brave new love: the threat of high-tech ‘conversion’ therapy and the bio-oppression of sexual minorities,” by Brian D. Earp, Anders Sandberg, and Julian Savulescu, has been accepted for publication in the American Journal of Bioethics: Neuroscience. Proposals for open peer commentaries are due this Monday, October 7th.

The article may be accessed here, or at the following link: http://editorial.bioethics.net. Be sure to select AJOB:Neuroscience from the drop-down menu of journals. Here is an abstract of the argument:

============================

Abstract: Our understanding of the neurochemical bases of human love and attachment, as well as of the genetic, epigenetic, hormonal, and experiential factors that conspire to shape an individual’s sexual orientation, is increasing exponentially. This research raises the vexing possibility that we may one day be equipped to modify such variables directly, allowing for the creation of “high-tech” conversion therapies or other suspect interventions. In this paper, we discuss the ethics surrounding such a possibility, and call for the development of legal and procedural safeguards for protecting vulnerable children from the application of such technology. We also consider the more difficult case of voluntary, adult “conversion” and argue that in rare cases, such attempts might be permissible under strict conditions.

============================

Open Peer Commentary articles are typically between 500 and 1,500 words and contain no more than 10 references. A guide to writing an Open Peer Commentary is available under the Resources section “Instructions and Forms” at http://editorial.bioethics.net. AJOB:Neuroscience asks that you submit a short summary of your proposed Open Peer Commentary (no more than 1-2 paragraphs) by Monday, October 7, 2013. Please submit your proposal online via the AJOB:Neuroscience Editorial site, following the instructions provided there. They ask that you not prepare a full commentary yet. Once they have evaluated your proposal, they will contact you via email to let you know whether or not they were able to include you on the final list of those to be asked to submit an Open Peer Commentary.

You will then have until Friday, October 25, 2013 to submit your full Open Peer Commentary.

 

Twitter, paywalls, and access to scholarship — are license agreements too restrictive?

By Brian D. Earp

Follow Brian on Twitter by clicking here.

I think I may have done something unethical today. But I’m not quite sure, dear reader, so I’m enlisting your energy to help me think things through. Here’s the short story:

Someone posted a link to an interesting-looking article by Caroline Williams at New Scientist — on the “myth” that we should live and eat like cavemen in order to match our lifestyle to that of our evolutionary ancestors, and thereby maximize health. Now, I assume that when you click on the link I just gave you (unless you’re a New Scientist subscriber), you get a short little blurb from the beginning of the article and then–of course–it dissolves into an ellipsis as soon as things start to get interesting:

Our bodies didn’t evolve for lying on a sofa watching TV and eating chips and ice cream. They evolved for running around hunting game and gathering fruit and vegetables. So, the myth goes, we’d all be a lot healthier if we lived and ate more like our ancestors. This “evolutionary discordance hypothesis” was first put forward in 1985 by medic S. Boyd Eaton and anthropologist Melvin Konner …

Holy crap! The “evolutionary discordance hypothesis” is a myth? I hope not, because I’ve been using some similar ideas in a lot of my arguments about neuroenhancement recently. So I thought I should really plunge forward and read the rest of the article. Unfortunately, I don’t have a subscription to New Scientist, and when I logged into my Oxford VPN-thingy, I discovered that Oxford doesn’t have access either. Weird. What was I to do?

Since I typically have at least one eye glued to my Twitter account, it occurred to me that I could send a quick tweet around to check if anyone had the PDF and would be willing to send it to me in an email. The majority of my “followers” are fellow academics, and I’ve seen this strategy play out before — usually when someone’s institutional log-in isn’t working, or when a key article is behind a pay-wall at one of those big “bundling” publishers that everyone seems to hold in such low regard. Another tack would be to dash off an email to a couple of colleagues of mine, and I could “CC” the five or six others who seem likeliest to be New Scientist subscribers. In any case, I went for the tweet.

Sure enough, an hour or so later, a chemist friend of mine sent me a message to “check my email” and there was the PDF of the “caveman” article, just waiting to be devoured. I read it. It turns out that the “evolutionary discordance hypothesis” is basically safe and sound, although it may need some tweaking and updates. Phew. On to other things.

But then something interesting happened! Whoever it is that manages the New Scientist Twitter account suddenly shows up in my Twitter feed with a couple of carefully-worded replies to my earlier PDF-seeking hail-mary:

Would you hand over a moral decision to a machine? Why not? Moral outsourcing and Artificial Intelligence.

Artificial Intelligence and Human Decision-making.

Recent developments in artificial intelligence are allowing an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions – such as algorithms on the financial markets deciding what trades to make and how. However, the range of decisions that can be computerised is increasing, and as many operational decisions have moral consequences, they can be considered to have a moral component.

One area in which this is causing growing concern is military robotics. The degree of autonomy with which uninhabited aerial vehicles and ground robots are capable of functioning is steadily increasing. There is extensive debate over the circumstances in which robotic systems should be able to operate with a human “in the loop” or “on the loop” – and the circumstances in which a robotic system should be able to operate independently. A coalition of international NGOs recently launched a campaign to “stop killer robots”.

While strong arguments have been raised against robotic systems being able to use lethal force against human combatants autonomously, it is becoming increasingly clear that in many circumstances in the near future the “no human in the loop” robotic system will have advantages over the “in the loop” system. Automated systems already have better perception and faster reflexes than humans in many ways, and are slowed down by the human input. The human “added value” comes from our judgement and decision-making – but these are by no means infallible, and will not always be superior to the machine’s. At June’s Center for a New American Security (CNAS) conference, Rosa Brooks (former Pentagon official, now Georgetown Law professor) put this in a provocative way:
“Our record- we’re horrible at it [making “who should live and who should die” decisions] … it seems to me that it could very easily turn out to be the case that computers are much better than we are doing. And the real ethical question would be can we ethically and lawfully not let the autonomous machines do that when they’re going to do it better than we will.” (1)

For a non-military example, consider the adaptation of IBM’s Jeopardy-winning “Watson” for use in medicine. As evidenced by IBM’s technical release this week, progress in developing these systems continues apace (shameless plug: Selmer Bringsjord, the AI researcher “putting Watson through college”, will speak in Oxford about “Watson 2.0” next month as part of the Philosophy and Theory of AI conference).

Soon we will have systems that will enter use as doctors’ aides – able to analyse the world’s medical literature to diagnose a medical problem and provide recommendations to the doctor. But it seems likely that a time will come when these thorough analyses produce recommendations that are sometimes at odds with the doctor’s recommendation – but are proven to be more accurate on average. To return to combat, we will have robotic systems that can devise and implement non-intuitive (to humans) strategies that involve using lethal force, but achieve a military objective more efficiently and with less loss of life. Human judgement added to the loop may prove to be an impairment.

Moral Outsourcing

At a recent academic workshop I attended on autonomy in military robotics, a speaker posed a pair of questions to test intuitions on this topic.
“Would you allow another person to make a moral decision on your behalf? If not, why not?” He asked the same pair of questions substituting “machine” for “a person”.

Regarding the first pair of questions, we all do this kind of moral outsourcing to a certain extent – allowing our peers, writers, and public figures to influence us. However, I was surprised to find I was unusual in doing this in a deliberate and systematic manner. In the same way that I rely on someone with the right skills and tools to fix my car, I deliberately outsource a wide range of moral questions to people who I know can answer them better than I can. These people tend to be better informed on specific issues than I am, have had more time to think them through, and in some cases are just plain better at making moral assessments. I of course select for people who have a roughly similar world view to mine, and from time to time do “spot tests” – digging through their reasoning to make sure I agree with it.

We each live at the centre of a spiderweb of moral decisions – some obvious, some subtle. As a consequentialist I don’t believe that “opting out” by taking the default course or ignoring many of them absolves me of responsibility. However, I just don’t have time to research, think about, and make sound morally-informed decisions about my diet, the impact of my actions on the environment, feminism, politics, fair trade, social equality – the list goes on. So I turn to people who can, and who will make as good a decision as I would in ideal circumstances (or a better one) nine times out of ten.

So Why Shouldn’t I Trust The Machine?

So to the second pair of questions:
“Would you allow a machine to make a moral decision on your behalf? If not, why not?”

It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: make a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to weigh up the facts and make a decision as good as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.

So why not trust the machine?

Human decision-making is riddled with biases and inconsistencies, and can be heavily affected by factors as minor as fatigue or when we last ate. For all that, our inconsistencies are relatively predictable, and have bounds. Every bias we know about can be taken into account, and corrected for to some extent. And there are limits to how insane an intelligent, balanced person’s “wrong” decision will be – even if my moral “outsourcees” are “less right” than me 1 time out of 10, there’s a limit to how bad their wrong decision will be.

This is not necessarily the case with machines. When a machine is “wrong”, it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could.
Simple algorithms should be extremely predictable, but can make bizarre decisions in “unusual” circumstances. Consider the two simple pricing algorithms that got into a pricing war, pushing the price of a book about flies to $23 million. Or the 2010 stock market flash crash. Behaviour becomes even more difficult to keep track of when evolutionary algorithms and other “learning” methods are used. Using self-modifying heuristics, Douglas Lenat’s Eurisko won the US championship of the Traveller TCS wargame with unorthodox, non-intuitive fleet designs. This fun YouTube video shows a greedy algorithm playing Super Mario figuring out how to exploit several hitherto-unknown game glitches in order to win (see 10:47).
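
To make the pricing-war example concrete, here is a minimal Python sketch of how two individually sensible repricing rules can form a runaway feedback loop. This is not the sellers’ actual code: the starting prices are invented, and the 0.9983 and 1.2706 multipliers are roughly those reported in accounts of the incident.

    import itertools

    # Seller A always undercuts its rival slightly; seller B always lists at a
    # premium over A. Each rule is harmless on its own, but together they
    # multiply the price by roughly 1.27 every repricing round, with no ceiling.
    price_a, price_b = 40.00, 45.00   # invented starting prices, in dollars

    for day in itertools.count(1):
        price_a = round(0.9983 * price_b, 2)   # A: match B, minus ~0.2%
        price_b = round(1.2706 * price_a, 2)   # B: list at ~27% above A
        if price_b > 23_000_000:
            print(f"Price passes $23 million after {day} repricing rounds.")
            break

Neither rule looks “insane” in isolation – which is the point: the bizarre outcome only emerges from their interaction.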

Why should this concern us? As decision-making processes become more complicated, and the strategies more non-intuitive, it becomes ever harder to “spot test” whether we agree with them – especially when the results turn out well the vast majority of the time. The upshot is that we simply have to “trust” the methods and strategies more and more. It also becomes harder to figure out how, why, and in what circumstances the machine will go wrong – and what the magnitude of the failure will be.

Even if we are outperformed 99.99% of the time, the unpredictability of the 0.01% failures may be a good reason to consider carefully what and how we morally outsource to the machine.
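
A toy expected-cost comparison makes the worry explicit. The numbers below are invented purely for illustration: the human errs often but boundedly, while the machine errs once in ten thousand decisions but with no bound on how costly that error is.

    # Invented illustrative numbers: error rates and per-error costs.
    human_error_rate,   human_error_cost   = 0.10,   10          # frequent but bounded mistakes
    machine_error_rate, machine_error_cost = 0.0001, 1_000_000   # rare but potentially catastrophic

    expected_human_loss   = human_error_rate * human_error_cost       # = 1.0
    expected_machine_loss = machine_error_rate * machine_error_cost   # = 100.0

    # The machine is "right" 99.99% of the time, yet its expected loss here is
    # 100 times the human's, because nothing bounds how bad its rare failures are.
    print(expected_human_loss, expected_machine_loss)

On these (made-up) figures, being outperformed 99.99% of the time is entirely compatible with the machine being the worse bet overall.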

1. Transcript available here.
For further discussion on Brooks’s talk, see Foreign Policy Magazine articles here and here.

Cultural bias and the evaluation of medical evidence: An update on the AAP

By Brian D. Earp

Follow Brian on Twitter by clicking here.

Since my article on the American Academy of Pediatrics’ recent change in policy regarding infant male circumcision was posted back in August of 2012, some interesting developments have come about. Two major critiques of the AAP documents were published in leading international journals, one in the Journal of Medical Ethics, and a second in the AAP’s very own Pediatrics. In the second of these, 38 distinguished pediatricians, pediatric surgeons, urologists, medical ethicists, and heads of hospital boards and children’s health societies throughout Europe and Canada argued that there is “Cultural Bias in the AAP’s 2012 Technical Report and Policy Statement on Male Circumcision.”

The AAP took the time to respond to this possibility in a formal reply, also published in Pediatrics earlier this year. Rather than thoughtfully addressing the specific charge of cultural bias, however, the AAP elected to boomerang the criticism, implying that their critics were themselves biased, only against circumcision. To address this interesting allegation, I have updated my original blog post. Interested readers can click here to see my analysis.

Finally, please note that articles from the Journal of Medical Ethics special issue on circumcision are (at long last) beginning to appear online. The print issue will follow shortly. Also be sure to see this recent critique of the AAP in a thoughtful book by JME contributor and medical historian Dr. Robert Darby, entitled: “The Sorcerer’s Apprentice: Why Can’t the US Stop Circumcising Boys?”

– BDE 

How to deal with double-edged technology

By Brian D. Earp

 World’s smallest drone? Or how to deal with double-edged technology 

BBC News reports that Harvard scientists have developed the world’s smallest flying robot. It’s about the size of a penny, and it moves faster than a human hand can swat. Of course, the inventors of this “diminutive flying vehicle” immediately lauded its potential for bringing good to the world:

1. “We could envision these robots being used for search-and-rescue operations to search for human survivors under collapsed buildings or [in] other hazardous environments.”

2. “They [could] be used for environmental monitoring, to be dispersed into a habitat to sense trace chemicals or other factors.”

3. They might even behave like many real insects and assist with the pollination of crops, “to function as the now-struggling honeybee populations do in supporting agriculture around the world.”

These all seem like pretty commendable uses of a new technology. Yet one can think of some “bad” uses too. The “search and rescue” version of this robot (for example) would presumably be fitted with a camera; and the prospect of a swarm of tiny, remote-controlled flying video recorders raises some obvious questions about spying and privacy. It also prompts one to wonder who will have access to these spy bugs (the U.S. Air Force has long been interested in building miniature espionage drones), and whether there will be effective regulatory strategies capable of tilting future usage more toward the search-and-rescue side of things, and away from the peep-and-record side.

Brief announcement: Interview about ‘love drugs’ on “Q” with Jian Ghomeshi

By Brian D. Earp

Interview announcement

This is a brief note to alert the readers of Practical Ethics that research by Anders Sandberg, Julian Savulescu, and me on the potential therapeutic uses of “love drugs” and “anti-love drugs” has recently been featured in an interview for the national Canadian broadcast program, “Q” with Jian Ghomeshi (airing on National Public Radio in the United States).

Here is a link to the interview.

Readers may also be interested in checking out a new website, “Love in the Age of Enhancement”, which collects the various academic essays, magazine articles, and media coverage of these arguments concerning the neuroenhancement of human relationships.

Pedophilia, Preemptive Imprisonment, and the Ethics of Predisposition

The first two weeks of 2013 were marked by a flurry of news articles considering “the new science” of pedophilia. Alan Zarembo’s article for the Los Angeles Times focused on the increasing consensus among researchers that pedophilia is a biological predisposition similar to heterosexuality or homosexuality. Rachel Aviv’s piece for The New Yorker shed light upon the practice of ‘civil commitment’ in the US, a process by which inmates may be kept in jail past their release date if a panel decides that they are at risk of molesting a child (even if there is no evidence that they have in the past). The Guardian’s Jon Henley quoted sources suggesting that perhaps some pedophilic relationships aren’t all that harmful after all. And Rush Limbaugh chimed in comparing the ‘normalization’ of pedophilia to the historical increase in the acceptance of homosexuality, suggesting that recognizing pedophilia as a sexual orientation would be tantamount to condoning child molestation.

So what does it all mean? While most people I talked to in the wake of these stories (I include myself) were fascinated by the novel scientific evidence and the compelling profiles of self-described pedophiles presented in these articles, we all seemed to have a difficult time wrapping our minds around the ethical considerations at play. Why does it matter for our moral appraisal of pedophiles whether pedophilia is innate or acquired? Is it wrong to imprison someone for a terrible crime that they have not yet committed but are at a “high risk” of committing in the future? And if we say that we can’t “blame” pedophiles for their attraction to children because it is not their “fault” – they were “born this way” – is it problematic to condemn individuals for acting upon these (and other harmful) desires if it can be shown that poor impulse control is similarly genetically predisposed? While I don’t get around to fully answering most of these questions in the following post, my aim is to tease out the highly interrelated issues underlying them, with the goal of working towards a framework by which the moral landscape of pedophilia can be understood.

Turning the Camera Around: What Newtown Tells Us About Ourselves

On the morning of December 14th, 20-year-old Adam Lanza opened fire within the halls of Sandy Hook Elementary School in Newtown, Connecticut, killing 20 children and six adult staff members before turning his gun on himself. In the hours that followed, journalists from every major news station in the nation inundated the tiny town, and in the days that followed, the country as a whole started down a familiar path best characterized by a plethora of ‘if only-isms’.

It began in the immediate hours following the shooting: if only we had stricter gun control laws, this wouldn’t have happened. This is perhaps an unsurprising first response in a country that accounts for 4.5% of the world’s population and 40% of the world’s civilian firearms.[1] Over the next few days, as a portrait of the shooter began to emerge and friends and family revealed that he was an avid gamer, a second theory surfaced in the headlines: if only our children weren’t exposed to such violent video games, this tragedy never would have occurred.[2] [3] And just in the past few days, public discourse has converged on the gunman’s mental health, the general conclusion being that if only we had better mental health services in place, this wouldn’t have happened.[4][5] (The National Rifle Association [NRA] even tried to jump on board, suggesting that “26 innocent lives might have been spared” if only we had an armed police guard in every school in America.[6] They seem to be the only ones taking themselves seriously.[7])

“Treating” homosexuality in minors: Protected free speech or child abuse?

By Brian D. Earp

See Brian’s most recent previous post by clicking here.

See all of Brian’s previous posts by clicking here.

Follow Brian on Twitter by clicking here.

 

Should mental health providers be allowed to try to “cure” minors of their homosexuality?
