Does religion deserve a place in secular medicine?

By Brian D. Earp

The latest issue of the Journal of Medical Ethics is out, and in it, Professor Nigel Biggar—an Oxford theologian—argues that “religion” should have a place in secular medicine (click here for a link to the article).

Some people will feel a shiver go down their spines—and not only the non-religious. After all, different religions require different things, and sometimes they come to opposite conclusions. So whose religion, exactly, does Professor Biggar have in mind, and what kind of “place” is he trying to make a case for?

Continue reading

On the supposed distinction between culture and religion: A brief comment on Sir James Munby’s decision in the matter of B and G (children)

By Brian D. Earp (@briandavidearp)


What is the difference between ‘culture’ and ‘religion’ … ? From a legal standpoint, this question is important: practices which may be described as being ‘religious’ in nature are typically afforded much greater protection from interference by the state than those that are understood as being ‘merely’ cultural. One key area in which this distinction is commonly drawn is with respect to the non-therapeutic alteration of children’s genitals. When such alteration is done to female children, it is often said to be a ‘cultural’ practice that does not deserve legal protection; whereas, when it is done to male children, it is commonly said to be a ‘religious’ practice – at least for some groups – and must therefore not be restricted (much less forbidden) by law.

Is this a valid distinction?

Continue reading

“The Medicalization of Love” – call for peer commentaries – DUE SEPT 1


The paper, “The Medicalization of Love” by Brian D. Earp, Anders Sandberg, and Julian Savulescu, has been accepted for publication at the Cambridge Quarterly of Healthcare Ethics. Scholars interested in submitting a short reply paper or peer commentary are encouraged to contact the editor, Tomi Kushner.

The final deadline for commentaries/papers is September 1st. The abstract for the paper is below; the accepted manuscript is available at this link. Inquiries to the editor should be sent as soon as possible.


Pharmaceuticals or other emerging technologies could be used to enhance (or diminish) feelings of lust, attraction, and attachment in adult romantic partnerships. While such interventions could conceivably be used to promote individual (and couple) well-being, their widespread development and/or adoption might lead to “medicalization” of human love and heartache—for some, a source of serious concern. In this essay, we argue that the “medicalization of love” need not necessarily be problematic, on balance, but could plausibly be expected to have either good or bad consequences depending upon how it unfolds. By anticipating some of the specific ways in which these technologies could yield unwanted outcomes, bioethicists and others can help direct the course of love’s “medicalization”—should it happen to occur—more toward the “good” side than the “bad.”

Here is the link to the accepted manuscript.


Practical Ethics and Philosophy

It is now quite common to draw distinctions between three types of philosophical ethics. Practical ethics is meant to concern substantive moral issues facing many of us each day, such as abortion or climate change. Continue reading

Singularity Summit: How we’re predicting AI

When will we have proper AI? The literature is full of answers to this question, as confident as they are contradictory. In a talk given at the Singularity Institute in San Francisco, I analyse these predictions from a theoretical standpoint (should we even expect anyone to have good AI predictions at all?) and a practical one (do the predictions made look as if they have good information behind them?). I conclude that we should not put our trust in timeline predictions, though some philosophical predictions seem surprisingly effective. In all cases, however, we should increase our uncertainties and widen our error bars. If someone predicts the arrival of AI at some date with great confidence, we have every reason to think they’re completely wrong.

But this doesn’t make our own opinions any better, of course: your gut feeling is as good as any expert’s, which is to say, not any good at all.

Many thanks to the Future of Humanity Institute, the Oxford Martin School, the Singularity Institute, and my co-author Kaj Sotala. More details of the approach can be found online.

How to be a high impact philosopher, part II

In a previous post, I discussed how, as a philosopher, one should decide on a research area. I suggested that one method was to work out what are potentially the biggest problems the world faces, work out what the crucial normative considerations are, and then work on those areas. Call that the top-down method: starting with the problem, and working backwards to the actions one should take.

There’s a second method for high impact philosophy, however.  Let’s call it the bottom-up method.

  1. Begin by asking ‘which are the biggest decisions that one typically makes in life?’
  2. Then ask: ‘What are the crucial normative considerations that might affect how I should make those decisions?’
  3. Then figure out which of these crucial considerations is most likely to produce an action-relevant outcome given your marginal research time.
  4. Then work on that topic!

As in my previous post, I’ll go through each step in turn.

Continue reading

Sam Harris is wrong about science and morality

By Brian Earp (Follow Brian on Twitter by clicking here.)


I just finished a booklet by “New Atheist” Sam Harris — on lying — and I plan to write about it in the coming days. But I want to dig up an older Harris book, The Moral Landscape, so that I may express my hitherto unexpressed puzzlement about Harris’s (aging) “bold new” claim — presented in this book — that science can “determine human values” or “tell us what’s objectively true about morality” or “give us answers about right and wrong,” and the like.

In his new book (the one about lying) Harris says, in effect, you should never, ever, do it — yet his pretense in The Moral Landscape to be revolutionizing moral philosophy seems to me the very height of dishonesty. What he actually does in his book is plain old secular moral reasoning — as non-religious philosophers have been doing for a very long time — but he claims that he’s using science to decide right from wrong. That Harris could be naive enough to think he’s really bridged the famous “is/ought” chasm seems unlikely (Harris is a very smart writer and researcher, and I tend to like a lot of what he publishes), and so I submit that he’s exaggerating* to sell books. Shame on him (or his publisher).

*A previous version of this post had the word “lying” here, but I was told that my rhetorical flourish might be interpreted as libel. I hope “exaggerating” is sufficiently safe. Now onward to my argument:

Continue reading