“The Medicalization of Love” – call for peer commentaries – DUE SEPT 1

Announcement: 

The paper, “The Medicalization of Love” by Brian D. Earp, Anders Sandberg, and Julian Savulescu, has been accepted for publication in the Cambridge Quarterly of Healthcare Ethics. Scholars interested in submitting a short reply paper or peer commentary are encouraged to contact the editor, Tomi Kushner, at the email address listed here.

The final deadline for commentaries/papers is September 1st. The abstract for the paper is below; the accepted manuscript is available at this link. Inquiries to the editor should be sent as soon as possible.

Abstract 

Pharmaceuticals or other emerging technologies could be used to enhance (or diminish) feelings of lust, attraction, and attachment in adult romantic partnerships. While such interventions could conceivably be used to promote individual (and couple) well-being, their widespread development and/or adoption might lead to “medicalization” of human love and heartache—for some, a source of serious concern. In this essay, we argue that the “medicalization of love” need not necessarily be problematic, on balance, but could plausibly be expected to have either good or bad consequences depending upon how it unfolds. By anticipating some of the specific ways in which these technologies could yield unwanted outcomes, bioethicists and others can help direct the course of love’s “medicalization”—should it happen to occur—more toward the “good” side than the “bad.”


Practical Ethics and Philosophy

It is now quite common to draw distinctions between three types of philosophical ethics. Practical ethics is meant to concern substantive moral issues facing many of us each day, such as abortion or climate change.

Continue reading

Singularity Summit: How we’re predicting AI

When will we have proper AI? The literature is full of answers to this question, as confident as they are contradictory. In a talk given at the Singularity Institute in San Francisco, I analyse these predictions from a theoretical standpoint (should we even expect anyone to have good AI predictions at all?) and a practical one (do the predictions made look as if they have good information behind them?). I conclude that we should not put our trust in timeline predictions, though some philosophical predictions seem surprisingly effective – and in all cases, we should widen our uncertainties and our error bars. If someone predicts the arrival of AI at some date with great confidence, we have every reason to think they’re completely wrong.

But this doesn’t make our own opinions any better, of course – your gut feeling is as good as any expert’s, which is to say, no good at all.

Many thanks to the Future of Humanity Institute, the Oxford Martin School, the Singularity Institute, and my co-author Kaj Sotala. More details of the approach can be found online at http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ or at http://lesswrong.com/lw/e79/ai_timeline_prediction_data/.

How to be a high impact philosopher, part II

In a previous post, I discussed how, as a philosopher, one should decide on a research area. I suggested that one method was to work out what are potentially the biggest problems the world faces, work out what the crucial normative considerations are, and then work on those areas. Call that the top-down method: starting with the problem, and working backwards to the actions one should take.

There’s a second method for high impact philosophy, however.  Let’s call it the bottom-up method.

  1. Begin by asking ‘Which are the biggest decisions that one typically makes in life?’
  2. Then ask: ‘What are the crucial normative considerations that might affect how I should make those decisions?’
  3. Then figure out which of these crucial considerations is most likely to produce an action-relevant outcome given your marginal research time.
  4. Then work on that topic!

As in my previous post, I’ll go through each step in turn.

Continue reading

Sam Harris is wrong about science and morality

By Brian Earp (Follow Brian on Twitter by clicking here.)

WATCH MY EXCHANGE WITH SAM HARRIS AT OXFORD ON YOUTUBE HERE.

I just finished a booklet by “New Atheist” Sam Harris — on lying — and I plan to write about it in the coming days. But I want to dig up an older Harris book, The Moral Landscape. Why? Because it still makes me grimace.

I say “still” because I read the book months ago. I just haven’t yet vented my bafflement. Permit me to gripe, then, about Harris’ (aging) “bold new” claim — presented in his book — that science can “determine human values” or “tell us what’s objectively true about morality” or “give us answers about right and wrong” or however else you package this fiction.

In his new book (the one about lying) Harris says, in effect, you should never, ever, do it — yet his pretense in The Moral Landscape to be revolutionizing moral philosophy seems to me the very height of dishonesty. What he actually does in his book is plain old secular moral reasoning — and not very well — but he claims he’s using science to decide right from wrong. That Harris could be naive enough to think he’s really bridged the famous “is/ought” chasm seems incredible, and so I submit that he’s exaggerating* to sell books. Shame on him.

*A previous version of this post had the word “lying” here, but I was told that my rhetorical flourish might be interpreted as libel. I hope “exaggerating” is sufficiently safe. Now onward to my argument:

Continue reading
