The paper, “The Medicalization of Love” by Brian D. Earp, Anders Sandberg, and Julian Savulescu, has been accepted for publication at the Cambridge Quarterly of Healthcare Ethics. Scholars interested in submitting a short reply paper or peer commentary are encouraged to contact the editor, Tomi Kushner, at the email address listed here.
The final deadline for commentaries/papers is September 1st. The abstract for the paper is below; the accepted manuscript is available at this link. Inquiries to the editor should be sent as soon as possible.
Pharmaceuticals or other emerging technologies could be used to enhance (or diminish) feelings of lust, attraction, and attachment in adult romantic partnerships. While such interventions could conceivably be used to promote individual (and couple) well-being, their widespread development and/or adoption might lead to “medicalization” of human love and heartache—for some, a source of serious concern. In this essay, we argue that the “medicalization of love” need not be problematic on balance, but could plausibly be expected to have either good or bad consequences depending upon how it unfolds. By anticipating some of the specific ways in which these technologies could yield unwanted outcomes, bioethicists and others can help direct the course of love’s “medicalization”—should it happen to occur—more toward the “good” side than the “bad.”
* image from http://www.metalsucks.net/2014/02/16/sunday-lurve/.
Things I’ve learned (so far) about how to do practical ethics
I had the opportunity, a few months back, to look through some old poems I’d written in high school. Some, I thought, were pretty good. Others I remembered thinking were good when I wrote them, but now they seem embarrassingly bad: pseudo-profound, full of clichés, marked by empty rhetoric instead of meaningful content. I’ve had a similar experience today with my collection of articles here at the Practical Ethics blog. And Oh, the things I have learned!
Here are just a few of the lessons that have altered my thinking, or otherwise informed my views about “doing” practical ethics — particularly in a public-engagement context — since my very first blog post appeared in 2011:
When will we have proper AI? The literature is full of answers to this question, as confident as they are contradictory. In a talk given at the Singularity Institute in San Francisco, I analyse these predictions from a theoretical standpoint (should we even expect anyone to have good AI predictions at all?) and a practical one (do the predictions made look as if they have good information behind them?). I conclude that we should not put our trust in timeline predictions, though some philosophical predictions seem surprisingly effective—but in all cases, we should increase our uncertainties and our error bars. If someone predicts the arrival of AI at some date with great confidence, we have every reason to think they’re completely wrong.
But this doesn’t make our own opinions any better, of course – your gut feeling is as good as any expert’s; which is to say, not any good at all.
Many thanks to the Future of Humanity Institute, the Oxford Martin School, the Singularity Institute, and my co-author Kaj Sotala. More details of the approach can be found online at http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ and http://lesswrong.com/lw/e79/ai_timeline_prediction_data/
In a previous post, I discussed how, as a philosopher, one should decide on a research area. I suggested that one method was to work out what are potentially the biggest problems the world faces, work out what the crucial normative considerations are, and then work on those areas. Call that the top-down method: starting with the problem, and working backwards to the actions one should take.
There’s a second method for high impact philosophy, however. Let’s call it the bottom-up method.
- Begin by asking ‘which are the biggest decisions that one typically makes in life?’
- Then ask: ‘What are the crucial normative considerations that might affect how I should make those decisions?’
- Then figure out which of these crucial considerations is most likely to produce an action-relevant conclusion given your marginal research time.
- Then work on that topic!
As in my previous post, I’ll go through each step in turn.