Brian D. Earp

Brian D. Earp is a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. He holds degrees from Yale, Oxford, and Cambridge Universities, and studies issues in psychology, philosophy, biomedicine, and ethics.

Pedophilia and Child Sexual Abuse Are Two Different Things — Confusing Them is Harmful to Children

By Brian D. Earp (@briandavidearp)

Republican politician Roy Moore has been accused of initiating sexual contact with a 14-year-old girl when he was in his early 30s. Social media sites have since exploded with comments like these:

Continue reading

Cross Post: Machine Learning and Medical Education: Impending Conflicts in Robotic Surgery

Guest Post by Nathan Hodson 

* Please note that this article is being cross-posted from the Journal of Medical Ethics Blog 

Research in robotics promises to revolutionize surgery. The Da Vinci system has already brought the first fruits of the revolution into the operating theater through remote-controlled laparoscopic (or “keyhole”) surgery. New developments are going further, augmenting the human surgeon and moving toward a future with fully autonomous robotic surgeons. Through machine learning, these robotic surgeons will likely one day supersede their makers and ultimately squeeze human surgical trainees out of the operating room.

This possibility raises new questions for those building and programming healthcare robots. In their recent essay entitled “Robot Autonomy for Surgery,” Michael Yip and Nikhil Das echoed a common assumption in health robotics research: “human surgeons [will] still play a large role in ensuring the safety of the patient.” If human surgical training is impaired by robotic surgery, however—as I argue it likely will be—then this safety net would not necessarily hold.

Imagine an operating theater. The autonomous robot surgeon makes an unorthodox move. The human surgeon observer is alarmed. As the surgeon reaches to take control, the robot issues an instruction: “Step away. Based on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best.”

Should we trust the robot? Should we doubt the human expert? Shouldn’t we play it safe—but what would that mean in this scenario? Could such a future really materialize?

Continue reading

Does Female Genital Mutilation Have Health Benefits? The Problem with Medicalizing Morality

By Brian D. Earp (@briandavidearp)

Please note: this piece was originally published in Quillette Magazine.

Four members of the Dawoodi Bohra sect of Islam living in Detroit, Michigan have recently been indicted on charges of female genital mutilation (FGM). This is the first time the US government has prosecuted an “FGM” case since a federal law was passed in 1996. The world is watching to see how the case turns out.

A lot is at stake here. Multiculturalism, religious freedom, the limits of tolerance; the scope of children’s—and minority group—rights; the credibility of scientific research; even the very concept of “harm.”

To see how these pieces fit together, I need to describe the alleged crime.

Continue reading

Can We Trust Research in Science and Medicine?

By Brian D. Earp  (@briandavidearp)

Readers of the Practical Ethics Blog might be interested in this series of short videos in which I discuss some of the major ongoing problems with research ethics and publication integrity in science and medicine. How much of the published literature is trustworthy? Why is peer review such a poor quality control mechanism? How can we judge whether someone is really an expert in a scientific area? What happens when empirical research gets polarized? Most of these are short – just a few minutes. Links below:

Why most published research probably is false

The politicization of science and the problem of expertise

Science’s publication bias problem – why negative results are important

Getting beyond accusations of being either “pro-science” or “anti-science”

Are we all scientific experts now? When to be skeptical about scientific claims, and when to defer to experts

Predatory open access publishers and why peer review is broken

The future of scientific peer review

Sloppy science going on at the CDC and WHO

Dogmas in science – how do they form?

Please note: this post will be cross-published with the Journal of Medical Ethics Blog.

Guest Post: Mind the accountability gap: On the ethics of shared autonomy between humans and intelligent medical devices

Guest Post by Philipp Kellmeyer

Imagine you had epilepsy and, despite taking a daily cocktail of several anti-epileptic drugs, still suffered several seizures per week, some minor, some resulting in bruises and other injuries. The source of your epileptic seizures lies in a brain region that is important for language. Therefore, your neurologist told you, epilepsy surgery – removing brain tissue that has been identified as the source of seizures in continuous monitoring with intracranial electroencephalography (iEEG) – is not viable in your case because it would lead to permanent damage to your language ability.

There is however, says your neurologist, an innovative clinical trial under way that might reduce the frequency and severity of your seizures. In this trial, a new device is implanted in your head that contains an electrode array for recording your brain activity directly from the brain surface and for applying small electric shocks to interrupt an impending seizure.

The electrode array connects wirelessly to a small computer that analyses the information from the electrodes to assess your seizure risk at any given moment in order to decide when to administer an electric shock. The neurologist informs you that trials with similar devices have reduced the frequency of severe seizures in 50% of patients, so there is a good chance that you would benefit from taking part in the trial.

Now, imagine you decided to participate in the trial and it turns out that the device comes with two options: In one setting, you get no feedback on your current seizure risk by the device and the decision when to administer an electric shock to prevent an impending seizure is taken solely by the device.

This keeps you completely out of the loop in terms of being able to modify your behaviour according to your seizure risk and – in a sense – relegates some autonomy of decision-making to the intelligent medical device inside your head.

In the other setting, the system comes with a “traffic light” that signals your current risk level for a seizure, with green indicating a low, yellow a medium, and red a high probability of a seizure. In case of an evolving seizure, the device may additionally warn you with an alarm tone. In this scenario, you are kept in the loop and retain your capacity to modify your behavior accordingly, for example to step off a ladder or stop riding a bike when you are “in the red.”
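For readers who think in code, the difference between the two settings can be sketched as a simple control loop. This is purely illustrative: the function names, thresholds, and risk levels below are hypothetical and are not taken from any real implanted device.

```python
# Hypothetical sketch of the two device settings described above.
# All thresholds and names are illustrative assumptions, not a real API.

def classify_risk(seizure_probability):
    """Map a seizure probability to a 'traffic light' level."""
    if seizure_probability < 0.3:
        return "green"
    elif seizure_probability < 0.7:
        return "yellow"
    return "red"

def device_step(seizure_probability, patient_in_loop):
    """One cycle of the implant's decision loop.

    Returns (action, feedback). The device always decides for itself
    whether to stimulate; only the 'in the loop' setting additionally
    shows the patient their current risk level.
    """
    stimulate = seizure_probability >= 0.7  # device-side decision
    feedback = classify_risk(seizure_probability) if patient_in_loop else None
    return ("stimulate" if stimulate else "monitor"), feedback
```

Note that in both settings the first return value is identical: the stimulation decision stays with the device. The ethical difference between the two scenarios lies entirely in the second value, i.e. whether the patient is given the information needed to adjust their own behavior.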

Continue reading

In praise of ambivalence—“young” feminism, gender identity, and free speech

By Brian D. Earp (@briandavidearp)

* Note: this article was first published online at Quillette magazine.

Introduction

Alice Dreger, the historian of science, sex researcher, activist, and author of a much-discussed book published last year, has recently called attention to the loss of ambivalence as an acceptable attitude in contemporary politics and beyond. “Once upon a time,” she writes, “we were allowed to feel ambivalent about people. We were allowed to say, ‘I like what they did here, but that bit over there doesn’t thrill me so much.’ Those days are gone. Today the rule is that if someone—a scientist, a writer, a broadcaster, a politician—does one thing we don’t like, they’re dead to us.”

I’m going to suggest that this development leads to another kind of loss: the loss of our ability to work together, or better, learn from each other, despite intense disagreement over certain issues. Whether it’s because our opponent hails from a different political party, or voted differently on a key referendum, or thinks about economics or gun control or immigration or social values—or whatever—in a way we struggle to comprehend, our collective habit of shouting at each other with fingers stuffed in our ears has reached a breaking point.

It’s time to bring ambivalence back. Continue reading

What is the relationship between science and morality?

Quick announcement: A podcast interview between Brian D. Earp (a.k.a. myself) and J. J. Chipchase for Naturalistic Philosophy has just been released: we talk about the relationship between science and morality, the is/ought distinction, free will, the replication crisis in science and medicine, problems with peer review, bullshit in academia, and Sam Harris’s The Moral Landscape, among other things. Check it out here:

http://naturalisticphilosophy.com/2016/02/17/informal-hour-ep-2-science-and-morality-the-moral-landscape-with-brian-d-earp/

The unbearable asymmetry of bullshit

By Brian D. Earp (@briandavidearp)

* Note: this article was first published online at Quillette magazine. The official version is forthcoming in the HealthWatch Newsletter; see http://www.healthwatch-uk.org/.

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.

Continue reading

Should vegans eat meat to be ethically consistent? And other moral puzzles from the latest issue of the Journal of Practical Ethics

By Brian D. Earp (@briandavidearp)

The latest issue of The Journal of Practical Ethics has just been published online, and it includes several fascinating essays (see the abstracts below). In this blog post, I’d like to draw attention to one of them in particular, because it seemed to me to be especially creative and because it was written by an undergraduate student! The essay – “How Should Vegans Live?” – is by Oxford student Xavier Cohen. I had the pleasure of meeting Xavier several months ago when he presented an earlier draft of his essay at a lively competition in Oxford: he and several others were finalists for the Oxford Uehiro Prize in Practical Ethics, for which I was honored to serve as one of the judges.

In a nutshell, Xavier argues that ethical vegans – that is, vegans who refrain from eating animal products specifically because they wish to reduce harm to animals – may actually be undermining their own aims. This is because, he argues, many vegans are so strict about the lifestyle they adopt (and often advocate) that they end up alienating people who might otherwise be willing to make less-drastic changes to their behavior that would promote animal welfare overall. Moreover, by focusing too narrowly on the issue of directly refraining from consuming animal products, vegans may fail to realize how other actions they take may be indirectly harming animals, perhaps even to a greater degree.

Continue reading

ANNOUNCEMENT: Journal of Medical Ethics now accepting longer papers

Please note: this post was first published at the Journal of Medical Ethics Blog.

The Journal of Medical Ethics is pleased to announce the addition of a new article type – Extended Essays – that will allow authors up to 7,000 words to provide an in-depth analysis of their chosen topic.

In an interview, Associate Editor Tom Douglas said the new category was created “in recognition of the fact that some topics warrant sustained and nuanced analysis of a sort that can’t be laid out in less than 3,500 words.”

He went on to say that at the Journal of Medical Ethics “we don’t want to miss out on the best papers in medical ethics, many of which currently get sent elsewhere simply because of our strict word limits.”

Continue reading
