Epistemic Ethics

Can We Trust Research in Science and Medicine?

By Brian D. Earp  (@briandavidearp)

Readers of the Practical Ethics Blog might be interested in this series of short videos in which I discuss some of the major ongoing problems with research ethics and publication integrity in science and medicine. How much of the published literature is trustworthy? Why is peer review such a poor quality control mechanism? How can we judge whether someone is really an expert in a scientific area? What happens when empirical research gets polarized? Most of these are short – just a few minutes. Links below:

Why most published research probably is false

The politicization of science and the problem of expertise

Science’s publication bias problem – why negative results are important

Getting beyond accusations of being either “pro-science” or “anti-science”

Are we all scientific experts now? When to be skeptical about scientific claims, and when to defer to experts

Predatory open access publishers and why peer review is broken

The future of scientific peer review

Sloppy science going on at the CDC and WHO

Dogmas in science – how do they form?

Please note: this post will be cross-published with the Journal of Medical Ethics Blog.

The non-identity problem of professional philosophers

By Charles Foster

Philosophers have a non-identity problem. It is that they are not identified as relevant by the courts. This, in an age where funding and preferment are often linked to engagement with the non-academic world, is a worry.

This irrelevance was brutally demonstrated in an English Court of Appeal case (‘the CICA case’), the facts of which were a tragic illustration of the non-identity problem. Continue reading

Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’.

By Jonathan Pugh and Tom Douglas

The Durham Police force plans to use an artificial intelligence system to inform decisions about whether or not to keep a suspect in custody.

Developed using data collected by the force, the Harm Assessment Risk Tool (HART) has already undergone a two-year trial period to monitor its accuracy. Over the trial period, predictions of low risk were accurate 98% of the time, whilst predictions of high risk were accurate 88% of the time, according to media reports. Whilst HART was not used to inform custody sergeants’ decisions during this trial period, the police force now plans to take the system live.

Given the high stakes involved in the criminal justice system, and the way in which artificial intelligence is beginning to surpass human decision-making capabilities in a wide array of contexts, it is unsurprising that criminal justice authorities have sought to harness AI. However, the use of algorithmic decision-making in this context also raises ethical issues. In particular, some have been concerned about the potentially discriminatory nature of the algorithms employed by criminal justice authorities.

These issues are not new. In the past, offender risk assessment often relied heavily on psychiatrists’ judgements. However, partly due to concerns about inconsistency and poor accuracy, criminal justice authorities now use algorithmic risk assessment tools. Based on studies of past offenders, these tools use forensic history, mental health diagnoses, demographic variables and other factors to produce a statistical assessment of re-offending risk.

Beyond concerns about discrimination, algorithmic risk assessment tools raise a wide range of ethical questions, as we have discussed with colleagues in the linked paper. Here we address one that is particularly apposite with respect to HART: how should we balance the conflicting moral values at stake in deciding the kind of accuracy we want such tools to prioritise?
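The two headline figures reported for HART are, in effect, answers to two different questions: of those flagged ‘high risk’, how many really go on to re-offend, and of those flagged ‘low risk’, how many really stay out of trouble. A minimal sketch, using hypothetical numbers (not HART’s actual data), of how such figures fall out of a confusion matrix:

```python
# Illustrative sketch (hypothetical numbers, not HART's actual data):
# how the two accuracy figures reported for a risk tool can be computed
# from a confusion matrix, and why they answer different questions.

def predictive_values(true_pos, false_pos, true_neg, false_neg):
    """Return (PPV, NPV): the accuracy of 'high risk' and 'low risk' calls."""
    ppv = true_pos / (true_pos + false_pos)   # how often 'high risk' is right
    npv = true_neg / (true_neg + false_neg)   # how often 'low risk' is right
    return ppv, npv

# A hypothetical cohort of 1,000 suspects, 192 of whom go on to re-offend.
ppv, npv = predictive_values(true_pos=176, false_pos=24,
                             true_neg=784, false_neg=16)
print(f"high-risk predictions correct: {ppv:.0%}")  # 88%
print(f"low-risk predictions correct:  {npv:.0%}")  # 98%
```

Note that the two figures trade off against each other: lowering the threshold for a ‘high risk’ call catches more future offenders, at the cost of wrongly detaining more people who would never have re-offended. Deciding where to set that balance is a moral choice, not a purely statistical one.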

Continue reading

In praise of ambivalence—“young” feminism, gender identity, and free speech

By Brian D. Earp (@briandavidearp)

* Note: this article was first published online at Quillette magazine.


Alice Dreger, the historian of science, sex researcher, activist, and author of a much-discussed book of last year, has recently called attention to the loss of ambivalence as an acceptable attitude in contemporary politics and beyond. “Once upon a time,” she writes, “we were allowed to feel ambivalent about people. We were allowed to say, ‘I like what they did here, but that bit over there doesn’t thrill me so much.’ Those days are gone. Today the rule is that if someone—a scientist, a writer, a broadcaster, a politician—does one thing we don’t like, they’re dead to us.”

I’m going to suggest that this development leads to another kind of loss: the loss of our ability to work together, or better, learn from each other, despite intense disagreement over certain issues. Whether it’s because our opponent hails from a different political party, or voted differently on a key referendum, or thinks about economics or gun control or immigration or social values—or whatever—in a way we struggle to comprehend, our collective habit of shouting at each other with fingers stuffed in our ears has reached a breaking point.

It’s time to bring ambivalence back. Continue reading

The reproducibility problem and the status of bioethics

There is a long overdue crisis of confidence in the biological and medical sciences. It would be nice – though perhaps rather ambitious – to think that it could transmute into a culture of humility.

A recent comment in Nature observes that: ‘An unpublished 2015 survey by the American Society for Cell Biology found that more than two-thirds of respondents had on at least one occasion been unable to reproduce published results. Biomedical researchers from drug companies have reported that one-quarter or fewer of high-profile papers are reproducible.’

Reproducibility of results is one of the girders underpinning conventional science. The Nature article acknowledges this: it is accompanied by a cartoon showing the crumbling edifice of ‘Robust Science.’

As the unwarranted confidence of scientists teeters and falls, what will – and what should – happen to bioethics?

Continue reading

Don’t write evil algorithms

Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely wants employees to “do the right thing”. Regardless of what one thinks about the actual behaviour and ethics of Google, it seems that it got one thing right early on: a recognition that it was moving in a morally charged space.

Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages, and scaled up thanks to MapReduce; it uses algorithms for choosing adverts, driving cars and selecting nuances of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
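For readers curious what the founding insight amounted to, here is a minimal sketch of the idea behind PageRank: a page’s rank is the long-run probability that a ‘random surfer’, who mostly follows links and occasionally jumps to an arbitrary page, ends up there. (Illustrative only; Google’s production ranking involves far more than this.)

```python
# A minimal sketch of the idea behind PageRank, via power iteration.
# A page's rank is the stationary probability of a random surfer who
# mostly follows links and occasionally jumps to a random page.

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Everyone keeps a (1 - damping) chance of jumping anywhere.
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Three pages: A and C both link to B, so B ends up ranked highest.
ranks = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
print(max(ranks, key=ranks.get))  # B
```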

Can we make and use algorithms more ethically?

Continue reading

What’s Wrong With Giving Treatments That Don’t Work: A Social Epistemological Argument.

Let us suppose we have a treatment and we want to find out if it works. Call this treatment drug X. While we have observational data that it works—that is, patients say it works, or it appears to work given certain tests—observational data can be misleading. As Edzard Ernst writes:

Whenever a patient or a group of patients receive a medical treatment and subsequently experience improvements, we automatically assume that the improvement was caused by the intervention. This logical fallacy can be very misleading […] Of course, it could be the treatment—but there are many other possibilities as well. Continue reading
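One of those other possibilities is regression to the mean: people tend to seek treatment when their symptoms are at their worst, so the next measurement looks better regardless of the intervention. A hypothetical simulation (all numbers invented) makes the fallacy vivid:

```python
import random

random.seed(0)  # reproducible run

# Hypothetical simulation (all numbers invented): symptom scores fluctuate
# randomly around 50 with no trend, and a patient takes drug X only on a
# day they feel unusually bad. Drug X does nothing at all, yet the scores
# measured after 'treatment' look substantially better.

def symptom_score():
    """Daily symptom severity: random noise around a stable baseline."""
    return random.gauss(50, 10)

before, after = [], []
for _ in range(10_000):
    day1, day2 = symptom_score(), symptom_score()
    if day1 > 60:                # feels unusually bad, takes drug X
        before.append(day1)
        after.append(day2)       # drug X has no effect whatsoever

improvement = sum(before) / len(before) - sum(after) / len(after)
print(f"average 'improvement' from an inert drug: {improvement:.1f} points")
```

Every point of ‘improvement’ here comes from selecting the worst days as the baseline, not from the drug.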

Plausibility and Same-Sex Marriage

In philosophical discussions, we bring up the notion of plausibility a lot. “That’s implausible” is a common form of objection, while the converse “That’s plausible” is a common way of offering a sort of cautious sympathy with an argument or claim. But what exactly do we mean when we claim something is plausible or implausible, and what implications do such claims have? This question was, for me, most recently prompted by a pair of blog posts by Justin Weinberg over at Daily Nous on same-sex marriage. In the posts and discussion, Weinberg appears sympathetic to an interesting pedagogical principle: instructors may legitimately exclude, discount or dismiss from discussion positions they take to be implausible.* Further, opposition to same-sex marriage is taken to be such an implausible position and thus excludable/discountable/dismissable from classroom debate. Is this a legitimate line of thought? I’m inclined against it, and will try to explain why in this post.** Continue reading

Limiting the damage from cultures in collision

A Man in Black has a readable twitter essay about the role of chan culture in gamergate, and how the concepts of identity and debate inside a largish subculture can lead to an amazing uproar when they clash with outside cultures.

A brief recap: the Gamergate Controversy was/is a fierce culture war originating in the video gaming community in August 2014 but soon ensnaring feminists, journalists, webcomics, discussion sites, political pundits, Intel… – essentially anybody touching this tar-baby of a controversy, regardless of whether they understood it or not. It has everything: media critique, feminism, sexism, racism, sealioning, cyberbullying, doxing, death threats, wrecked careers: you name it. From an outside perspective it has been a train wreck hard to look away from. Rarely has a debate flared up so quickly, involved so many, and generated so much vituperation. If this is the future of broad debates, our civilization is doomed.

This post is not so much about the actual content of the controversy as about the point made by A Man in Black: one contributing factor to the disaster has been that a fairly large online subculture has radically divergent standards of debate and identity, and when it came into contact with the larger world, chaos erupted. How should we handle this? Continue reading

Lying to children

A study published this month shows that school-aged children are more likely to lie to an adult if that adult had recently lied to them. The British Psychological Society’s Research Digest summarizes the study here.

Hays and Carver took school-aged (and preschool-aged) children and assigned them to one of two experimental conditions. In the first condition – the lie condition – the child was told that there was a large bowl of sweets in the experiment room when in fact there was no such bowl. Continue reading
