Cross Post: What do sugar and climate change have in common? Misplaced scepticism of the science

Written by Professor Neil Levy, Senior Research Fellow, Uehiro Centre for Practical Ethics, University of Oxford

This article was originally published on The Conversation


Why do we think that climate sceptics are irrational? A major reason is that almost none of them have any genuine expertise in climate science (most have no scientific expertise at all), yet they’re confident that they know better than the scientists. Science is hard. Seeing patterns in noisy data requires statistical expertise, for instance. Climate data is very noisy: we shouldn’t rely on common sense to analyse it. We must instead rely on the assessments of experts. Continue reading

Guest Post: Scientists aren’t always the best people to evaluate the risks of scientific research

Written by Simon Beard, Research Associate at the Centre for the Study of Existential Risk, University of Cambridge

How can we study the pathogens that will be responsible for future global pandemics before they have happened? One way is to find likely candidates currently in the wild and genetically engineer them so that they gain the traits that will be necessary for them to cause a global pandemic.

Such ‘Gain of Function’ research that produces ‘Potential Pandemic Pathogens’ (GOF-PPP for short) is highly controversial. Following some initial trials looking at what kinds of mutations were needed to make avian influenza transmissible in ferrets, a moratorium has been imposed on further research whilst the risks and benefits associated with it are investigated. Continue reading


Written by Darlei Dall’Agnol[1]


Stephen Hawking has recently made two very strong declarations:

  • Philosophy is dead;
  • Artificial intelligence could spell the end of the human race.

I wonder whether there is a close connection between the two. In fact, I believe that the second will be true only if the first is. But philosophy is not dead, and it can undoubtedly help us to prevent the catastrophic consequences of misusing science and technology. Thus, I will argue that it is through the enhancement of our wisdom that we can hope to prevent artificial intelligence (AI) from causing the end of mankind. Continue reading

“The medicalization of love” – podcast interview

Just out today is a podcast interview for Smart Drug Smarts between host Jesse Lawler and interviewee Brian D. Earp on “The Medicalization of Love” (title taken from a recent paper with Anders Sandberg and Julian Savulescu, available from the Cambridge Quarterly of Healthcare Ethics, here).

Below is the abstract and link to the interview:


What is love? A loaded question with the potential to lead us down multiple rabbit holes (and, if you grew up in the 90s, evoke memories of the Haddaway song). In episode #95, Jesse welcomes Brian D. Earp on board for a thought-provoking conversation about the possibilities and ethics of making biochemical tweaks to this most celebrated of human emotions. With a topic like “manipulating love,” the discussion moves between the realms of neuroscience, psychology and transhumanist philosophy. 


Earp, B. D., Sandberg, A., & Savulescu, J. (2015). The medicalization of love. Cambridge Quarterly of Healthcare Ethics, Vol. 24, No. 3, 323–336.

Psychology is not in crisis? Depends on what you mean by “crisis”

By Brian D. Earp

*Note that this article was originally published at the Huffington Post.


In the New York Times yesterday, psychologist Lisa Feldman Barrett argues that “Psychology is Not in Crisis.” She is responding to the results of a large-scale initiative called the Reproducibility Project, published in Science magazine, which appeared to show that the findings from over 60 percent of a sample of 100 psychology studies did not hold up when independent labs attempted to replicate them.

She argues that “the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.” To illustrate this point, she gives us the following scenario:

Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.

Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.

She’s making a pretty big assumption here, which is that the studies we’re interested in are “well-designed” and “carefully run.” But a major reason for the so-called “crisis” in psychology — and I’ll come back to the question of just what kind of crisis we’re really talking about (see my title) — is the fact that a very large number of not-well-designed, and not-carefully-run studies have been making it through peer review for decades.

Small sample sizes, sketchy statistical procedures, incomplete reporting of experiments, and so on, have been pretty convincingly shown to be widespread in the field of psychology (and in other fields as well), leading to the publication of a resource-wastingly large percentage of “false positives” (read: statistical noise that happens to look like a real result) in the literature.
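To make the false-positive point concrete, here is a minimal simulation (my own sketch, not from the original article; the study count and sample sizes below are illustrative assumptions). It runs thousands of small two-group “studies” of an effect that does not exist and counts how many cross the conventional p < 0.05 threshold anyway:

```python
# A minimal sketch (mine, not from the article): simulate many
# small-sample "studies" of a null effect and count how often they
# reach p < 0.05 by chance alone. The study count and group size
# are illustrative assumptions, not figures from the literature.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_studies = 10_000   # hypothetical number of null studies
n_per_group = 15     # a typically small sample per condition

false_positives = 0
for _ in range(n_studies):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # true effect: none
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives / n_studies:.1%} of null studies 'found' an effect")
```

By chance alone, roughly 5 percent of these null studies come out “significant”; sketchy statistical procedures and selective reporting only push that rate higher.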

Continue reading

How can journal editors fight bias in polarized scientific communities?

By Brian D. Earp

In a recent issue of the Journal of Medical Ethics, Thomas Ploug and Søren Holm point out that scientific communities can sometimes get pretty polarized. This happens when two different groups of researchers consistently argue for (more or less) opposite positions on some hot-button empirical issue.

The examples they give are debates over the merits of breast cancer screening and the advisability of prescribing statins to people at low risk of heart disease. Other examples come easily to mind. The one that pops into my head is the debate over the health benefits vs. risks of male circumcision—which I’ve covered in some detail here, here, here, here, and here.

When I first started writing about this issue, I was pretty “polarized” myself. But I’ve tried to step back over the years to look for middle ground. Once you realize that your arguments are getting too one-sided, it’s hard to go on producing them without making some adjustments. At least, it is hard to do so without losing credibility — and no small measure of self-respect.

This point will become important later on.

Nota bene! According to Ploug and Holm, disagreement is not the same as polarization. Instead, polarization only happens when researchers:

(1) Begin to self-identify as proponents of a particular position that needs to be strongly defended beyond what is supported by the data, and

(2) Begin to discount arguments and data that would normally be taken as important in a scientific debate.

But wait a minute. Isn’t there something peculiar about point number (1)?

On the one hand, it’s framed in terms of self-identification, so: “I see myself as a proponent of a particular position that needs to be strongly defended.” Ok, that much makes sense. But then it makes it sound like this position-defending has to go “beyond what is supported by the data.”

But who would self-identify as someone who makes inadequately supported arguments?

We might chalk this up to ambiguous phrasing. Maybe the authors mean that (in order for polarization to be diagnosed) researchers have to self-identify as “proponents of a particular position,” while the part about “beyond the data” is what an objective third-party would say about the researchers (even if that’s not what they would say about themselves). It’s hard to know for sure.

But the issue of self-identification is going to come up again in a minute, because I think it poses a big problem for Ploug and Holm’s ultimate proposal for how to combat polarization. To see why, though, I have to say a little bit more about what their overall suggestion is in the first place.

Continue reading

Could ad hominem arguments sometimes be OK?

By Brian D. Earp

Follow Brian on Twitter by clicking here.


You aren’t supposed to make ad hominem arguments in academic papers — maybe not anywhere. To get us on the same page, here’s a quick blurb from Wikipedia:

An ad hominem (Latin for “to the man” or “to the person”), short for argumentum ad hominem, is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Ad hominem reasoning is normally categorized as an informal fallacy, more precisely as a genetic fallacy, a subcategory of fallacies of irrelevance.

Some initial thoughts. First, there are some clear-cut cases where an ad hominem argument is plainly worthless and simply distracting: it doesn’t help us understand things better; it doesn’t wend toward truth. Let’s say that a philosopher makes an argument, X, concerning (say) abortion; and her opponent points out that the philosopher is (say) a known tax cheat — an attempt to discredit her character. Useless. But let’s say that a psychologist makes an argument, Y, about race and IQ (i.e., that black people are less “intelligent” than white people), and his opponent points out that he used to be a member of the KKK. Well, it’s still useless in one sense, in that the psychologist’s prior membership in the KKK can’t by itself disprove his argument; but it does seem useful in another sense, in that it might give us at least a plausible reason to be a little bit more cautious in interpreting the psychologist’s results.

Continue reading

Why it matters whether you believe in free will

by Rebecca Roache

Follow Rebecca on Twitter

Scientific discoveries about how our behaviour is causally influenced often prompt the question of whether we have free will (for a general discussion, see here). This month, for example, the psychologist and criminologist Adrian Raine has been promoting his new book, The Anatomy of Violence, in which he argues that there are neuroscientific explanations of the behaviour of violent criminals. He argues that these explanations might be taken into account during sentencing, since they show that such criminals cannot control their violent behaviour to the same extent that (relatively) non-violent people can, and therefore that these criminals have reduced moral responsibility for their crimes. Our criminal justice system, along with our conceptions of praise and blame, and moral responsibility more generally, all presuppose that we have free will. If science can reveal it to be an illusion, some of the most fundamental features of our society are undermined.

The questions of exactly what free will is, and whether and how it can accommodate scientific discoveries about the causes of our behaviour, are primarily theoretical philosophical questions. Questions of theoretical philosophy—for example, those relating to metaphysics, epistemology, and philosophy of mind and language—are rarely viewed as highly relevant to people’s day-to-day lives (unlike questions of practical philosophy, such as those relating to ethics and morality). However, it turns out that the beliefs that people hold about free will are relevant. In the last five years, empirical evidence has linked reduced belief in free will with an increased willingness to cheat,[1] increased aggression and reduced helpfulness,[2] and reduced job performance.[3] Even the way that the brain prepares for action differs depending on whether or not one believes in free will.[4] If the results of these studies apply at a societal level, we should be very concerned about promoting the view that we do not have free will. But what can we do about it? Continue reading

Pedophilia, Preemptive Imprisonment, and the Ethics of Predisposition

The first two weeks of 2013 were marked by a flurry of news articles considering “the new science” of pedophilia. Alan Zarembo’s article for the Los Angeles Times focused on the increasing consensus among researchers that pedophilia is a biological predisposition similar to heterosexuality or homosexuality. Rachel Aviv’s piece for The New Yorker shed light upon the practice of ‘civil commitment’ in the US, a process by which inmates may be kept in jail past their release date if a panel decides that they are at risk of molesting a child (even if there is no evidence that they have in the past). The Guardian’s Jon Henley quoted sources suggesting that perhaps some pedophilic relationships aren’t all that harmful after all. And Rush Limbaugh chimed in comparing the ‘normalization’ of pedophilia to the historical increase in the acceptance of homosexuality, suggesting that recognizing pedophilia as a sexual orientation would be tantamount to condoning child molestation.

So what does it all mean? While most people I talked to in the wake of these stories (I include myself) were fascinated by the novel scientific evidence and the compelling profiles of self-described pedophiles presented in these articles, we all seemed to have a difficult time wrapping our minds around the ethical considerations at play. Why does it matter for our moral appraisal of pedophiles whether pedophilia is innate or acquired? Is it wrong to imprison someone for a terrible crime that they have not yet committed but are at a “high risk” of committing in the future? And if we say that we can’t “blame” pedophiles for their attraction to children because it is not their “fault” – they were “born this way” – is it problematic to condemn individuals for acting upon these (and other harmful) desires if it can be shown that poor impulse control is similarly genetically predisposed? While I don’t get around to fully answering most of these questions in the following post, my aim is to tease out the highly interrelated issues underlying these questions with the goal of working towards a framework by which the moral landscape of pedophilia can be understood.  Continue reading

Technology is outrunning science

It’s a common trope that our technology is outrunning our wisdom: we have great technological power, so the argument goes, but not the wisdom to use it.

Forget wisdom: technology is outrunning science! We have great technological power, but not the science to know what it does. In a recent bizarre trial in Italy, scientists were found guilty of manslaughter for failing to predict an earthquake in L’Aquila – prompting seismologists all over the world to sign an open letter stating, basically, that science can’t predict earthquakes.

But though we can’t predict earthquakes, we can certainly cause them. Pumping water out of aquifers, oil and gas wells, rock quarries, even dams have all been shown to cause earthquakes – though their magnitude and their timing remain unpredictable.

Geoengineering is another example of the phenomenon: we have the technological know-how to radically change the planet’s climate at relatively low cost – but we lack the science to predict the extent and true impact of this radical change. Soon we may be able to build artificial minds, through whole-brain emulation or other methods, but we can’t predict when this might happen or even the likely consequences of such a dramatically transformative technology.

The path from pure science to grubby technological implementation is traditionally seen as running in one clear direction: pure science develops ground-breaking ivory-tower ideas that eventually get taken up and transformed into useful technology, years down the line. To do this, science has to stay continually ahead of technology: we have to know more than we do. But now it’s pure science and research that have to play catch-up: we have to find a way to know what we’re doing.

