Written by Professor Neil Levy, Senior Research Fellow, Uehiro Centre for Practical Ethics, University of Oxford
This article was originally published on The Conversation
Why do we think that climate sceptics are irrational? A major reason is that almost none of them have any genuine expertise in climate science (most have no scientific expertise at all), yet they’re confident that they know better than the scientists. Science is hard. Seeing patterns in noisy data requires statistical expertise, for instance. Climate data is very noisy: we shouldn’t rely on common sense to analyse it. We are instead forced to rely on the assessments of experts. Continue reading
* Note: this article was first published online at Quillette magazine.
Alice Dreger, the historian of science, sex researcher, activist, and author of a much-discussed book of last year, has recently called attention to the loss of ambivalence as an acceptable attitude in contemporary politics and beyond. “Once upon a time,” she writes, “we were allowed to feel ambivalent about people. We were allowed to say, ‘I like what they did here, but that bit over there doesn’t thrill me so much.’ Those days are gone. Today the rule is that if someone—a scientist, a writer, a broadcaster, a politician—does one thing we don’t like, they’re dead to us.”
I’m going to suggest that this development leads to another kind of loss: the loss of our ability to work together, or better, learn from each other, despite intense disagreement over certain issues. Whether it’s because our opponent hails from a different political party, or voted differently on a key referendum, or thinks about economics or gun control or immigration or social values—or whatever—in a way we struggle to comprehend, our collective habit of shouting at each other with fingers stuffed in our ears has reached a breaking point.
It’s time to bring ambivalence back. Continue reading
Written by Simon Beard, Research Associate at the Center for the Study of Existential Risk, University of Cambridge
How can we study the pathogens that will be responsible for future global pandemics before they have happened? One way is to find likely candidates currently in the wild and genetically engineer them so that they gain the traits that will be necessary for them to cause a global pandemic.
Such ‘Gain of Function’ research that produces ‘Potential Pandemic Pathogens’ (GOF-PPP for short) is highly controversial. Following some initial trials looking at what kinds of mutations were needed to make avian influenza transmissible in ferrets, a moratorium has been imposed on further research whilst the risks and benefits associated with it are investigated. Continue reading
Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length—I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.
Scientists are people too
In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.
At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.
And it is with that in mind that I bring up the subject of bullshit.
Dr Christopher Gyngell (Oxford) comments on the HFEA’s decision to give green light to UK researchers to genetically modify human embryos for research. A clear analysis of the most common concerns, and a suggestion for what direction the debate should take.
Written by Dr Chris Gyngell, Dr Tom Douglas and Professor Julian Savulescu
A crucial international summit on gene editing continues today in Washington DC. Organised by the US National Academy of Sciences, National Academy of Medicine, the Chinese Academy of Sciences, and the U.K.’s Royal Society, the summit promises to be a pivotal point in the history of gene editing technologies.
Gene editing (GE) is a truly revolutionary technology, potentially allowing the genetic bases of life to be manipulated at will. It has already been used to create malaria-fighting mosquitoes, drought-resistant wheat, hornless cows and cancer-killing immune cells. All this despite the fact that GE only became widely used in the past few years. The potential applications of GE a decade from now are difficult to imagine. It may transform the food we eat, the animals we farm, and the way we battle disease. Continue reading
Just out today is a podcast interview for Smart Drug Smarts between host Jesse Lawler and interviewee Brian D. Earp on “The Medicalization of Love” (title taken from a recent paper with Anders Sandberg and Julian Savulescu, available from the Cambridge Quarterly of Healthcare Ethics, here).
Below is the abstract and link to the interview:
What is love? A loaded question with the potential to lead us down multiple rabbit holes (and, if you grew up in the 90s, evoke memories of the Haddaway song). In episode #95, Jesse welcomes Brian D. Earp on board for a thought-provoking conversation about the possibilities and ethics of making biochemical tweaks to this most celebrated of human emotions. With a topic like “manipulating love,” the discussion moves between the realms of neuroscience, psychology and transhumanist philosophy.
Earp, B. D., Sandberg, A., & Savulescu, J. (2015). The medicalization of love. Cambridge Quarterly of Healthcare Ethics, Vol. 24, No. 3, 323–336.
*Note that this article was originally published at the Huffington Post.
In the New York Times yesterday, psychologist Lisa Feldman Barrett argues that “Psychology is Not in Crisis.” She is responding to the results of a large-scale initiative called the Reproducibility Project, published in Science magazine, which appeared to show that the findings from over 60 percent of a sample of 100 psychology studies did not hold up when independent labs attempted to replicate them.
She argues that “the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.” To illustrate this point, she gives us the following scenario:
Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.
Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.
She’s making a pretty big assumption here, which is that the studies we’re interested in are “well-designed” and “carefully run.” But a major reason for the so-called “crisis” in psychology — and I’ll come back to the question of just what kind of crisis we’re really talking about (see my title) — is the fact that a very large number of not-well-designed, and not-carefully-run studies have been making it through peer review for decades.
Small sample sizes, sketchy statistical procedures, incomplete reporting of experiments, and so on, have been pretty convincingly shown to be widespread in the field of psychology (and in other fields as well), leading to the publication of a resource-wastingly large percentage of “false positives” (read: statistical noise that happens to look like a real result) in the literature.
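To see how easily statistical noise can masquerade as a real result, here is a toy simulation of my own devising (it is an illustration, not a reanalysis of any study discussed here). It runs a thousand small two-group “studies” in which the true effect is exactly zero, and counts how many of them nonetheless clear a conventional significance bar:

```python
import math
import random
import statistics

def looks_significant(rng, n=10, threshold=2.1):
    """Simulate one two-group study of a NULL effect (true difference = 0)
    with a small sample per group, and report whether the t-statistic
    crosses the conventional ~p < .05 bar anyway."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > threshold

rng = random.Random(42)
trials = 1000
false_positives = sum(looks_significant(rng) for _ in range(trials))
# With no real effect anywhere, roughly 5% of studies still "find" one.
print(f"{false_positives} of {trials} null studies looked significant")
```

Around five percent of purely null studies come out “significant” by construction; if journals then preferentially publish those, the literature fills with noise that looks like results.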
Steven Pinker has recently written an op-ed questioning the contribution of bioethics to the safe and efficient regulation of research. It has been widely misinterpreted and criticised, though Alice Dreger has recently written an accurate blog post in support of Pinker. Pinker provocatively said that bioethics should get out of the way of research. This has been interpreted to mean that we should give up ethics review of research. Nobody, not me, and not Steven Pinker, thinks we should abandon ethical review of research. He actually says, “Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.” Pinker is objecting to the unnecessary, unproductive obstruction that much bioethics represents to good research and regulation.
I largely agree with him and have said as much myself over the years. I recently wrote a piece for the anniversary issue of the JME arguing as much. I applaud him for trying to generate some self-reflection in the field.
By Daniel K. Sokol
Daniel Sokol, PhD, is a bioethicist and lawyer at 12 King’s Bench Walk, London. He has sat on several ethics committees, including the UK’s Ministry of Defence’s Research Ethics Committee.
In a recent Opinion piece in the Boston Globe, Professor Steven Pinker made the surprising suggestion that the primary moral goal of today’s bioethics should be to “get out of the way”. “A truly ethical bioethics”, he argued, “should not bog down research in red tape, moratoria or threats of prosecution”.
This bold assertion no doubt echoes the thoughts of many scientists whose research requires the approval of an ethics review committee before springing to life. As a PhD student many years ago, I experienced first-hand the frustrations of the tedious review process. I spent hours drafting the protocol, preparing revisions, and responding to the Committee’s questions, time I would have preferred to spend conducting research. However popular the sentiment, getting out of the way is not the goal of bioethics.
The goal of bioethics is to allow potentially beneficial research while ensuring that the risk of harm to participants and others is proportionate, reduced to the lowest practicable level, and within morally acceptable limits. The risk of harm can never be eliminated, but it can usually be reduced with minimal effort or cost. It may be as simple as testing a new piece of equipment one more time in a laboratory before attaching it to a human for testing.