
research ethics

Guest Post: Crispr Craze and Crispr Cares

Written by Robert Ranisch, Institute for Ethics and History of Medicine, University of Tuebingen

@RobRanisch

Newly discovered tools for the targeted editing of the genome have been generating talk of a revolution in gene technology for the last five years. The CRISPR/Cas9 method draws most of the attention because it makes the modification of genes simpler, more precise, cheaper and quicker, on a hitherto unknown scale. Since these so-called molecular scissors can be set to work in just about any organism, hardly a week goes by without headlines about the latest scientific research: genome editing could keep vegetables looking fresh, eliminate malaria from disease-carrying mosquitoes, replace antibiotics or bring mammoths back to life.

Naturally, the greatest hopes are placed in its potential for various medical applications. Despite the media hype, there are no ready-to-use CRISPR gene therapies. However, the first clinical studies are under way in China and have been approved in the USA. Future therapies might make it possible to eradicate hereditary illnesses, conquer cancer, or even cure HIV/AIDS; just this May, results from experiments on mice gave reason for such hope. In a similar vein, germline intervention is now being reconsidered as a realistic option, although it had long been considered taboo because its (side) effects are passed down the generations.

Cross Post: What do sugar and climate change have in common? Misplaced scepticism of the science

Written by Professor Neil Levy, Senior Research Fellow, Uehiro Centre for Practical Ethics, University of Oxford

This article was originally published on The Conversation


Why do we think that climate sceptics are irrational? A major reason is that almost none of them have any genuine expertise in climate science (most have no scientific expertise at all), yet they’re confident that they know better than the scientists. Science is hard. Seeing patterns in noisy data requires statistical expertise, for instance. Climate data is very noisy: we shouldn’t rely on common sense to analyse it. We are instead forced to use the assessment of experts.

In Praise of Ambivalence—“Young” Feminism, Gender Identity, and Free Speech

By Brian D. Earp (@briandavidearp)

Introduction

Alice Dreger, the historian of science, sex researcher, activist, and author of a much-discussed book of last year, has recently called attention to the loss of ambivalence as an acceptable attitude in contemporary politics and beyond. “Once upon a time,” she writes, “we were allowed to feel ambivalent about people. We were allowed to say, ‘I like what they did here, but that bit over there doesn’t thrill me so much.’ Those days are gone. Today the rule is that if someone—a scientist, a writer, a broadcaster, a politician—does one thing we don’t like, they’re dead to us.”

I’m going to suggest that this development leads to another kind of loss: the loss of our ability to work together, or better, learn from each other, despite intense disagreement over certain issues. Whether it’s because our opponent hails from a different political party, or voted differently on a key referendum, or thinks about economics or gun control or immigration or social values—or whatever—in a way we struggle to comprehend, our collective habit of shouting at each other with fingers stuffed in our ears has reached a breaking point.

It’s time to bring ambivalence back.

Guest Post: Scientists aren’t always the best people to evaluate the risks of scientific research

Written by Simon Beard, Research Associate at the Centre for the Study of Existential Risk, University of Cambridge

How can we study the pathogens that will be responsible for future global pandemics before they have happened? One way is to find likely candidates currently in the wild and genetically engineer them so that they gain the traits that will be necessary for them to cause a global pandemic.

Such ‘Gain of Function’ research that produces ‘Potential Pandemic Pathogens’ (GOF-PPP for short) is highly controversial. Following some initial trials looking at what kinds of mutations were needed to make avian influenza transmissible in ferrets, a moratorium has been imposed on further research whilst the risks and benefits associated with it are investigated.

The unbearable asymmetry of bullshit

By Brian D. Earp (@briandavidearp)

Introduction

Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings — concerns I share and which I have written about at length — I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.

Scientists are people too

In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.

At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.

And it is with that in mind that I bring up the subject of bullshit.


Engineering a Consensus: Edit Embryos for Research, Not Reproduction

Written by Dr Chris Gyngell, Dr Tom Douglas and Professor Julian Savulescu

A crucial international summit on gene editing continues today in Washington DC. Organised by the US National Academy of Sciences, National Academy of Medicine, the Chinese Academy of Sciences, and the U.K.’s Royal Society, the summit promises to be a pivotal point in the history of gene editing technologies.

Gene editing (GE) is a truly revolutionary technology, potentially allowing the genetic bases of life to be manipulated at will. It has already been used to create malaria-fighting mosquitoes, drought-resistant wheat, hornless cows and cancer-killing immune cells. All this despite the fact that GE only became widely used in the past few years. The potential applications of GE in a decade are difficult to imagine. It may transform the food we eat, the animals we farm, and the way we battle disease.

“The medicalization of love” – podcast interview

Just out today is a podcast interview for Smart Drug Smarts between host Jesse Lawler and interviewee Brian D. Earp on “The Medicalization of Love” (title taken from a recent paper with Anders Sandberg and Julian Savulescu, available from the Cambridge Quarterly of Healthcare Ethics, here). Below is the abstract and link to the interview: Abstract What is love? A…

Psychology is not in crisis? Depends on what you mean by “crisis”

By Brian D. Earp
@briandavidearp

*Note that this article was originally published at the Huffington Post.

Introduction

In the New York Times yesterday, psychologist Lisa Feldman Barrett argues that “Psychology is Not in Crisis.” She is responding to the results of a large-scale initiative called the Reproducibility Project, published in Science magazine, which appeared to show that the findings from over 60 percent of a sample of 100 psychology studies did not hold up when independent labs attempted to replicate them.

She argues that “the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.” To illustrate this point, she gives us the following scenario:

Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.

Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.

She’s making a pretty big assumption here, which is that the studies we’re interested in are “well-designed” and “carefully run.” But a major reason for the so-called “crisis” in psychology — and I’ll come back to the question of just what kind of crisis we’re really talking about (see my title) — is the fact that a very large number of not-well-designed, and not-carefully-run studies have been making it through peer review for decades.

Small sample sizes, sketchy statistical procedures, incomplete reporting of experiments, and so on, have been pretty convincingly shown to be widespread in the field of psychology (and in other fields as well), leading to the publication of a resource-wastingly large percentage of “false positives” (read: statistical noise that happens to look like a real result) in the literature.
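
To make the statistical point concrete, here is a minimal, purely illustrative simulation (not from the original post; the sample size, number of studies, and significance threshold are arbitrary assumptions): when many small studies test an effect that simply is not there, a steady fraction of them will still come out “significant” by chance, and those are exactly the results most likely to be written up.

```python
# Illustrative sketch: how pure noise yields "false positives".
# Assumptions (mine, not the post's): 10,000 small two-group studies,
# 15 participants per group, true effect = zero, alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 10_000   # hypothetical literature of small studies
n_per_group = 15     # small sample size per group
alpha = 0.05

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the SAME distribution, so any
    # "difference" a study finds is statistical noise.
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"'Significant' results with no real effect: "
      f"{false_positives / n_studies:.1%}")  # roughly 5%, by construction
```

Even in this best case the false-positive rate sits at the nominal 5%; flexible analysis choices (multiple outcome measures, optional stopping, selective reporting) push it far higher, which is the pattern that motivates talk of a “crisis”.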
