In his 2018 book Medical Nihilism, the philosopher of science Jacob Stegenga defends the view “that we should have little confidence in the effectiveness of medical interventions” (Stegenga 2018). On the face of it, he acknowledges, this position seems unreasonable: most of us can think of myriad ways in which modern medicine has improved – perhaps saved – our own lives and the lives of those close to us. The asthma attack I had as a baby, effectively treated at the time and subsequently managed with seemingly magical medications which relax the muscles around the airways, opening them up and allowing air to pass freely again. Or the schoolfriend whose ruptured appendix could have resulted in a fatal infection, but for emergency surgery and the administration of antibiotics. Or the countless lives made less painful by the availability of cheap and safe painkillers.
Medical sceptics tend to get a bad rep – anti-vaxxers who risk the lives of children by regurgitating debunked myths about the links between vaccines and autism, leading to dips in herd immunity and disease outbreaks; credulous folk who believe in the mystical powers of homeopathy and eschew conventional therapies in favour of potions that contain little more than water. This is not the sort of company one wishes to keep.
Yet there are many well-respected physicians, journalists and epidemiologists who are straightforwardly sceptical of the benefit of much medical research. John Ioannidis, a medical doctor and ‘meta-researcher’ – someone who conducts research into research practices – has detailed in numerous highly influential papers the flaws in medical research methods and publication practices. The most famous of his papers, provocatively titled ‘Why Most Published Research Findings Are False’, has been cited 6,890 times at last count (Ioannidis 2005).
Here are a few of the reasons for adopting a position of medical nihilism. First, there is a set of problems associated with research methodologies and trial design. The ‘gold standards’ of evidence-based medicine, randomised controlled trials (RCTs) and meta-analyses, whilst invaluable for testing the effectiveness of medical interventions, can also be subject to poor design and execution through inept practices and explicit and implicit biases. Practices such as outcome switching, p-hacking and selective publication have been well documented and result in more ‘positive’ results than are warranted. For instance, in clinical trials, many different outcomes are measured. The problem is that, just by chance, at least some of these outcomes are likely to show improvement. To avoid capitalising on this chance, researchers publish a protocol of the trial before it is conducted, identifying which outcome counts as the definitive test. However, researchers sometimes do not stick to these commitments and engage in outcome switching: they report outcomes which did show an effect, rather than those that were pre-specified. This significantly increases the likelihood of reporting effects which occurred by chance, rather than through some reliable consequence of the intervention. P-hacking similarly involves selecting a statistical approach to analysing outcome data based on its a posteriori ability to provide statistically significant results rather than on its a priori appropriateness. Finally, selective publication is a cruder means to a similar end: conduct several trials but publish only those which show the strongest support for the treatment.
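To get a feel for why measuring many outcomes makes chance ‘improvements’ almost inevitable, here is a minimal simulation sketch. The numbers are illustrative assumptions (ten outcomes, fifty patients per arm, a drug with no real effect), not figures from any actual trial: it simply counts how often at least one outcome crosses the conventional significance threshold by luck alone.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 10_000    # simulated trials of a drug with no real effect
n_outcomes = 10      # outcomes measured per trial (pain, sleep, mood, ...)
n_patients = 50      # patients per arm
t_crit = 1.98        # |t| threshold, roughly p < 0.05 with ~98 degrees of freedom

trials_with_a_hit = 0
for _ in range(n_trials):
    # Both arms are drawn from the same distribution: the drug does nothing.
    treatment = rng.normal(0.0, 1.0, size=(n_patients, n_outcomes))
    control = rng.normal(0.0, 1.0, size=(n_patients, n_outcomes))

    # Welch-style t statistic for each outcome, vectorised across outcomes.
    mean_diff = treatment.mean(axis=0) - control.mean(axis=0)
    se = np.sqrt(treatment.var(axis=0, ddof=1) / n_patients
                 + control.var(axis=0, ddof=1) / n_patients)
    t_stats = mean_diff / se

    # Did at least one of the ten outcomes look 'significant' by chance?
    if np.any(np.abs(t_stats) > t_crit):
        trials_with_a_hit += 1

print(f"Trials where some outcome looked significant: "
      f"{trials_with_a_hit / n_trials:.2%}")   # roughly 1 - 0.95**10, i.e. ~40%
```

With ten independent outcomes, the chance that at least one looks ‘significant’ purely by chance is roughly 1 − 0.95^10 ≈ 40%, and that gap between the per-outcome error rate and the any-outcome error rate is exactly what outcome switching exploits.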
These practices point to ways in which research methods can be malleable, in Stegenga’s terminology. He argues that there are many fine-grained decisions made in the processes of designing, executing, and analysing medical research which can affect the ultimate results. This malleability is more likely to operate in the direction of ‘bending’ results to show a false positive (i.e. to show an intervention to be more effective than it really is) than a false negative. Moreover, these same forces operate in the opposite direction when it comes to the harms of interventions, resulting in their being underestimated. This can result from intentional, cynical attempts to profit from medical research and pharmaceutical sales, or from more ambiguous, perhaps well-intentioned attempts to ‘clean up’ messy findings or highlight only the most important results. Either way, the vast financial (and other) incentives in play lead to the publication of unduly positive, misleading results and an underestimation of the harms of many medical interventions.
As well as methodological flaws in the production of medical research, other forces can convince us that medicine is more effective than it is. For instance, people tend to visit the doctor when they’re at their sickest. Because symptoms fluctuate, they’re likely to feel better afterwards anyway, regardless of any treatment they’re given – an instance of regression to the mean. Yet to the individual, it may well appear that the treatment caused them to get better. Placebo effects may play some role (though conceptualising what should count as a ‘placebo effect’ is tricky). Perhaps more important for recovery could be the basic kinds of care that often come alongside medical therapies, such as time off work to recover, or support from family and friends. There are also patterns in the ways people tell others about their medical treatments. People who experience positive effects are more likely to tell others about them than those who experience no effects or negative effects. This has been described in the context of reviews of medical products on Amazon.com, and could be happening via other channels of communication as well (De Barra 2017). So, once medical interventions are in circulation, there are a number of reasons why we may tend to think they’re doing more good than they in fact are.
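Regression to the mean is easy to simulate. In the sketch below, the symptom scale, the noise levels and the ‘worst 10% of days’ cut-off are all arbitrary assumptions chosen for illustration: people are measured on a day bad enough to prompt a doctor’s visit and again a week later, with no treatment at all, and the second measurement still looks better on average.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 100_000

# Each person has a stable underlying severity plus day-to-day fluctuation
# (scale and noise levels are arbitrary, purely for illustration).
underlying = rng.normal(50, 10, n_people)
at_visit = underlying + rng.normal(0, 10, n_people)     # severity on a given day
week_later = underlying + rng.normal(0, 10, n_people)   # a week later, untreated

# People go to the doctor on a bad day: take the worst 10% of 'at_visit' days.
saw_doctor = at_visit > np.percentile(at_visit, 90)

print(f"Mean severity at visit:     {at_visit[saw_doctor].mean():.1f}")
print(f"Mean severity a week later: {week_later[saw_doctor].mean():.1f}")
# The second figure is reliably lower even though nobody was treated:
# apparent 'improvement' from regression to the mean alone.
```

Any treatment given at the visit would be tempting to credit with that spontaneous improvement.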
Stegenga focuses on some of the most commonly used pharmaceuticals, such as statins to reduce high cholesterol; drugs to lower blood pressure; drugs to control blood sugar in type II diabetes; and SSRIs (selective serotonin re-uptake inhibitors) for depression. Health services spend huge sums of money on these medications, which are of dubious benefit and which cause under-recognised harms. But de-implementation is difficult, and it can be hard to change doctors’ prescribing habits and patients’ expectations, even in the face of compelling evidence of limited benefit and/or noteworthy harms. Some medications – such as antibiotics and insulin for type I diabetics – can, when used appropriately, be of huge, life-transforming benefit. But such ‘magic bullets’ are rare, and we should not build our healthcare system around a dependence on discovering more.
In combination, the methodological flaws that plague medical research and publication practices, the vast incentives to exaggerate benefits and under-report harms, the forces that make medical treatments appear more effective once in circulation than they really are, and the systemic reluctance to abandon those practices which (against the odds) have actually been shown to be ineffective or harmful make a compelling case for taking the position of medical nihilism seriously.
Working in an interdisciplinary field, I’m interested in what the expectations of philosophers and ethicists – some of whom have formal medical or other scientific training but many of whom don’t – should be with regard to our use of empirical research in health-related fields. Increasingly, I think philosophers and ethicists working in bioethics have a duty to be sceptical friends of medical research and practice. It is tempting to be swept up in the enthusiasm for new and innovative therapies and to trust the claims of those convinced they have identified the Next Big Thing in medicine. Science fiction technologies may provide more interesting thought experiments to test ethical theories, but failing to caveat such musings with cautionary warnings that, as yet, benefits are unproven and harms uncertain is to contribute to the processes which create false optimism and facilitate ineffective medical treatments. And to be able to do this, we must be literate in medical research and the evidence bases of the technologies and treatments that we discuss.
Since we depend on the research, both as academics and as future patients, we should all support our colleagues in meta-research and evidence-based medicine and push for increased scrutiny of publication practices, research methodologies, and conflicts of interest. Campaigns like AllTrials, for instance, seek to address problems like publication bias and outcome switching by ensuring that all clinical trials are pre-registered with their full methods and summary results reported. It is not just Big Pharma that needs to be scrutinised – individual researchers, labs and universities also have numerous incentives to overestimate ‘positive’ results. In addition, healthcare providers such as the NHS and Public Health England may be subject to political motivations and perverse incentives.
A widespread loss of trust in the medical profession could do serious harm. Given that risk, it is tempting to resist highlighting the failures of medical practice and to stop short of advocating sceptical positions such as medical nihilism. I don’t know what is right here, and it may be that, even given the problems with the medical evidence base outlined above, the best strategy for most individuals, and for the population as a whole, will be to follow medical advice as closely as possible. Yet I worry that this conservative response accepts a bad status quo, and fails to take seriously the costs – both to health and finances – caused by ineffective and harmful medical treatments.
References
Stegenga, J. (2018) Medical Nihilism, Oxford University Press.
Ioannidis, J. P. (2005) ‘Why most published research findings are false’, PLoS Medicine, 2(8), e124.
De Barra, M. (2017) ‘Reporting bias inflates the reputation of medical treatments: A comparison of outcomes in clinical trials and online product reviews’, Social Science & Medicine, 177, pp.248-255.