Written by Simon Beard, Research Associate at the Centre for the Study of Existential Risk, University of Cambridge
How can we study the pathogens that will be responsible for future global pandemics before they have happened? One way is to find likely candidates currently in the wild and genetically engineer them so that they gain the traits that will be necessary for them to cause a global pandemic.
Such ‘Gain of Function’ research that produces ‘Potential Pandemic Pathogens’ (GOF-PPP for short) is highly controversial. Following some initial trials looking at what kinds of mutations were needed to make avian influenza transmissible in ferrets, a moratorium has been imposed on further research whilst its risks and benefits are investigated.
The group Scientists for Science argues that such caution is not necessary and that it is damaging the progress of vital research into infectious diseases. They also point out that “The results of such research are often unanticipated and accrue over time” making the analysis of risks and benefits “difficult to assess accurately.”
This is no understatement. So far two assessments of the risks associated with GOF-PPP research have been produced. Their estimates of the probability of a pandemic resulting from the accidental release of an engineered pathogen from a laboratory range from 1 in 1,000 (Lipsitch and Inglesby 2014) to 1 in 33,000,000,000 (Fouchier 2015) per laboratory-year, a gap of more than seven orders of magnitude.
Despite this, our natural tendency towards precaution regarding this research may be damaging. One recent study by the National Bureau of Economic Research found that the expected costs associated with global influenza, at around 0.7% of global income, are comparable with the long-term costs of climate change. Much of this cost results from the expected 700,000 deaths associated with influenza per year, mostly amongst vulnerable groups in less developed countries.
How then are we to make progress on these issues?
Scientists for Science have proposed that the only way to make progress is for scientists to debate the ethics of GOF-PPP research among themselves. However, they make two claims that do not stand up. Firstly, they argue that, “If there is going to be further discussion about these issues, we must have input from outside experts with the background and skills to conduct actual risk assessments based on specific experiments and existing laboratories.”
The problem is that, due to its novelty, GOF-PPP research is not amenable to risk assessments based on specific experiments and existing laboratories. For instance, one of the big differences between the two risk assessments that have been produced is how they estimate the likelihood that a lab worker might become infected with a pathogen they are studying.
The more pessimistic Lipsitch and Inglesby assume that the probability of such an occurrence will reflect the background rate of laboratory infections. They point out that “Data on the probability of a laboratory-associated infection in U.S … show that 4 infections have been observed over <2,044 laboratory-years of observation, indicating at least a 0.2% chance of a laboratory-acquired infection per BSL3 laboratory-year.”
The more optimistic Fouchier, on the other hand, points out that none of these infections occurred at the safer BSL3+ laboratories at which GOF-PPP research takes place, and that none of them involved viruses. He therefore argues that “the risks of LAIs associated with work on viral pathogens should thus be estimated as less than 1 per 2,044 (<5 × 10−4 per laboratory-year).”
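The disagreement is easier to see if you simply redo the arithmetic behind each figure. The following minimal Python sketch reproduces both calculations using only the numbers quoted above (4 infections over roughly 2,044 BSL3 laboratory-years); it is an illustration of where the two estimates start from, not a risk model.

```python
# Back-of-the-envelope reproduction of the two laboratory-infection rates
# quoted above. The figures (4 infections, <2,044 BSL3 laboratory-years)
# come from the Lipsitch and Inglesby / Fouchier discussion; everything
# else is simple division, so treat this purely as illustration.

lab_years = 2044          # observed BSL3 laboratory-years (an upper bound)
observed_infections = 4   # laboratory-acquired infections in that period

# Lipsitch and Inglesby: take the observed background rate at face value.
rate_pessimistic = observed_infections / lab_years
print(f"Background BSL3 infection rate: {rate_pessimistic:.2%} per laboratory-year")
# -> roughly 0.20% per laboratory-year, matching the "at least 0.2%" figure.

# Fouchier: none of those infections involved viruses or BSL3+ facilities,
# so he treats the relevant rate as less than 1 in 2,044.
rate_optimistic = 1 / lab_years
print(f"Fouchier's upper bound: {rate_optimistic:.1e} per laboratory-year")
# -> roughly 4.9e-04, i.e. the "<5 x 10^-4" quoted above.
```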
Both of these estimates are flawed. Whilst Fouchier is right that infections in lower-security laboratories cannot be used to estimate the risk in higher-security ones, he is wrong to assume that, just because there have been no accidents at such laboratories so far, they are therefore safe. Only a tiny proportion of research on infectious pathogens is conducted at these laboratories, and it may simply be that the scientists working in them have been lucky thus far.
As Gordon Woo, a professional catastrophist at Risk Management Solutions, points out, “All manner of unforeseen surprising catastrophes have been close to occurring, but ultimately did not materialize.” Our failure to take account of such counterfactual catastrophes, and our tendency to focus only on specific experiments and existing laboratories, can easily lead us to underestimate catastrophic risks.
In fact, there have recently been three events, relating to smallpox, anthrax, and avian influenza, each of which could be seen as a narrowly avoided catastrophe. These events did not lead to any infections, and it could be argued that they did not pose a significant risk of infection either. However, they do provide the basis for a counterfactual analysis of the risks associated with GOF-PPP research. Unfortunately, such counterfactual analysis does not come naturally to practitioners in the life sciences, who are trained to look for evidence of what is, rather than what might have been.
This leads me to the second troubling statement from Scientists for Science. They argue that discussion about the future of GOF-PPP research would be best facilitated by scientific organizations such as “the International Union of Microbiological Societies or the American Society for Microbiology,” or national academies, “such as the National Academy of Sciences, USA.” However, some of the most pressing concerns raised by such research fall outside the domain of the life sciences.
Since 1700 there have been six severe flu pandemics (each associated with over 2 million excess deaths), one of which was ‘very severe’ (the 1918 flu pandemic, which caused around 50 million excess deaths). This suggests a natural rate of severe flu pandemics far in excess of even the most pessimistic estimates of the risk posed by GOF-PPP research. Therefore, if there is even a moderate chance that GOF-PPP research will produce an effective treatment for pandemic influenza, it is likely to prove worthwhile on a risk-benefit analysis.
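To make that comparison concrete, here is a rough back-of-the-envelope sketch using only figures already quoted in this piece. It treats one laboratory-year as comparable with one calendar year, which is the implicit simplification in the comparison above, and takes 2015 as the end of the ‘since 1700’ window; both are assumptions made purely for illustration.

```python
# Rough frequency comparison using figures quoted in the text:
# 6 severe flu pandemics since 1700, versus the most pessimistic published
# estimate of an accidental-release pandemic, 1 in 1,000 per laboratory-year.

severe_pandemics = 6
years_observed = 2015 - 1700                 # the "since 1700" window, roughly
natural_rate = severe_pandemics / years_observed
print(f"Natural rate: about {natural_rate:.3f} severe pandemics per year, "
      f"i.e. one every {1 / natural_rate:.0f} years or so")

pessimistic_gof_risk = 1 / 1_000             # Lipsitch and Inglesby, per laboratory-year
print(f"Most pessimistic GOF-PPP estimate: {pessimistic_gof_risk:.3f} per laboratory-year")
print(f"The natural rate is roughly {natural_rate / pessimistic_gof_risk:.0f} times larger")
```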
However, finding a treatment for pandemic influenza may not be the most economically viable form of GOF-PPP research. Far more lucrative would be research into seasonal influenza, providing more effective treatments for a comparatively small number of people in the developed world.
On the other hand, the costs of a pandemic caused by a genetically engineered super-flu would be borne disproportionately by people who cannot afford effective medical treatment. This mismatch between those who are most likely to reap the benefits of GOF-PPP research and those who are most likely to bear its costs creates a moral objection that is far more serious than the difficulties in assessing its associated risks and benefits.
Most scientists aspire to producing research that will benefit all of humanity; sadly, however, this is not always under their control. This issue forms part of a wider discussion about the funding of scientific research and the allocation of global healthcare resources. Usually that discussion is about who gains the benefits of medical research; with GOF-PPP research, however, the debate also extends to who is likely to bear its potential costs.
Meanwhile, debate within the scientific community shows no sign of abating. Perhaps the time has come when, despite its controversial status, the ethics of GOF-PPP research needs to be debated outside the scientific community as well.