Ferreting out fearsome flu: should we make pandemic bird flu viruses?
Scientists have made a new strain of bird flu that most likely could spread between humans, triggering a pandemic if it were released. A misguided project, or a good idea? How should we handle dual use research where merely knowing something can be risky, yet this information can be relevant for reducing other risks?
The researchers were investigating what it takes for the H5N1 virus to become able to spread effectively. Normally it does not spread well among humans, yet efficient transmission is a required trait for a pandemic virus. This ability could be fairly easy to evolve through a few mutations, but it could equally well require some unlikely redesign: some researchers think that H5 viruses cannot become pandemic at all. Science writes:
Fouchier initially tried to make the virus more transmissible by making specific changes to its genome, using a process called reverse genetics; when that failed, he passed the virus from one ferret to another multiple times, a low-tech and time-honored method of making a pathogen adapt to a new host.
After 10 generations, the virus had become “airborne”: Healthy ferrets became infected simply by being housed in a cage next to a sick one. The airborne strain had five mutations in two genes, each of which have already been found in nature, Fouchier says; just never all at once in the same strain.
This airborne strain is also likely to work in humans. So now we know that it is not too hard for nature to evolve a dangerous bird flu, and we know which genes matter. The first fact is directly useful: we must not be complacent about bird flu, which persists in many bird populations, and we should keep monitoring it. The second is useful for science and vaccine design, but also for enterprising bioterrorists.
This is a fine example of an information hazard, a piece of true information that is potentially harmful.
Should this research be published? Or even done? Richard Ebright at Rutgers said that these studies “should never have been done”, and a colleague expressed the same view – the work should never have been funded. On the other hand, the researchers seem to have the support of the flu research community and did consult with various institutions before the experiments. People clearly hold very different views, and fully judging the risks and benefits requires specialist knowledge of virology, epidemiology and bioterrorism – knowledge that nobody may even possess at present.
Right now the main issue is whether to publish the result, or more likely which methodological details to leave out. But evolving viruses in lab animal populations is hardly a secret that nobody else could figure out – it is a “time-honored” method. The real question is whether there needs to be prior review of work on risky topics, perhaps an international risk-assessment system for pandemic viruses. At present there is hardly any such review even at the national level.
“The process of identifying dual use of concern is something that should start at the very first glimmer of an experiment,” says one commentator. “You shouldn’t wait until you have submitted a paper before you decide it’s dangerous. Scientists and institutions and funding agencies should be looking at this. The journals and the journals’ reviewers should be the last resort.”
This is likely sensible, although it also means that research on many topics important to mankind now gets an extra layer of bureaucracy. In the US, new rules tightening restrictions on research into select agents with biological warfare potential have also stifled research into protection against them – the extra rules push researchers towards topics with less hassle and cost. Researchers also have an incentive to downplay the risks of their own work and play up the problems of regulation, so we should expect a measure of bias. And research sometimes reveals unexpected risks that cannot be prevented by prior review: it is unlikely that any reviewer would have foreseen the discovery of how to make mousepox 100% lethal, with its worrying implications for smallpox.
In situations like this, should one err on the side of caution, or try to make an admittedly very uncertain risk/benefit estimate? There seem to be two cases: research that has the potential to do great harm but not to permanently damage the prospects of mankind, and research that could increase existential risk.
Nick Bostrom argues in a recent paper that reducing existential risk should be seen as “a dominant priority in many aggregative consequentialist moral theories (and as a very important concern in many other moral theories).” The amount of lost value is potentially staggering. This makes the “maxipok rule” (“Maximize the probability of an ‘OK outcome’, where an OK outcome is any outcome that avoids existential catastrophe”) sensible: research that increases existential risk should not be done, while research that reduces it should be strongly prioritized. Another point in the paper is that we should strive to keep our options open: increasing our knowledge of what to value, what risks there are, and how to steer our future is also extremely valuable. This may include refraining from some areas of research until a later time when we are more likely to be wise or coordinated – or simply because it gains us a few extra years of low risk while we think things through.
In the case of the flu virus things are less stark. Existential risks, because of their finality, pose different issues than merely bad risks. Even a human-made flu pandemic would not be the end of the world, though it could cost millions of lives. Doing the research now rather than later has given us relevant data on a risk that we can now reduce slightly. Maybe the same information could have been found some other way, but it is not obvious how. The information hazard is not enormous – the ferret method is not new, and knowledge of the relevant genes is at most one part of the overall puzzle of how pandemics work. Bioterrorists are not that likely to aim for a flu as their doomsday weapon. The risk of the virus being released exists, but there are already millions of birds acting as a reservoir and evolutionary breeding ground: we know those ferrets are dangerous, unlike the doves on the street outside your window.
The real lesson seems to be that current coordination of dual use research, especially when it has interdisciplinary aspects, is still very bad. Hopefully this controversy will help foster the construction of a smarter international system of review that can take information hazards and existential risks into account. Such a system will also have the unenviable job of making risk/benefit estimates where there is very little data and millions of lives hang in the balance – on either side. But the enormity of that balance shows how important such institutions would be. They would not be frivolous wastes of taxpayer money or scientists’ time: even a slight improvement in our decisions about big risks translates into a significant number of lives saved in expectation. We had better remember that in the future, when they will occasionally make the wrong decision.
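To see why even a marginal improvement matters, consider a back-of-the-envelope sketch (the numbers are purely illustrative assumptions, not estimates from the research or anyone’s forecasts). Suppose a severe pandemic of this kind would kill 50 million people – roughly the toll of the 1918 flu – and has a 1% chance of occurring over the coming decades. The expected loss is then 0.01 × 50,000,000 = 500,000 lives. A review system that reduced that probability by just a tenth, from 1.0% to 0.9%, would be worth about 0.001 × 50,000,000 = 50,000 lives in expectation – far more than any plausible cost of running it. The particular numbers could easily be off by an order of magnitude in either direction; the point is that when the stakes are this large, even small shifts in probability dwarf the cost of the institutions that produce them.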