
Ferretting out fearsome flu: should we make pandemic bird flu viruses?

Scientists have made a new strain of bird flu that most likely could spread between humans, triggering a pandemic if it were released. A misguided project, or a good idea? How should we handle dual-use research, where merely knowing something can be risky, yet that same information can be relevant for reducing other risks?

The researchers were investigating what it would take for the H5N1 virus to become able to spread effectively. Normally it does not spread well among humans, yet efficient transmission is a required trait for a pandemic virus. It could be that this is fairly easy to evolve through a few mutations, but it could equally well require some unlikely redesign: some researchers think that H5 viruses cannot become pandemic. Science writes:

Fouchier initially tried to make the virus more transmissible by making specific changes to its genome, using a process called reverse genetics; when that failed, he passed the virus from one ferret to another multiple times, a low-tech and time-honored method of making a pathogen adapt to a new host.

After 10 generations, the virus had become “airborne”: Healthy ferrets became infected simply by being housed in a cage next to a sick one. The airborne strain had five mutations in two genes, each of which has already been found in nature, Fouchier says; just never all at once in the same strain.

This airborne strain is also likely to work in humans. So now we know that it is not too hard for nature to evolve a dangerous bird flu, and we know which genes matter. The first fact is directly useful: we must not be complacent about bird flu, which exists in many bird populations, and we should keep monitoring it. The second is useful for science and vaccine design, but also for enterprising bioterrorists.

This is a fine example of an information hazard, a piece of true information that is potentially harmful.

Should this research be published? Or even done? Richard Ebright at Rutgers said that these studies “should never have been done”, and a colleague expressed the same view – the work should never have been funded. On the other hand, the researchers seem to have the support of the flu research community and did consult with various institutions before the experiments. People clearly hold very different views, and fully judging the risks and benefits requires specialist knowledge of virology, epidemics and bioterror risks – knowledge that might not even be available to anybody at present.

Right now the main issue is whether to publish the result, or more likely which methods to leave out. But evolving viruses in lab animal populations is hardly a secret that nobody else could figure out – it is a “time-honoured” method. The real question is whether there needs to be prior review of work on risky topics, perhaps an international risk-assessment system for pandemic viruses. At present there is hardly any such system even at the national level.

Paul Keim:

“The process of identifying dual use of concern is something that should start at the very first glimmer of an experiment,” he says. “You shouldn’t wait until you have submitted a paper before you decide it’s dangerous. Scientists and institutions and funding agencies should be looking at this. The journals and the journals’ reviewers should be the last resort.”

This is likely sensible, although it also means that research on many topics that are likely important to mankind now gets an extra layer of bureaucracy. In the US, new rules tightening restrictions on research into select agents with biological warfare potential have also stifled research into protection against them – the extra rules make researchers aim for topics with less hassle and cost. Researchers also have an incentive to downplay the risks of their research and play up the problems of regulation: we should expect a measure of bias. And research sometimes reveals unexpected risks that cannot be prevented by prior review: it is unlikely that any reviewer would have caught the discovery of how to make mousepox 100% lethal, with its worrying implications for smallpox.

In situations like this, should one err on the side of caution or try to make an admittedly very uncertain risk/benefit estimation? There might be two cases: one is research that has the potential to do great harm, but not to permanently damage the prospects of mankind. The second is research that could increase existential risk.

Nick Bostrom argues in a recent paper that reducing existential risk might be “a dominant priority in many aggregative consequentialist moral theories (and as a very important concern in many other moral theories).” The amount of lost value is potentially staggering. This makes the “maxipok rule” (“maximize the probability of an OK outcome, where an OK outcome is any outcome that avoids existential catastrophe”) sensible: research that increases existential risk should not be done. Conversely, research that reduces it should be strongly prioritized. Another point in the paper is that we should strive to keep our options open: increasing our knowledge of what to value, what risks there are and how to steer our future is also extremely valuable. This may include refraining from some areas of research until a later point in time when we are more likely to be wise or coordinated – or just because it gains us a few extra years of low risk while we think things through.
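
To see why, consider a stylized expected-value comparison (the numbers below are purely illustrative assumptions, not figures from Bostrom's paper):

```latex
% Purely illustrative numbers, not from Bostrom's paper.
% If the future could hold on the order of 10^16 lives, even a
% one-in-a-million reduction in extinction probability is worth
% ~10^10 expected lives -- dwarfing the ~10^7 deaths of a severe
% but survivable pandemic.
\[
\underbrace{10^{16}}_{\text{potential future lives}}
\times
\underbrace{10^{-6}}_{\text{reduction in extinction risk}}
= 10^{10}
\;\gg\;
\underbrace{10^{7}}_{\text{deaths in a severe pandemic}}
\]
```

On assumptions like these, even a tiny shift in extinction probability dominates any bounded catastrophe – which is what gives existential risk its special priority under the maxipok rule.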

In the case of the flu virus things are less stark. Existential risks, because of their finality, pose different issues than merely bad risks. Even a human-made flu pandemic would not be the end of the world, just the loss of millions of lives. Doing the research now rather than later has given us relevant data on a risk that we can now reduce slightly. Maybe it could have been found some other way, but that is not obvious. The information hazard is not enormous – the ferret method is not new, and the knowledge of the relevant genes is at most a part of the overall puzzle of how pandemics work. Bioterrorists are not that likely to aim for a flu as their doomsday weapon. The risk of release of the virus exists, but there are already millions of birds acting as a reservoir and evolutionary breeding ground: we know those ferrets are dangerous, unlike the doves on the street outside your window.

The real lesson seems to be that current coordination of dual-use research, especially when it has interdisciplinary aspects, is still very bad. Hopefully this controversy will help foster the construction of a smarter international system of review that can take information hazards and existential risks into account. It will also have the unenviable job of making risk/benefit estimations where there is very little data and millions of lives are in the balance – on either side. But the enormity of that balance shows that such institutions would be important. They would not be frivolous wastes of taxpayer money or scientist time: even a slight improvement in our decisions about big risks nets a significant number of saved lives. We had better remember that in the future, when they will occasionally make the wrong decision.


19 Comments on this post

  1. No matter what argument is put forward, you cannot guarantee 100% that the virus will not make it into the earth's environment and ultimately transmit to humans. Honestly, their claims that it will help us battle the virus if it does mutate are a load of hogwash. It's about research, fame and ultimately making money. Anybody who disputes that is only deluding themselves. My question is this: if the virus does kill someone outside of the lab, and the researchers were under a life-for-a-life contract, would they still be making viruses? Your creations are killers, simple fact. They are designed to be deadly infectious diseases. You create them so you can get funding to combat them with new treatments. It's all about money. You're not scientists; you're not even worth calling human. You're monsters who create killers, and the world would be a much better place if you all died from your own creations – that would be ironic and well deserved. Day after day, you come up with new ways to put our children's future in danger to make a buck and a name for yourself. Well, here are a few names: Death, Pestilence, Plague, Abomination. Harsh words maybe, but your belief that you are helping people is false. Combating a virus that already exists – that is what you are supposed to be doing. Creating a virus just to fight it makes you into something quite different. Forgive the cliché, but one is good and one is evil, simple as that.

    1. "You create them so you can get funding to combat them with new treatments. "

      I don't know where you're getting that from.

  2. These mad scientists should be put up against a wall and shot. They could be responsible for killing millions.

    1. The bad guys are those whose actions cause enormously more people to die than their counterfactual optimal actions would have.

      The researchers could cause millions to die by doing the research, but also by not doing the research. Clearly, you shouldn't be castigating researchers just for having great moral responsibility; you should be criticizing them for actually making bad decisions. Killing people actively and killing them by neglect don't differ fundamentally; it is only the mechanism of blame that behaves differently. If scientists make lethal mistakes, you are ready to jump in and blame them, hoping that other people will give you credit for it. If scientists fail to save millions of lives, you usually can't take credit for pointing that out.

      Expected lives saved and lives not saved should be carefully considered and weighed against each other. If you refuse to do that out of a misguided attitude that human lives should not be bargained with, you will very likely make bad decisions that will kill people.

      1. I do not totally disagree with your argument, kovacsa, about weighing up the benefits, but this is going too far. Instead of creating a mutation that inhibits the original, they have made a virus that is specifically designed to be more infectious to humans. The idea is to help, not to make things potentially worse. As for jumping on scientists, I will jump on "scientists" if they are responsible for millions of lives: their creation, their responsibility. And I will support a scientist if he saves millions of lives: his creation, his reward.

        But what right do they really have to play with the lives of so many, with such a potentially fatal plague, as if it were nothing? Taking something that is already a monster and deliberately making it worse in the name of helping people. There is a line to be drawn, and this crosses it as far as I'm concerned.

        And if it does make it out to the public, then what? Claims of "oh, we didn't know", "it wasn't our intention"? Too late, you just caused the death of nations with your pandemic. You can't create something like this and expect to wipe your hands clean of responsibility. I'm sure people will be thanking you as they watch their families die a horrible death: thanks so much for helping us by creating a killer disease.

        Callous, cynical? Very likely. Short-sighted? Possibly. But some lines you just don't cross because the potential dangers are too great. Something this big requires the consensus of a majority. But scientists will never accept that, simply because it stands in the way of progress. So is it about the scientists or the people? Who has the right to decide the fate of everyone else by making such a virus? And who decides whether a virus like this is an acceptable risk or not? Certainly not the people; we only find out after the fact.

        1. It comes down, then, to the empirical question of whether research like this will actually lead to a million people dying or not. I do think it unlikely. If it turns out to kill those people, I will be supremely surprised and horrified, and will proceed to cease supporting such work and to actively antagonize similar research. On the other hand… if this research turns out to be harmless or even mightily beneficial, will you be embarrassed at having been utterly wrong, and then promptly change your mind?

          1. In other words, are you outraged by this research because it could potentially cause much harm, and thus you think that you can direct socially acceptable criticism towards it, or because you think that it will actually cause much harm?

  3. I think it is problematic to jump to conclusions about what motivates scientists. As a consequentialist I think it might not even matter unless it can be influenced to somehow produce better outcomes.

    The big question is on what grounds we draw lines against experiments. Risk of a very high negative impact, sure. But probability also matters (this is why I am pretty sanguine about the potentially far worse impact of particle physics experiments – the probability is so low that the risk doesn't matter). Plus we need to understand the potential benefits. The problem is that none of these factors are easy for an outsider to understand or evaluate (anybody here care to actually try to calculate the expected number of lives saved versus the number of lives risked by the experiment? a toy sketch follows at the end of this comment), and hence we need to have experts and insiders do the evaluation – and these days nobody trusts an expert, as some of the knee-jerk responses above suggest.

    It is worth considering that if we think some experiments are too dangerous to be allowed, we should also consider other activities that are equally risky (and presumably try to constrain them just as much). As I see it, basing one's defence on nuclear deterrence might be just as acceptable or unacceptable as risky flu research. And if one takes a hardline stance against probabilistically risking millions of lives, one should probably consider doing something about tobacco companies too.
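
    As a purely illustrative sketch of such a calculation (every number below is a placeholder assumption, not an estimate anyone has defended), it might look something like this in Python:

```python
# Toy expected-value comparison for dual-use research.
# Every number is a placeholder assumption, not a defended estimate.

p_natural_pandemic = 0.01   # assumed yearly probability of a natural H5N1 pandemic
deaths_if_pandemic = 50e6   # assumed death toll of such a pandemic
risk_reduction = 0.05       # assumed fraction of that risk the research mitigates
                            # (earlier warning, surveillance, vaccine targets)

p_lab_release = 1e-4        # assumed yearly probability of an accidental release
p_release_pandemic = 0.1    # assumed probability that a release seeds a pandemic

expected_saved = p_natural_pandemic * deaths_if_pandemic * risk_reduction
expected_risked = p_lab_release * p_release_pandemic * deaths_if_pandemic

print(f"Expected lives saved per year:  {expected_saved:,.0f}")
print(f"Expected lives risked per year: {expected_risked:,.0f}")
```

    The point is not these particular numbers, but that the conclusion flips under plausible changes to them – which is exactly why the evaluation needs people who can actually estimate the inputs.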

  4. Anders, I'm a consequentialist too, but I think the scientists' motivations do matter. I'm not saying that's the main issue here, but if the scientists' motivations were as venal as Nick D suggests, then it would certainly influence my view of whether this type of research should be conducted. It is one thing to try to figure out for ourselves what the likely pros and cons are, and another to want to ensure that the scientists themselves, who may be best placed to judge at least from some perspectives, are acting on the basis of (largely) noble intentions.

    In this context I do not take quite such a bleak view as Nick D. Doubtless the scientists' motivations are the usual mixed bag of noble and less noble intentions, of which they themselves are probably largely unaware. Perhaps the answer is to make them more aware.

    1. I wonder if scientists (or anybody else) who became aware of their less noble intentions actually would change their intentions, change jobs, or just accept the bad intentions as "part of who they are" and then cheerfully let them play out? People typically like to consider themselves moral, but cognitive dissonance is an amazing force.

      1. So Anders, are you saying that morals and ethics are a disposable commodity, discarded if the scientist feels they are in his/her way? Wasn't that the same thinking behind the Nuremberg experiments? If it is believed that it will help our medical knowledge, is it acceptable? Yes, this is the way of many, but is this the ideal we should be working towards?

        My view was a little on the extreme side, I have to admit, but the core principle remains. Does the end justify the means? Is creating a mutant version of an incredibly dangerous virus actually helping us? As I have said before, I am not against bio research; in fact I am quite the opposite. But instead of the money and effort put into creating this virus, it could have been spent on developing a way to inhibit the virus. Altering a virus is not a bad thing in my opinion, except when it's been done to make it even deadlier to humans.

        The H5N1 was a biowarfare dream; consider it: release a couple of birds, and wherever the birds do their business is dead. Now it's even more contagious to humans. I just think the creation of this virus crossed the boundary from helping to harm. It may never hurt anyone, pray God, but the fact is, just one mistake is all it needs. Humans are prone to mistakes; we are far from perfect.

        And if it does get out, can it be contained? Will it mingle with other viruses and bacteria and learn new tricks? Most superbugs today were created by learning to adapt to high-level antibiotics and antivirals. If it learns from them, can you stop it?

        I know most lab bacteria and viruses have an in-built kill switch, so to speak, with only a certain lifetime, but mutations have occurred in the past which have destroyed these.

        But I believe Peter Wick is right: as members of the public, we need more awareness when it comes to these kinds of experiments. We certainly didn't get the chance to say no, we don't want that. But of course, "experts" know better. Ha – according to the "experts" I am dead; I should have died in 2000. My own oncology specialist, neurosurgeon, haematologist, etc. say I should be dead.

        All the training in the world can only prepare you so much, just like all the precautions in the world can only protect you so much, but in life there are NO guarantees.

        1. Nick, I think you are reading plenty into my comments that I did not say.

          Doing medical research and considering risks are ethical activities. That is, they deal with things that have big impacts and hence morally matter. Researchers should consider the morality of their activities, and that consideration is ethics. But just because you make ethical considerations or follow a moral code doesn't mean you will come to the right conclusions or do the right thing – people are often mistaken. How to reduce ethical or practical mistakes, *that* is the important question here. In my post I outlined some considerations on how to make the situation better, but I do not think they are perfect.

          As for H5N1 being a good biowarfare agent, I am deeply sceptical. I see plenty of problems: quick mutation, a vector that doesn't care about national boundaries, imperfect vaccines (even with vaccination a sizeable fraction of people are vulnerable) and a payload that is simultaneously too non-lethal (if you want to wipe out the enemy) and too lethal (if you just want to incapacitate a society without too bad repercussions for you when the truth gets out). There are better agents already in existence.

          The point about the public is interesting. To what degree should public opinion influence what experts are allowed to work on? On one hand the public is ill-informed and often primed by incorrect information, on the other hand the idea is that in a democratic society the public should be able to influence issues of common interest (and this most certainly is one). One answer might be that we use representatives: rather than having a referendum about everything we have a parliament representing us and making decisions in our name, and similarly we might have appointed experts or review boards that decide on what research to do. The problem is that such systems only work when there is a general trust of the representatives, and the ability to exert appropriate oversight.

      2. "People typically like to consider themselves moral, but cognitive dissonance is an amazing force." Strictly speaking, you mean self-deception. Cognitive dissonance is what happens when the self-deception starts to break down.

        My guess is that a bit of both would happen: they would to some extent accept their less noble intentions as "part of who they are", but they would also, to some extent, make efforts to align their actions with the kind of people they would really like to be.

  5. This is the same institute that WHO chief adviser Dr. Albert Osterhaus operates from, and which was largely responsible for the global drug companies making billions upon billions when the WHO declared an official pandemic. Now they create the greatest human killer of all time in their labs. Do I feel another tens-of-billions-of-dollars drugs scam coming on? Possibly. How do these non-intelligent people live with themselves when this could quite easily fall into the wrong hands… and they want to put it into print also. Sheer madness! These people want locking up and the key throwing away. The swine flu showed over a year ago that by the time a new vaccine was produced we would all be well dead: the Spanish flu took, from start to finish, 6 months to kill up to 100 million, whereas the swine flu vaccine was only created in the labs and then approved for use after 7 months and 1 week… and then we had to manufacture it for the world. Therefore I will say it again: the only way to stop such a thing happening (and according to WHO D-G Margaret Chan it is a matter of when, not if) is to address the killer virus at source. Is there, respectfully, anyone listening out there?

    Dr David Hill
    World Innovation Foundation

  6. A recent development on the topic:

    http://www.nytimes.com/2011/12/21/health/fearing-terrorism-us-asks-journals-to-censor-articles-on-virus.html?_r=1&hp

    And for a less pro-US point of view:

    http://nos.nl/artikel/324009-vs-censureert-grieponderzoek-erasmus.html
    http://www.guardian.co.uk/world/2011/dec/21/bird-flu-science-journals-us-censor

    Now it raises another issue that perhaps one of the blog writers should address: is there something wrong with a country (the USA) interfering in the research of another (Holland) and "strongly suggesting" that it withhold scientific information from international publication, even if it claims that doing so is for the good of the people (whatever that means)?

    1. Theo:
      The US government PAID for the research in Holland – Fouchier is largely funded by an NIH contract. So yes, it's totally appropriate for the US to direct what happens to research it legally owns. If Holland (and the rest of Europe, for that matter) wants to step up and start funding research at anywhere near the levels the US does, then maybe they can join the conversation.

      To all the other comments: before this research, a large proportion of the scientific community felt we should lower the priority of H5 research because the virus was incapable of acquiring human-to-human transmission. This is in the public record and goes back years. Several groups had tried similar research in the past and failed, leading to even more calls for dropping this as a potential pandemic virus. That this work was successful shows that it is possible, and possible with relatively few mutations. That was the point of the work, and the researchers shouldn't be penalized for solving a scientific question that had publicly been debated for years.

      1. I confess I have not seen Fouchier's contract and cannot judge the legal aspect of the matter – even though I find it highly unlikely that the agreement has a clause that allows the sponsor to control the publication's content.

        What I want to highlight is the ethics of the whole thing. Is having money a valid reason for ignoring principles of freedom of research? I mean, are the US exempt from giving any reasonable justification for the censorship, only because they are financing the scientist?
