
Experimenting with oversight with more bite?

It was probably hard for the US National Science Advisory Board for Biosecurity (NSABB) to avoid getting plenty of coal in its Christmas stocking this year, sent from various parties who felt the NSABB was either stifling academic freedom or not doing enough to protect humanity. So much for good intentions.

The background is the potentially risky experiments demonstrating the pandemic potential of bird flu: the NSABB urged that the resulting papers not include “the methodological and other details that could enable replication of the experiments by those who would seek to do harm”. But it can merely advise, and it is fairly rarely called upon to review potentially risky papers. Do we need something with more teeth, or will free and open research protect us better?

During the holidays I watched the film Contagion, recommended to me by an epidemiologist (high praise indeed). It is worth seeing and considering. The film depicts a plausible outbreak of a viral pandemic, ending with at least 26 million dead globally. In this fairly-bad-case scenario there is still no risk of human extinction or of civilizational collapse, but there are certainly major threats to life and liberty, hard practical and ethical decisions, and every society would no doubt be permanently marked by the experience. It is not a bad benchmark against which to consider the work of the NSABB (to reach existential-risk level we would need to be very unlucky, or to run into deliberate biotechnological genocide, something research biosecurity cannot do much about).

As I argued in my previous post on the topic, as long as the threat is not an existential threat to the survival of the species it is “merely” a risk-benefit issue: very weighty, but it does not overrule all other considerations. If we assume a 1% chance of a major pandemic per year (one 1918 flu per century) with 26 million dead (rather than the 50+ million of the 1918 case), it corresponds to an expectation of 260,000 dead per year – just between the mortality rates of pancreatic and cervical cancer. No matter how serious a pandemic is, there will be other things – liberty, economics, science, etc. – that have to be weighed against safety: the proper question (as noted by Alan in his previous post) is how the weighing is done (and by whom).
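To spell out the expected-value arithmetic behind that figure (using the round numbers above, which are of course only rough assumptions rather than forecasts):

\[
E[\text{deaths per year}] = p \times D = 0.01 \times 26{,}000{,}000 = 260{,}000
\]

where \(p\) is the assumed annual probability of a major pandemic and \(D\) the assumed death toll if one occurs.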

Right now the safety of biotechnology research, especially the information-hazard aspect of publishing results, is handled by a hodgepodge of local or national rules, various voluntary guidelines and the goodwill of scientists. It is very likely that the weighing is not done right, and that it is done in an impotent fashion. No matter where one’s intuitions lie, it is clear that spending some effort on improving the system – preferably before something untoward happens that forces reform – would be effort well spent.

One constructive attempt at doing this can be found in a report from the Center for International and Security Studies at Maryland (University of Maryland), Controlling Dangerous Pathogens: A Prototype Protective Oversight System. It sketches out an oversight system at the local, national and international levels aimed at improving biological security:

In an effort to encourage productive discussion of the problem and its implications, this monograph discusses an oversight process designed to bring independent scrutiny to bear throughout the world without exception on fundamental research activities that might plausibly generate massively destructive or otherwise highly dangerous consequences. The suggestion is that a mandatory, globally implemented process of that sort would provide the most obvious means of protecting against the dangers of advances in biology while also pursuing the benefits. The underlying principle of independent scrutiny is the central measure of protection used in other areas of major consequence, such as the handling of money, and it is reasonable to expect that principle will have to be actively applied to biology as well.

It does this by 1) instituting licensing of institutions and individuals that handle risky agents (with vetting of students, janitors and others not formally part of the system), and 2) setting up a system of independent peer review of proposed projects, with review boards at higher levels dealing with agents of greater concern (all the way up to a top international level considering agents of extreme concern) and deciding on the conditions for approval. Information-disclosure practices would be well defined, as would risk-benefit decision criteria, accountability and verification.

It is a lovely armchair product with some serious thinking behind it.

The real problem is of course that building international institutions is hard. It is not impossible, and sometimes worth it. The proposed scheme is a not too outlandish extension of current practices and would no doubt be supported by many scientists and policy-makers, yet it would need funding and international agreements, would have to overcome individual, institutional and national resistance, and would have to build a base of credibility within the scientific community. It can be done, but it would take much dedicated effort.

The proposal recognizes this and points out a useful feature: even a partial implementation is better than nothing. Even if it is just Ruritania that sets up a proper oversight system, that means – assuming the oversight works – that the world has become slightly safer. The more who join, even with imperfect systems, the better.

Trying to implement proper oversight also has another beneficial effect: plenty of critics will be motivated to articulate potential problems. This is very useful, even if some critics are not themselves motivated by love of truth or safety. Trying to actually formulate acceptable professional standards, risk-benefit decision-making criteria, appropriate levels of disclosure and all the other elements of proper oversight will attract far more attention and intellectual capital than wishing for a pre-packaged solution: usually the best way of getting something done well is to create a bad first sketch and see everybody flock to outdo what is there on the canvas. The Maryland report is an excellent seed for this kind of process: the more competing projects it generates, the better.

A sceptic might worry that experimenting with new forms of oversight is in itself risky: they might waste resources or impair science if implemented badly, and like much bureaucracy they might be hard to dislodge once they come into existence. This is a valid input to the process – we had better make sure whatever oversight eventually develops can be held accountable and has incentives to be useful rather than self-serving – but it does not mean it is wrong to try to understand what kind of oversight could work. Rather, we should avoid premature convergence on a fixed institution. At present we do not have the knowledge to properly answer this post’s question about where the proper balance between openness and precaution lies. Figuring it out is not just a nice exercise for the admin-heads out there, but an actual matter of (probabilistic) life and death for thousands or millions.
