Earlier this month, I attended a conference on Controlled Human Infection Studies in the Development of Vaccines and Therapeutics. These studies involve deliberately infecting healthy volunteers with a disease (such as malaria, typhoid, norovirus, or salmonella) in a controlled environment. This research has significant benefits for the development of vaccines [some of the benefits are set out here]. Given that these studies could result in the development of new vaccines, they could serve a crucial role in saving many lives. Nevertheless, intentionally infecting humans with diseases is potentially risky. The degree of risk for the volunteers will vary from case to case, depending on the disease and the efficacy of treatment.
Intentionally infecting humans in order to develop vaccines is not a new idea. One historic example of this is the work of Edward Jenner, neatly summarised in the following:
“In 1796, [Jenner] carried out his now famous experiment on eight-year-old James Phipps. Jenner took pus from a cowpox pustule and inserted it into an incision on the boy’s arm. He was testing his theory, drawn from the folklore of the countryside, that milkmaids who suffered the mild disease of cowpox never contracted smallpox, one of the greatest killers of the period, particularly among children. Jenner subsequently proved that, having been inoculated with cowpox, Phipps was immune to smallpox. He submitted a paper to the Royal Society in 1797 describing his experiment, but was told that his ideas were too revolutionary and that he needed more proof. Undaunted, Jenner experimented on several other children, including his own 11-month-old son. In 1798, the results were finally published and Jenner coined the word vaccine from the Latin ‘vacca’ for cow.”
Now, the purpose of this post is to ask whether you think there is necessarily anything morally problematic about these studies.
Assume that the volunteers are adults who understand that taking part in the study is risky. Moreover, assume that they make competent decisions to participate in the study. Does it matter that they are being intentionally infected with a disease? Would it make a difference to the permissibility of the study if the participants faced a significant risk of death? If so, why? After all, it is a commonly held view that competent adults should be permitted to engage in risky activities, such as dangerous sports. It is also commonly thought that we should allow competent people to refuse medical treatment, even if this results in their death. Given that this is the case, on what grounds could we stop competent people from participating in a risky activity that has potentially huge benefits for others?
It’s an interesting area, and I’d be very interested to know your thoughts.
I believe the issues surrounding challenge studies of vaccines differ from those in most other areas of research on healthy volunteers not in kind, but only in the specifics and perhaps in degree. I don’t see much inherent moral difference between exposing someone to risks by injecting a disease and, say, injecting some novel drug – though there will be practical differences that must be attended to.
A helpful basic resource on the ethical requirements of clinical research is Ezekiel Emanuel’s “What Makes Clinical Research Ethical?” (among other places, it can be viewed online here: http://striepen.uga.edu/infection/emanuel.pdf ). Emanuel lays out seven ethical requirements for clinical research; informed consent is just one of them. The fact that challenge studies are particularly risky is relevant to consent, of course – people should be properly informed of the risks. But it is also relevant to another requirement – a favorable risk-benefit ratio. That requirement mandates that risks be minimized and that they be proportionate to the expected benefits to society. It is impossible to evaluate vaccine challenge studies in the abstract, of course – whether that balance is favorable depends entirely on the case at hand.
But you seem to flirt with the idea that Emanuel is mistaken – perhaps informed consent is all that really matters. We let people engage in death-defying stunts when properly informed, even if they have an unfavorable risk-benefit ratio, so why not let people engage in exceedingly risky research with no proportionate public benefit? This would imply that, as long as subjects are properly informed of the risks and benefits, we need not worry at all about the risk-benefit ratio of the study. I strongly disagree with this, and I’ll give a few quick reasons.
In the first place, clinical research is inherently uncertain. Though researchers may have some idea of the likely benefits of research, these expectations are often mistaken. Risks can easily be (innocently) understated in the consent process, and benefits overstated. Precautions are warranted in such cases of uncertainty.
Secondly, there is an informational imbalance such that the researchers are in a privileged epistemic position, one that could potentially be abused in order to further careers or make a profit. These sorts of abuses are really why we have research regulations in the first place; researchers are not evil, but their judgment can often be clouded by various incentives and limited perspectives.
Thirdly, even (or especially) when presented with accurate and detailed information about risks and benefits, subjects are generally not experts in the field, and their comprehension of complex issues can be lacking. So we may not be able to say with confidence that consent was truly and adequately informed.
Fourthly, and more generally, autonomy is not the only thing that matters. We should be independently interested in promoting people’s well-being, which may at times involve restricting people’s autonomy. This is sometimes derided as paternalistic, but I believe any alternative will be overly narrow-minded, fetishizing autonomy at the expense of other values.
And fifthly, allowing other risky activities without much oversight can be justified because external agents are not in a good position to judge the risk-benefit ratio. Whether some sporting activity is worth the risk to the players will be relative to each player’s personal benefit and values. States should give people a good amount (though not an infinite amount) of space to determine the worth of such activities for themselves. Research, by contrast, is not undertaken for the sake of the subjective enjoyment of the parties involved, but rather for the prospect of some general social good – one that is more objectively measurable. There is much less of a worry (at least with healthy volunteers – complications emerge with patient participants) that requiring risks to be minimized and an adequate risk-benefit ratio will problematically deny some participants some centrally valuable experience. Put another way, the personal autonomy costs of restricting research are generally much less than those of restricting leisure activities.
Also, the Jenner study seems horrifically unethical by modern standards. Not only was it performed on children, who would have had a greatly diminished ability to understand and consent to the study, but there was no oversight whatsoever to ensure that the children’s interests were adequately protected. We may view him positively because the experiment was a success, but ex ante that was very uncertain, and the same principles that justified his study have doubtless been employed by other maverick researchers whose subjects were not so lucky.
David’s mention of “competent adults… (and)… risky sports,” and Owen’s mention of “death-defying stunts” (that perhaps ought to be called, more accurately, “death-enticing stunts”) pretty well define the issue and the outcome.
We tolerate people engaging in all manner of high-risk behaviors for personal gratification, regardless of whether these activities provide any benefit (or incur any cost) to the community at large. Many such behaviors require a willing seller to provide whatever-it-is to the buyer, or some other individual to facilitate the behavior in some way. In many such cases there are few to no safeguards to ensure that a person understands the risks of their behavior.
In examples in which a person typically pays money for the means of their risky behavior, would the ethical tone be altered if the money were removed from the occasion? “Congratulations, here’s (a free bottle of booze) or (a free set of mountain climbing gear).”
What if the recipient was paid a token amount, or a not-so-token amount, to use the products in question (drink the booze or climb mountains, as the case may be)? A “not-so-token amount” might raise an issue if the amount was sufficient to put pressure upon an individual’s judgement; this in turn is relative to their own economic circumstances. A payment considered trivial by a wealthy person could be enough to pressure the judgement of a working person or a poor person.
To my mind, the principles of privacy and of equal protection under the law lead to the definitive outcome that if we allow risk-taking for personal gratification, we can’t judge “the motivation to contribute to science” as less worthy than other motives (e.g. “the motivation to get drunk”). In fact we can have nothing at all to say about motives or about the subjective state of risk-takers in general, other than that they should be informed of the risks they are taking or the penalties for certain kinds of behaviors (e.g. getting drunk and driving).
People routinely risk infectious diseases via behaviors such as going on ocean cruises (norovirus), living in slums (tuberculosis), and eating food that has been discarded by merchants into their refuse bins (salmonella). Some of these risks are taken out of ignorance (cruises/norovirus); some are taken under ferocious economic pressure that would surely violate Emanuel’s criterion (3) for fair subject selection. If we as a society tolerate the disease risks that are inherent in living in slums and eating out of refuse bins, then we can have nothing to say about other individuals taking deliberate risks of the same diseases as volunteer research subjects.
—
Most of Emanuel’s criteria concern factors that depend upon the characteristics of the researcher, or lie in the domain where scientific expertise is the key factor. Only items (6) (informed consent) and (7) (subjects’ wellbeing) unequivocally have as much to do with the characteristics (intelligence and motivations, and wellbeing generally) of the subjects.
I would add to criterion (3) (fair subject selection) that subjects must be chosen who have the intelligence and background knowledge to understand the big-picture considerations behind the study, and the risks to themselves of participating.
I would add to criterion (7) (respect for subjects) that the institution within which the researcher is working must assume 100% responsibility for direct and/or consequential harms to subjects. If a subject has an adverse reaction to a research procedure, despite having consented to the risk of such a reaction, the costs involved should be covered. Realistically this translates to a requirement for insurance with generous benefits.
—
As for Jenner, clearly injecting children with cowpox pus fails today’s ethical standards. However, given the ravages of smallpox epidemics at the time, and the overall higher levels of suffering of all kinds that were accepted as the general condition of humanity, it is understandable and even acceptable in context. Today we would no more accept such methods than we would accept surgery without anaesthesia, but this doesn’t render 18th-century surgeons moral monsters in retrospect.
Though, I should mention, today there is a subculture of parents who subscribe to anti-vaccination conspiracy theories and actually seek to get their children infected with real diseases such as chickenpox, for the sake of “getting it over with” or obtaining “natural” immunity. In some cases they even send infected objects through the post, such as lollipops shared between sick kids and well kids, and even raw saliva. (It would not surprise me if they were also posting pus obtained from chickenpox pustules.) Anyone reading this who is in a position to read up on the subject (keyphrase “pox parties”) and convey the gravity of the situation to the proper authorities is urged to do so.
I’m a medical student, and I recently visited a commercial clinical trials company where they took some of my blood for screening; depending on the results, I may be eligible for a study which involves infecting participants with the common cold, influenza, or RSV. From talking to my colleagues, those who think ‘it’s a bad idea’ generally have two principal concerns: first, the uncertain nature of the risk involved, and second, that health is a particularly valuable good, because good health is useful no matter what you want to do in life. In response to the second concern I point out that there are many other things people do that involve greater risks to their health, and this is where they tend to become incoherent. I think intuitions are being skewed because with other activities the health risk is hidden: if nothing goes wrong while horse riding, say, you can forget about the risk to your health, whereas it’s hard to ignore the risk when you’re actually ill.