
Predictors of Alzheimer’s vs. the Hammer of Witches

Matthew L Baum

Round 1: Baltimore
I first heard of the Malleus Maleficarum, or The Hammer of Witches, last year when I visited Johns Hopkins Medical School in Baltimore, MD, USA. A doctor for whom I have great respect introduced me to the dark leather-bound tome, which he pulled off his bookshelf. Apparently, this aptly named book, published in the late 1400s, was used by witch-hunters as a diagnostic manual of sorts to identify witches. Because all the witch-hunters used the same criteria, as outlined in The Hammer, to tell who was a witch, they all, more or less, identified the same people as witches. Consequently, the cities, towns, and villages all enjoyed a time of highly precise witch wrangling. This was all fine and good until people realized that there was a staggering problem with the validity of these diagnoses. Textbook examples (or Hammer-book examples) these unfortunates may have been, but veritable wielders of the dark arts they were not. The markers of witchcraft these hunters agreed upon, though precise and reliable, simply were not valid.

Round 2: New York
Though the Malleus Maleficarum might seem at first an antiquated curiosity, it provides a cautionary tale relevant to modern psychiatry. Prediction, or early diagnosis, of Alzheimer’s disease is one area where the Hammer might be relevant. I have noticed an increasing number of articles in the popular press on the topic. A well-written one published recently in the New York Times (http://www.nytimes.com/2010/12/18/health/18moral.html?pagewanted=1&_r=2&hp) discusses the dilemma faced by some translational researchers (those trying to bridge the gap between laboratory bench and clinical bedside) over whether to disclose the results of experimental diagnostic tests, ranging from spinal taps to brain scans, to patients enrolled in longitudinal research programs, and whether to offer the experimental tests to those outside the studies who might seek them. “Since there is no treatment, doctors wonder if they should tell people, years earlier, that they have the disease, or a good chance of getting it,” writes the NY Times journalist. The ethical issues behind the right to know, not know, or choose to know are fascinating and deserve much discussion. I will limit myself, however, to a brief and unfinished discussion of the disturbing lack of distinction being made between early manifestation of “the disease” and merely “a good chance of getting it.” Of the 283 people who commented on the article, only half a dozen make the distinction; the author uses the terms almost interchangeably. The fascinating comments section runs rich with language like, ‘I would want to know—not knowing would be akin to not being told you had cancer;’ given this conflation, it is understandably rife with accusations that there should be no hesitation to disclose test results and that any hesitation is unethical and stems from hard paternalism. But would these tests uncover a condition analogous to cancer? As one frustrated and insightful commenter posted, ‘they are not early tests for Alzheimer’s–they are tests for an increased risk. Please stop mixing them up because they are not the same.’ But the line between the two is much messier than one might think, and for that reason needs to be clearly drawn if we hope to minimize the risk of creating a Malleus Alzheimerum.

Round 3: A risk of disease or a disease of risk?
Many of these predictive tests are almost side-effects of the long road toward uncovering the biological bases of disease: collecting clues about what goes wrong so that we can hope to develop specific and effective medicines or prophylaxis. But we increasingly find ourselves in that uncomfortable territory where knowledge of what goes wrong outpaces our ideas about what to do in the here and now. Let us anticipate, reasonably, that with ongoing research we will continue to develop better and better predictors of diseases like Alzheimer’s. We may find that being at risk for a disease and having a disease defined by risk become one and the same, depending on how you frame it. The reasoning runs like this: I am concerned about the devastation caused by stroke, so I do the research and find that strokes are often, though not always, caused by dislodged atherosclerotic plaques. I then say that having many atherosclerotic plaques puts you at risk for stroke, but also that having many atherosclerotic plaques is itself a disease called atherosclerosis. I then do more research and find that those with atherosclerosis tend to have higher blood pressure. I say that having higher blood pressure puts you at risk for atherosclerosis, and I also call this high blood pressure hypertension, its own disease. In this way we develop nested shells of conditions that put you at risk for disease A while also being disease B. But as you fall down the research rabbit hole, it becomes increasingly important to recognize where you are in the stratification. To say that atherosclerosis is presymptomatic (or early-diagnosed) stroke, or that high blood pressure is presymptomatic (or early-diagnosed) stroke (or atherosclerosis), would be misleading and potentially damaging. With very few exceptions, death being one, most clinically relevant conditions can be thought of in this dual manner; Alzheimer’s, for example, could be incompletely defined as a prediction of lengthening episodes of memory and/or cognitive failure. At each layer of risk, the weight the information should carry in decisions shrinks in proportion to its diminishing predictive validity for the disease about which you are actually concerned. To classify a lower shell as a presymptomatic version of a higher one is to give it increased weight prematurely and inappropriately. With something as devastating as Alzheimer’s disease, this distinction becomes incredibly important. A sobering and common response in the comments section was that a positive prediction would prompt lavish vacations that could not really be afforded, while others spoke of arrangements for euthanasia; both responses underscore the need to give these tests the weight they deserve and no more. Unfortunately, as outlined below, determining that weight might be more difficult than we expect.
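
To make that attenuation concrete, here is a minimal sketch in Python. All of the probabilities are hypothetical, invented purely to illustrate the bookkeeping, and it assumes for simplicity that hypertension affects stroke risk only by way of atherosclerosis.

```python
# A minimal sketch (entirely hypothetical numbers, not clinical data) of how
# the weight a finding deserves shrinks as you descend the nested layers of
# risk described above: hypertension -> atherosclerosis -> stroke.

# Assumed illustrative conditional probabilities over some fixed time window.
p_stroke_given_athero = 0.30      # P(stroke | atherosclerosis)
p_stroke_given_no_athero = 0.05   # P(stroke | no atherosclerosis)
p_athero_given_htn = 0.40         # P(atherosclerosis | hypertension)

# Stroke risk for someone whose only known condition is hypertension,
# obtained by marginalising over whether atherosclerosis develops
# (assuming hypertension acts on stroke only through atherosclerosis).
p_stroke_given_htn = (p_athero_given_htn * p_stroke_given_athero
                      + (1 - p_athero_given_htn) * p_stroke_given_no_athero)

print(f"P(stroke | atherosclerosis) = {p_stroke_given_athero:.2f}")  # 0.30
print(f"P(stroke | hypertension)    = {p_stroke_given_htn:.2f}")     # 0.15
# With these made-up figures, calling hypertension 'presymptomatic stroke'
# would roughly double the weight the finding actually deserves.
```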

Round 4: This is Spinal Tap
If, as the comments section of the NYT article suggests, a large percentage of people do not differentiate between increased risk and early diagnosis, there is a very real need for widespread and careful discussion of the limits of the scientific data coming out of such tests: the extent of the increased risk that can be predicted and the uncertainties inherent in ongoing research (due to limits in our understanding of disease mechanism, limited generalizability related to sample size, exclusion of those with comorbidities, etc.). For example, one of the tests referenced involves a spinal tap (itself a risky and painful procedure, especially in the elderly) to measure amyloid beta levels in the spinal fluid. Amyloid beta, a protein that accumulates in plaques in the brains of AD patients and is thought to contribute to the cognitive deficits characteristic of the disease, remains one of the best-supported markers in the literature, but it is still of disputed validity. As the famed ‘nun study’ showed, a sizeable proportion of people without dementia symptoms can exhibit amyloid beta pathology (http://jama.ama-assn.org/content/277/10/813.abstract). Also, though in a 2002 study (http://www.nature.com/nrn/journal/v3/n10/abs/nrn938.html) an amyloid beta vaccine reduced plaques in patients’ brains, plaque reduction was not correlated with cognitive function (that study was complicated by unanticipated and dangerous neuroinflammation in some of the subjects, which led to its discontinuation; vaccines are still being pursued). Currently, the existence of amyloid plaques is one of the diagnostic criteria for Alzheimer’s, but there is a possibility that the aspects of the disease about which we care, namely cognitive deficit, could be related to plaques in much the way witchcraft was related to the criteria in the Malleus Maleficarum. That is, those exhibiting amyloid plaques might appear to be at increased risk of Alzheimer’s precisely because having amyloid plaques is a definitional requirement for the disease, not because plaques are involved in the mechanism per se. This sort of “definitional artifact” is an uncomfortable risk whenever the mechanism of a disease is poorly understood. Whether this is the case here remains to be shown, and ongoing research again points to amyloid as a mechanistic agent, this time in its soluble (free-floating) rather than immobile plaque form (http://www.uni-tuebingen.de/uni/kxm/Courses/documents/240709-1.pdf); perhaps it is a valid marker after all. The point is that there is always considerable healthy debate within the scientific field about the validity of these kinds of markers precisely because they are on the cutting edge (for another example, see https://blog.practicalethics.ox.ac.uk/2010/12/cloudy-with-a-chance-of-dementia/).
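
The gap between “increased risk” and “early diagnosis” can also be put in numbers. Below is a hedged illustration using Bayes’ theorem; the sensitivity, specificity, and base rate are invented for the example and are not figures from any actual amyloid study.

```python
# Purely illustrative: how a reasonable-looking marker can still yield only a
# modest probability of disease, i.e. 'increased risk', not 'early diagnosis'.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Bayes' theorem: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical: the marker flags 80% of people who will develop the disease,
# wrongly flags 20% of those who will not, and the base rate of conversion
# in the tested population is 10%.
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.80,
                                prevalence=0.10)
print(f"P(disease | positive marker) = {ppv:.0%}")  # about 31%
```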

Final Round?
Though much may be untangled by ongoing and important empirical work, the paths to end-state diseases like Alzheimer’s seem to be more tortuous than we ever imagined. Just as atherosclerosis increases risk not only for stroke, as mentioned above, but also for heart attack and other organ failure, amyloid beta may be involved in Fragile X Syndrome, the most common form of inherited mental retardation (http://www.ncbi.nlm.nih.gov/pubmed/20088809); and the APOE4 allele, the presence of which is associated with increased risk for Alzheimer’s, also increases risk of several other diseases, including Multiple Sclerosis, Parkinson’s… and atherosclerosis (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2908393/?tool=pubmed#R8). As this research continues, we should move away from the misleadingly simple terminology of ‘presymptomatic disease’ in favor of a hypertension-like model of risk that better acknowledges both these biological forks and the proper rung on the risk/disease ladder. Despite the empirical uncertainty, however, if patients want to have these tests, are willing and able to pay for them, are fully informed about the significant limitations inherent in research diagnostics and about the possible negative as well as positive effects of the knowledge, and are willing to accept those risks, I see no reason why they should not be permitted to have them. Nor do I see any reason why the physician/researcher should be required to administer the tests.


2 Comments on this post

  1. Thanks for this thoughtful article. It is certainly correct that one should not confuse elevated risk with presymptomatic disease. However, I disagree with the last sentence, that physicians should not be required to administer these tests. If a test reveals that I have a 50% chance of developing a life-threatening disorder, then it would be rational to know this and to plan life around that possibility, commensurate with its probability. Physicians should provide tests even when they yield only probabilistic answers. Life is about living in a probabilistic world. Nothing is ever certain, and every decision we make must weigh up probabilities. Perhaps tests which yield very low-probability information should be low priorities in a public health service, but I frankly can’t see why doctors should not have to provide tests which suggest a person has a 1% chance of some serious medical disorder if that person desires it. Probabilistic information is useful in life planning, as well as in undertaking preventative health strategies. I believe it is irrational not to obtain relatively high-probability estimates of serious diseases, and doctors have a moral obligation to provide such tests.

  2. Thank you for the thought-provoking comment, Julian. I am afraid that my response will be far too long. I certainly agree that probabilistic information is desirable, that nothing is ever certain (especially in the clinical lab), and that doctors should provide probabilistically predictive tests if the patient desires them. The problem comes with tests that are still battling it out in the trenches of ongoing research.

    Imagine the overall incidence of disease A is 10 in 100. Some data suggest that 50 out of 100 who “test positive” will go on to develop disease A, another study on a slightly different patient population suggests 20 out of 100, while yet another with a slightly different end-point suggests 12.5 out of 100. Then the probability itself is unknown, not because the results are inconclusive in any of the individual research studies (let’s assume they all reach a 95% confidence level), but because of differences in the age, geography, ethnicity, gender, culture, or diet of the sample populations, or subtle differences in the protocols of the tests (some imaging studies, for example, outline regions of interest by hand because each brain is different, while other studies use an outlining algorithm to save time).

    These sorts of discrepancies can be overcome through carefully considered aggregation, weighted averaging, and meta-analysis of the data to get some sort of adjusted probability estimate (good science), but what if the data just don’t exist yet? In Alzheimer’s research, the fact that people tend not to get the disease until they are very old puts a lot of practical constraints on the research; few if any labs can afford the time and money it would take to perform these tests now on people in their 40s and wait 20-30 years before publishing. So most end up looking in a stepwise manner at intermediate phenotypes (predictors of mild cognitive impairment, plaque and tangle load, white matter hyperintensities, hippocampal shrinkage), much as one investigates atherosclerosis and hypertension out of a concern for stroke. These intermediate phenotypes save time and are invaluable research tactics, but each step must be carefully validated. What is going on is that researchers are developing tests with much promise, but there is a demand for the tests NOW. The timelines of research and of actual life are at odds in a very fundamental way.
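
    To make the “weighted averaging” concrete, here is a toy sketch using the three made-up proportions from the example above; the sample sizes are also invented, and a real meta-analysis would of course do far more (model between-study heterogeneity, harmonize end-points, and so on).

    ```python
    # Toy sample-size-weighted pooling of discrepant study estimates.
    # All numbers are hypothetical.
    studies = [
        {"converted": 0.50,  "n": 60},   # study 1: 50% of positives convert; n assumed
        {"converted": 0.20,  "n": 200},  # study 2: different patient population
        {"converted": 0.125, "n": 400},  # study 3: different end-point
    ]

    total_n = sum(s["n"] for s in studies)
    pooled = sum(s["converted"] * s["n"] for s in studies) / total_n

    print(f"Sample-size-weighted estimate: {pooled:.1%}")  # ~18.2%
    # An unweighted average of the three would be ~27.5%; the single 'probability'
    # a patient gets quoted depends heavily on these modelling choices.
    ```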

    I imagine that few people would suggest that doctors should be required to prescribe drugs or treatments that they are not convinced work just because a person wants them now and is willing to pay. An example could be off-label use of FDA-approved medications: doctors prescribing for condition B a drug approved to treat condition A. Perhaps doctors should be allowed to do this if they themselves think that the off-label use is effective (though if the side-effects are real and non-probabilistic, like the pain of a spinal tap, even this could be challenged), but not all should be required to do so. It should be no different for probabilistic predictive tests still in research. Doctors and researchers perhaps should be allowed to use tests like those in the Alzheimer’s Disease Neuroimaging Initiative (http://www.adni-info.org/Home.aspx) for patients who do not qualify (see http://www.adni-info.org/TakingPartInADNI.aspx) or do not want to participate in the study, if the doctors believe the preliminary data supporting the test’s validity (as a probabilistic test) are strong enough. But it would be morally objectionable to say that those who do not think the data are yet strong enough, and who for that reason have trained in the procedures and are currently testing their strength in new research, should be required to administer the tests all the same.

    Doctors have a moral obligation to provide certain, but not all, tests and treatments that they might theoretically be able to provide. Just as with new drugs, the evidence should meet a minimum threshold before one can make a strong argument that doctors acquire such an obligation, although setting that minimum, and deciding whether the evidence falls above or below it, may be incredibly difficult. Rather than requiring doctors to provide such tests, we should increase access to enrollment in the research studies in which they are being used, just as we increase access to clinical trials of drugs.

    To turn the issue on its head, however: should it be permissible for ethics committees to prevent medical scientists from disclosing the results of experimental tests if the participant wants them (as some of the university ethics boards overseeing the studies in the NYT article have done)? The danger of the subject overweighting the result of the test is clear, but doesn’t this danger disappear if the subject understands the test’s questionable validity and other limitations? If the subject does understand, denying them access to the results of a test they have already undergone would be unacceptably paternalistic. If accurate understanding requires a master’s degree in biochemistry, on the other hand, perhaps the committees are justified; but would that ever be the case? And what about those with mild cognitive impairment?
