By Brian D. Earp (@briandavidearp)
Introduction
Science and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings—concerns I share and which I have written about at length—I still believe that the scientific method is the best available tool for getting at empirical truth. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.
Scientists are people too
In other words, science is flawed. And scientists are people too. While it is true that most scientists — at least the ones I know and work with — are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse “publish or perish” incentive structure that tends to reward flashy findings and high-volume “productivity” over painstaking, reliable research. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite. They have cognitive and emotional limitations, not to mention biases, like everyone else.
At the same time, as the psychologist Gary Marcus has recently put it, “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential.
And it is with that in mind that I bring up the subject of bullshit.
Bullshit in science
There is a veritable truckload of bullshit in science.¹ When I say bullshit, I mean arguments, data, publications, or even the official policies of scientific organizations that give every impression of being perfectly reasonable — of being well-supported by the highest quality of evidence, and so forth — but which don’t hold up when you scrutinize the details. Bullshit has the veneer of truth-like plausibility. It looks good. It sounds right. But when you get right down to it, it stinks.
There are many ways to produce scientific bullshit. One way is to assert that something has been “proven,” “shown,” or “found” and then cite, in support of this assertion, a study that has actually been heavily critiqued (fairly and in good faith, let us say, although that is not always the case, as we soon shall see) without acknowledging any of the published criticisms of the study or otherwise grappling with its inherent limitations.
Another way is to refer to evidence as being of “high quality” simply because it comes from an in-principle relatively strong study design, like a randomized controlled trial, without checking the specific materials that were used in the study to confirm that they were fit for purpose. There is also the problem of taking data that were generated in one environment and applying them to a completely different environment (without showing, or in some cases even attempting to show, that the two environments are analogous in the right way). There are other examples I have explored in other contexts, and many of them are fairly well-known.
An insidious tactic
But there is one example I have only recently come across, and of which I have not yet seen any serious discussion. I am referring to a certain sustained, long-term publication strategy, apparently deliberately carried out (although motivations can be hard to pin down), that results in a stupefying, and in my view dangerous, paper-pile of scientific bullshit. It can be hard to detect, at first, with an untrained eye—you have to know your specific area of research extremely well to begin to see it—but once you do catch on, it becomes impossible to un-see.
I don’t know what to call this insidious tactic (although I will describe it in just a moment). But I can identify its end result, which I suspect researchers of every stripe will be able to recognize from their own sub-disciplines: it is the hyper-partisan and polarized, but by all outward appearances, dispassionate and objective, “systematic review” of a controversial subject.
To explain how this tactic works, I am going to make up a hypothetical researcher who engages in it, and walk you through his “process,” step by step. Let’s call this hypothetical researcher Lord Voldemort. While everything I am about to say is based on actual events, and on the real-life behavior of actual researchers, I will not be citing any specific cases (to avoid the drama). Moreover, we should be very careful not to confuse Lord Voldemort for any particular individual. He is an amalgam of researchers who do this; he is fictional.
Lord Voldemort’s “systematic review”
In this story, Lord Voldemort is a prolific proponent of a certain controversial medical procedure, call it X, which many have argued is both risky and unethical. It is unclear whether Lord Voldemort has a financial stake in X, or some other potential conflict of interest. But in any event he is free to press his own opinion. The problem is that Lord Voldemort doesn’t play fair. In fact, he is so intent on defending this hypothetical intervention that he will stop at nothing to flood the literature with arguments and data that appear to weigh decisively in its favor.
As the first step in his long-term strategy, he scans various scholarly databases. If he sees any report of an empirical study that does not put X in an unmitigatedly positive light, he dashes off a letter-to-the-editor attacking the report on whatever imaginable grounds. Sometimes he makes a fair point—after all, most studies do have limitations—but often what he raises is a quibble, couched in the language of an exposé.
These letters are not typically peer-reviewed (which is not to say that peer review is an especially effective quality control mechanism); instead, in most cases, they get a cursory once-over by an editor who is not a specialist in the area. Since journals tend to print the letters they receive unless they are clearly incoherent or in some way obviously out of line (and since Lord Voldemort has mastered the art of using “objective” sounding scientific rhetoric to mask objectively weak arguments and data), they end up becoming a part of the published record with every appearance of being legitimate critiques.
The subterfuge does not end there.
The next step is for our anti-hero to write a “systematic review” at the end of the year (or, really, whenever he gets around to it). In it, He Who Shall Not Be Named predictably rejects all of the studies that do not support his position as being “fatally flawed,” or as having been “refuted by experts”—namely, by himself and his close collaborators, typically citing their own contestable critiques—while at the same time he fails to find any flaws whatsoever in studies that make his pet procedure seem on balance beneficial.
The result of this artful exercise is a heavily skewed benefit-to-risk ratio in favor of X, which can now be cited by unsuspecting third-parties. Unless you know what Lord Voldemort is up to, that is, you won’t notice that the math has been rigged.
So why doesn’t somebody put a stop to all this? As a matter of fact, many have tried. More than once, the Lord Voldemorts of the world have been called out for their underhanded tactics, typically in the “author reply” pieces rebutting their initial attacks. But rarely are these ripostes — constrained as they are by conventionally minuscule word limits, and buried as they are in some corner of the Internet — noticed, much less cited in the wider literature. Certainly, they are far less visible than the “systematic reviews” churned out by Lord Voldemort and his ilk, which constitute a sort of “Gish Gallop” that can be hard to defeat.
Gish Gallop
The term “Gish Gallop” is a useful one to know. It was coined by the science educator Eugenie Scott in the 1990s to describe the debating strategy of one Duane Gish. Gish was an American biochemist turned Young Earth creationist, who often invited mainstream evolutionary scientists to spar with him in public venues. In its original context, it meant to “spew forth torrents of error that the evolutionist hasn’t a prayer of refuting in the format of a debate.” It also referred to Gish’s apparent tendency to simply ignore objections raised by his opponents.
A similar phenomenon can play out in debates in medicine. In the case of Lord Voldemort, the trick is to unleash so many fallacies, misrepresentations of evidence, and other misleading or erroneous statements — at such a pace, and with such little regard for the norms of careful scholarship and/or charitable academic discourse — that your opponents, who do, perhaps, feel bound by such norms, and who have better things to do with their time than to write rebuttals to each of your papers, face a dilemma. Either they can ignore you, or they can put their own research priorities on hold to try to combat the worst of your offenses.
It’s a lose-lose situation. Ignore you, and you win by default. Engage you, and you win like the pig in the proverb who enjoys hanging out in the mud.
Conclusion
As the programmer Alberto Brandolini is reputed to have said: “The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.” This is the unbearable asymmetry of bullshit I mentioned in my title, and it poses a serious problem for research integrity. Developing a strategy for overcoming it, I suggest, should be a top priority for publication ethics.
Footnote
¹ There is a lot of non-bullshit in science as well!
Acknowledgement
This is a modified version of an article that is set to appear, in its final and definitive form, in a forthcoming issue of the HealthWatch Newsletter (no. 101, Spring 2016). See http://www.healthwatch-uk.org/. Please note that this essay was co-published online at Quillette Magazine, here: http://quillette.com/2016/02/15/the-unbearable-asymmetry-of-bullshit/. [Note: I have been made aware that the magazine has taken a controversial political turn in the time since I published this article.]
Just finished reading your piece ‘The unbearable asymmetry of bullshit’ and loved it. The answer, of course, is for higher education institutes to teach the academic literature On Bullshit (Harry Frankfurt’s 2005 book). Here is my early contribution:
On the field of bullshit – GW “Bill” Riedel, Alumnus, The Gateway, U. of Alberta, 12 MARCH 2008
Following an all-candidates’ meeting during the recent Ontario provincial election I had the privilege to briefly discuss with some journalism students the fact that “bullshit” has now become a respectable academic field of study. If one uses Google Scholar with the search string “on bullshit” one is rewarded with 5,590 hits of what should be mainly the academic, scholarly, peer-reviewed literature on the subject. While the revival of studies on bullshit is generally credited to the phenomenal success of Princeton University emeritus philosophy professor Harry Frankfurt’s 2005 book simply entitled “On Bullshit”, this author is of the opinion that Neil Postman’s paper delivered at the National Convention for the Teachers of English on November 28, 1969 in Washington, D.C. entitled “Bullshit and the Art of Crap-Detection” should be the first reference any student should read. Postman made the following point: “As I see it, the best things schools can do for kids is to help them learn how to distinguish useful talk from bullshit.” A little later he continues: “every day in almost every way people are exposed to more bullshit than it is healthy for them to endure”. It was left to Frankfurt to proclaim that “one of the most salient features of our culture is that there is so much bullshit”; however, the purpose of this short submission is to draw to the attention of students that the rapidly expanding academic literature on bullshit has something of interest to most students. Let me close by simply providing three examples:
1. Students of journalism and political science should be interested in – Brandenburg, Heinz, Short of Lying – The prevalence of bullshit in political communication, presented at the Annual Conference of the Political Studies Association, Reading, 4-6 April, 2006.
2. Accounting, business and science students might enjoy Queen’s University Norman B. Macintosh’s Accounting – Truth, Lies, or Bullshit? A Philosophical Investigation.
3. For anyone wanting to go deeper into bullshit the book by Gary L. Hardcastle and George A. Reisch, 2006, Bullshit and Philosophy – guaranteed to get perfect results every time, Open Court, Chicago is a must library addition.
While the entire book is worth reading, although some chapters are heavy slugging, the following chapters are highly recommended:
Chapter 6 by University of British Columbia Professor Alan Richardson – Performing Bullshit and the Post-Sincere Condition, should be read by every student thinking of post graduate studies.
Chapter 14 by Heather Douglas – Bullshit at the Interface of Science and Policy: Global Warming, Toxic Substances, and Other Pesky Problems, page 215, is must reading for policy wonks, politicians and bureaucrats.
Canadians have made excellent contributions to the literature on bullshit as can be seen by visiting http://bullshitcitynorth.blogspot.com. Perhaps University of Manitoba professor John S. McCallum said it best in his 2005 Viewpoint – On Bullshit is not bullshit, Ivey Business Journal, Sept/Oct, page 1-3.
( http://www.thegatewayonline.ca/on-the-field-of-bullshit-20071030-1235 )
Thank you for this excellent reading list, which I have begun to go through.
Cheers,
Brian
The key problem with bullshit that enjoys a certain “truthiness” is attentional: its presence is a time-wasting distraction from the good stuff. The agnotological processes whereby, for example, tobacco companies or climate change denialists skew “debates” by creating false controversy are well-studied. In theory scholarly publishing should be relatively resistant to this process through the diligent application of peer review, but this merely displaces the reader’s attentional burden to a consideration of the judgement of the peer reviewers, who are potentially also prone to the retransmission of bullshit. The detail and verbosity of scholarly refutation amplifies the attentional problem.
Whilst freedom of expression worthy of the name must surely include the freedom to bullshit, the solution lies in maintaining reputational systems that offer the user efficient filtering tools enabling the basic command “never show content by this author again.” It would be an error to universalise such judgement, for we all have our own foibles and tolerances. For example, the Facebook system assigns a single “interest score” to each post, which is then used to rate that post for all users. Twitter’s “follow/unfollow” mechanism, which delegates filtering to the judgement of each individual user, is much closer to what is required. The Pirate Party’s attempts to implement “liquid democracy,” whereby rank-and-file members anoint experts as delegates on any particular issue, are also worthy of study.
Drinking at the commenting firehose at a heavily-trafficked site can be made less overwhelming using ranking systems (see Slashdot). And so on.
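The per-reader filtering idea sketched above (as opposed to a single global score) is simple enough to illustrate in a few lines of code. This is a minimal, hypothetical sketch: the post structure and function names are illustrative assumptions, not any real platform’s API.

```python
# Hypothetical sketch of per-reader filtering: each reader keeps a
# personal mute list ("never show content by this author again"),
# so judgements are not universalised across all users.

def filter_feed(posts, muted_authors):
    """Return only the posts whose author this reader has not muted."""
    return [p for p in posts if p["author"] not in muted_authors]

posts = [
    {"author": "voldemort", "text": "yet another systematic review"},
    {"author": "alice", "text": "a careful replication study"},
]

# Reader A mutes "voldemort"; reader B mutes no one.
# Each reader sees a different feed from the same underlying posts.
feed_a = filter_feed(posts, {"voldemort"})
feed_b = filter_feed(posts, set())
```

The point of the design is that the filtering judgement lives with each reader rather than in one global score, which is the contrast drawn above between the Facebook-style and Twitter-style mechanisms.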
Frankfurt’s book is of course itself somewhat bullshitty, devoting, as it does, a substantial part of its rather slender discussion to a lengthy argument establishing that bullshit is synonymous with humbug. (What is humbug?) But hey! It’s a fun cite on any reference list.