
Academic freedom isn’t free

Should scientists be allowed to publish anything, even when it is wrong? And should there be journals willing to accept everything, as long as it seems interesting enough? That is the core of a debate that has blossomed since the journal Medical Hypotheses published two AIDS-denialist papers. Medical Hypotheses is a deliberately non-peer-reviewed journal: the editor decides whether to publish based not on whether papers are true but on whether they are bold, potentially interesting, or able to provoke useful discussion. HIV researchers strongly objected to the two papers, prompting the publisher Elsevier to withdraw them. Now there are arguments for removing Medical Hypotheses from PubMed, the index of medical literature. Ben Goldacre of Bad Science and Bruce G Charlton, editor of Medical Hypotheses, debate the affair on Goldacre's blog. Are there scientific papers so bad that there should be no journal outlet for them?

Goldacre argues that the papers misrepresent real data and other research. People convinced of AIDS denialism often cite apparently scientific papers as arguments from authority to support the claim that they have a valid, evidence-based position. Given that denialism has led to the unnecessary deaths of an estimated 330,000 people, publishing such papers may do real harm. A similar argument can be made against the anti-vaccination papers that have also been published in the journal.

Charlton argues that there is a need for non-peer-reviewed journals so that bold, unconventional hypotheses can be expressed. Given that the truth of a scientific paper can only be determined in retrospect, it does not matter if the papers turn out to be wrong. Ideas should not be suppressed, but rather exposed to critique (and occasional refutation). Forcing Medical Hypotheses out of the academic market would reduce the freedom of academic thinking and expression.

The main issue is whether lay people are at risk from incorrect information. While scientists can often judge the reliability of a paper thanks to their training, it is far less clear to an outsider. This is of course a general problem in any profession. However, science enjoys a particular status of objectivity and relevance that needs to be maintained (both for truth-seeking purposes and to preserve professional status) and that can relatively easily be borrowed to lend credibility to otherwise suspect arguments. Spurious science claims can probably convince people of the validity of many things, although their effectiveness may be limited beyond adding a veneer of credibility (how many care enough to follow up the "scientific" claims made for shampoos and health supplements?). A more serious risk is that people falsely but strongly convinced of something can buttress their position with biased scientific claims, spreading irrationality further.

John Moore, one of the scientists who complained to Elsevier, puts it this way in a comment:

"As scientists, we CANNOT live in Ivory Towers and natter away about our
“rights” to publish anything and everything we want, whatever the
consequences. Those rights are actually privileges, ones that are, in
effect, granted to us by the public who fund what we do. We should
respect that and act accordingly."

From this perspective, scientists have a moral duty both to keep their own profession clean and to provide the best information available. Otherwise the public's money and support would be misplaced.

However, science also requires an open discourse where ideas are proposed, tested and analysed. It thrives on competing opinions, since they spur investigation. And, famously, truly outrageous ideas occasionally turn out to be right despite fierce resistance. Forming orthodoxies unwilling to consider certain possibilities would itself be a betrayal of public trust. This is a very real problem, since science is done by humans – there are cognitive biases, cliques, funding issues, economic and political games, fashions and paradigms that make certain positions harder to propose and maintain, independent of their correctness.

Peer review is intended to deal with some of these problems but introduces others. It is by no means a guarantee against bias, and its quality-control effect is somewhat haphazard. At the same time it does reduce the chance of publishing papers that are incompatible with commonly held views. It is always possible to self-publish such papers anyway – there is no shortage of books, blogs and other media with radical ideas – but they usually lack the inferred credibility of scientific journals.

However, it is easy to set up a peer-reviewed journal even when its contents are highly unscientific (consider this creationist "scientific" journal, which is indeed peer-reviewed – by creationists, of course). The surface scanning typical of most uses of science as mere credibility enhancement does not care that any real scientist would dismiss the source as unreliable; it is the persuasive effect on the lay person that matters.

One might argue, together with C. P. Scott, that "comment is free, but facts are sacred" – there is a fundamental gulf between opinion and hard facts, and while allowing the free flow of opinion is good, allowing the spread of untruths is bad. But in much of science the border between fact and opinion is annoyingly blurred. Experiments ideally produce hard facts beyond opinion, but in practice there is far too much room to interpret the experimental set-up and data. In medicine the facts are often epidemiological statistics, inherently filled with assumptions about what is measured as well as unavoidable statistical uncertainties. It is possible to misrepresent facts or commit logical fallacies (which the denialist papers did, with regard to the conclusions of the Lancet study and the population growth of South Africa), but there are many cases where honest readers would disagree on whether this had even taken place.

Were Medical Hypotheses to disappear, it would hardly dent the amount of bad science already published in peer-reviewed journals, bad science published in other forums, or the wilful misuse of such bad science to push bad policy. It would, however, remove a convenient outlet for truly out-there ideas that one might want to be able to refer to. The best way of dealing with bad science is to subject it to open criticism, both within the scientific profession and when it appears outside science. This is clearly part of the duty to the public – a scientist cannot claim that criticizing bad papers in journals in their own field is within their remit, but criticizing misguided science claims in the public sphere is not.

New technology can help meet this error-correcting demand. Science blogging already allows scientists to voice concerns about misuses of science and to engage with the public. Forums like Ben Goldacre's Bad Science do a valuable job of coordinating criticism. Online journals with comment functions allow in-situ criticism. Search tools can help by flagging papers that have been withdrawn or rebutted. Better online data curation might make it easier to check raw data and see what the accepted facts are. Review boards and idea futures could offer complementary alternatives to current peer review.

In the end, science works because it has error-correction methods. The same is true for open societies. Promoting scientific connoisseurship among the public, and public scientific criticism from professional scientists, might be the most effective way of ensuring that our shared ideas remain true and of high quality.

Post Scriptum: Another interesting take on non-peer-reviewed journals is Rejecta Mathematica, which publishes rejected mathematical papers together with cover letters explaining their rejection history.
