Should Peer Review be Rejected?
In most academic disciplines, academics devote considerable energy to trying to publish in prestigious journals. These journals are, almost invariably, peer reviewed. When an article is submitted, the editors send it out to expert reviewers who report on it and, if the article is judged to be of sufficient quality by those referees – who typically report back a few months later – and by the editor (and perhaps an editorial board), it will be published (often after revisions have been made). If not, as in most cases, the author is free to try to publish the article in another journal. As anyone who has participated in this process can attest, it is very time consuming and often frustrating. The best journals publish only a small percentage of submissions, so an author targeting such journals may have to submit an article several times; and in fields where the convention is that one should submit to only one journal at a time (almost all fields), they may find that it takes a year or longer to have a paper accepted somewhere. Not only is this process very time consuming, it is often capricious, as the referees to whom one’s paper is forwarded may not be competent to assess the article in question and may be biased in various ways against (or for) it. For these sorts of reasons, many academics have wondered whether there might be a better way.
One alternative has been suggested by Richard Smith, the former editor of the British Medical Journal: ‘… to publish everything and then let the world decide what is important’ [See http://breast-cancer-research.com/content/12/S4/S13]. In a sense this is already possible. Academics can post their papers on their own websites, link them to blogs and so on, and then the world will make up its own mind. As things stand, though, academics are unlikely to do just this, as the world is likely to conclude that their papers were not good enough to make it through the peer review process and find a home in a scholarly journal. Smith’s proposal would only have a chance of working if many academics agreed to abandon the peer review system and ceased to stigmatize papers that are not published in peer reviewed outlets. Could the ‘open slather’ approach work, though? A disadvantage for readers is that they would be overwhelmed with information. At the moment I know which journals in my field contain articles that are likely to be worth reading, and which contain articles that are likely to be a waste of time. This information guides my reading habits. But what if I did not have this information and was simply presented with a deluge of papers? How would I decide what to read? I can’t read everything that is relevant to my work. In fact I can read only a minority of the material that is relevant to my work, so I will need some markers of quality to guide me. I suspect that, in the absence of journal reputation information, I would be guided, for the most part, by the reputation of authors and the number of times that papers are cited. The consequences of such a system would generally be bad for early-career academics. At the moment an early-career academic can aim to publish in a high quality journal and thereby attract attention to their work and build up a reputation. If Smith’s proposal were adopted, this strategy would not work and their articles would often simply be ignored, regardless of their merits.
A refined version of Smith’s proposal has recently been suggested by Travis Saunders of ‘Science of Blogging’: http://scienceofblogging.com/time-for-a-new-type-of-peer-review/. Saunders’ suggestion is that journals should publish all submitted papers (presumably online), together with a set of scores that referees have given them for overall quality, methodology, chance of bias, and so on. If papers are revised and resubmitted, they would be re-scored and the revised papers would be published together with their new scores. I see two problems with this proposed system. The first is that it may lead to many authors withdrawing papers from journals when they receive low scores and submitting them to other journals in the hope of receiving a higher score. Saunders does allow that a score could be increased if a paper is revised and resubmitted. However, authors are likely to suspect that the initial score will have an anchoring effect, influencing the revised score. If a referee gives my paper 2/100 then, after revision, they may give it, say, 4/100, but they are unlikely to give it the 90/100 that I feel I deserve and which I could perhaps get at another journal. This problem could be solved by sending resubmissions to new referees who are unaware of the old score. But if this is a possibility, then some authors will simply make cosmetic revisions to papers and resubmit them many times until they receive a score that they are happy with. The second problem is that the scores are not very useful information unless I know something about the ability of the referees to score papers accurately. Why should I care if some referee whom I have never heard of thinks that a particular paper is worth a particular score? As with Smith’s proposal, my suspicion is that this will lead to conservative reactions. I will be inclined to read those papers that are scored highly by referees whom I have heard of and who have strong reputations, and will disregard other papers.
This problem might be ameliorated if I knew that particular journals selected only referees who were accurate judges of quality. But if I knew that, then I would, in effect, be relying on the reputation of journals, and this is something that Smith and Saunders seem to want to get away from.
It’s worth thinking about alternatives to peer review, which is problematic in many ways. However, I am unconvinced that the alternatives on offer – or at least the ones that I am aware of and have discussed here – are improvements on peer review.