
Should Peer Review be Rejected?

In most academic disciplines, academics devote considerable energy to trying to publish in prestigious journals. These journals are, almost invariably, peer reviewed. When an article is submitted, the editors send it out to expert reviewers, who typically report back a few months later. If the article is judged to be of sufficient quality by those referees and by the editor (and perhaps an editorial board), it will be published, often after revisions have been made. If not, as in most cases, the author is free to try to publish the article in another journal. As anyone who has participated in this process can attest, it is very time consuming and often frustrating. The best journals publish only a small percentage of submissions, so an author targeting such journals may have to submit an article several times; and in fields where the convention is that one should only submit to one journal at a time (almost all fields), it may take a year or longer for a paper to be accepted somewhere. Not only is the process very time consuming, it is often capricious: the referees that one’s paper is forwarded to may not be competent to assess it, and may be biased in various ways against (or for) it. For these sorts of reasons, many academics have wondered whether there might be a better way.

One alternative, suggested by Richard Smith, the former editor of the British Medical Journal, is ‘… to publish everything and then let the world decide what is important’ [see http://breast-cancer-research.com/content/12/S4/S13]. In a sense this is already possible. Academics can post their papers on their own websites, link them to blogs and so on, and the world will make up its own mind. As things stand, though, academics are unlikely to do just this, as the world is likely to conclude that their papers were not good enough to make it through the peer review process and find a home in a scholarly journal. Smith’s proposal would only have a chance of working if many academics agreed to abandon the peer review system and academics ceased to stigmatize papers that are not published in peer reviewed outlets. Could the ‘open slather’ approach work, though? A disadvantage for readers is that they would be overwhelmed with information. At the moment I know which journals in my field contain articles that are likely to be worth reading, and which contain articles that are likely to be a waste of time. This information guides my reading habits. But what if I did not have this information and was simply presented with a deluge of papers? How would I decide what to read? I can’t read everything that is relevant to my work. In fact I can only read a minority of that material, so I will need some markers of quality to guide me. I suspect that in the absence of information about journal reputation I would be guided, for the most part, by the reputation of authors and by the number of times papers are cited. The consequences of such a system would therefore generally be bad for early career academics. At the moment an early career academic can aim to publish in a high quality journal, thereby attracting attention to their work and building up a reputation. If Smith’s proposal were adopted, this strategy would not work and their articles would often simply be ignored, regardless of their merits.

A refined version of Smith’s proposal has recently been made by Travis Saunders of ‘Science of Blogging’: http://scienceofblogging.com/time-for-a-new-type-of-peer-review/. Saunders’ suggestion is that journals should publish all submitted papers (presumably online), together with a set of scores that referees have given them for overall quality, methodology, chance of bias, and so on. If papers are revised and resubmitted, they would be re-scored and the revised papers would be published, together with their new scores. I see two problems with this proposed system. The first is that it may lead many authors to withdraw papers from journals when they receive low scores and submit them to other journals in the hope of receiving higher scores. Saunders does allow that a score could be increased if a paper is revised and resubmitted. However, authors are likely to suspect that the initial score will have an anchoring effect, influencing the revised score. If a referee gives my paper 2/100, then after revision they may give it, say, 4/100, but they are unlikely to give it the 90/100 that I feel I deserve and which I could perhaps get at another journal. This problem could be solved by sending resubmissions to new referees who are unaware of the old score. But if this is a possibility, then some authors will simply make cosmetic revisions to papers and resubmit them many times until they receive a score that they are happy with. The second problem is that the scores are not very useful information unless I know something about the ability of the referees to score papers accurately. Why should I care that some referee I have never heard of thinks a particular paper is worth a particular score? As with Smith’s proposal, my suspicion is that this will lead to conservative reactions: I will be inclined to read those papers that are scored highly by referees whom I have heard of and who have strong reputations, and will disregard other papers. This problem might be ameliorated if I knew that particular journals only selected referees who were accurate judges of quality. But if I knew that, then I would, in effect, be relying on the reputation of journals, and this is something that Smith and Saunders seem to want to get away from.

It’s worth thinking about alternatives to peer review, which is problematic in many ways. However, I am unconvinced that the alternatives on offer – or at least the ones I am aware of and have discussed here – are improvements on it.

 


3 Comments on this post

  1. Over the past week I have been emailed by a journal wanting me to be an editor. It would have been flattering except that the email is obviously automated, repeated every few days, and the journal is in a field I have never worked in. Yet it says: ‘We are aware of your reputation for quality of research and trustworthiness in the field of “Molecular Cloning & Genetic Recombination”.’

    I think Saunders’ description of the publishing field explains this nicely. Low-end journals where most papers can be published are good for academics wanting to fill their CVs, and good for publishers, who get more to add to their journal bundles. Such journals also need editors to give them some academic credibility, and editorships are another nice form of CV padding for academics. Everybody wins except science.

    The problem is aligning the incentives so that academics are motivated to produce the best research possible (and to publish the best explanations of it) rather than just producing a lot of research. We just need to 1) find a system for detecting and rewarding reviewers who do quality assessment well (perhaps a reviewer quality and trustworthiness ranking system? tip jars?), 2) make grant bodies (and the other institutions academics are motivated to impress) reward high quality publications and high quality reviewing, and 3) give these bodies incentives to maximize good research.

    Saunders’ suggestion might be a start, but in order for it to take off we need to solve 1-3.

  2. An idea would be to run journals like proper companies. If referees were hired, paid to read rapidly, and let go if they did not give quality feedback, then turnaround would be much quicker and quality would improve. The core of science is now run on a hobby basis, while paying great dividends to the publishers. Some of those dividends could be reinvested in a proper organization for refereeing, while maintaining a reasonable bottom line for the publishing houses. It would also be a great addition to anyone’s CV to have been a hired referee for a good journal.

  3. Christian Munthe

    My own experience is that in many cases problems with peer review stem from the fact that it is not adequately handled by the journal (e.g. regarding delivery timelines, assignment of referee duties to suboptimally competent people, etc.). In many journals, stricter, more consistent and more professionally handled peer review procedures are what is needed, rather than less of the same. But Lars above points to a problem for such stringency and professionalism: many of us are expected to do referee work as an extra, on top of everything else. Some universities credit referee assignments as part of a researcher’s workload, but far from all do. So in this climate of ever tougher pressure to publish and attract external funds, peer review may be expected to deteriorate further. One solution would be for commercially run journals to start paying for referee work, but that cannot be expected to happen anytime soon. Another would be for all universities to count referee work as a clear part of the output expected of anyone whose work assignments include research.

    The idea of publishing whatever crap people send in and then having the debate afterwards is a complete dead end, and it would, furthermore, transfer far too much publishing power to the editors. Post-publication debate is, of course, a good thing. But we can have that anyway!
