Thursday, April 26, 2007

Peer Review

In the movie Jaws, Hooper, Quint and Brody try to one-up each other comparing battle scars. Scientists engage in analogous grandstanding; we try to outdo each other with horror stories of peer review. For many of us, this is one of the most frustrating parts of science. You work hard designing experiments, collecting data and writing a manuscript. Then you submit it to a journal for peer review, and it disappears, sometimes for longer than it took to complete the work in the first place. In the meantime, your work on the subject often languishes in limbo, your CV either gains nothing or gains a line flagged "(submitted)", and your personal gratification goes unsatisfied. I find myself selecting journals for publication based mainly on their "turn-around time".

What is most surprising to me is that many scientists shirk peer review when asked to perform it. I recall one colleague who suggested the following strategy: after acquiescing to a request for peer review, ignore it until after the deadline, until the editor writes asking where it is. Only then should you perform the task and submit your comments. The colleague reasoned that the editors might then leave you alone and not request future reviews from you. The strategy, in addition to being appalling and uncollegial, is particularly shortsighted. If it spread, it eventually would be applied to your own submissions as well.

In a letter to PLoS Biology, Marc Hauser and Ernst Fehr propose to incentivize the peer review process by punishing transgressors. Editors would keep databases recording when papers were sent to reviewers and when the reviews were returned. Late reviews would be punished accordingly: "For every day since receipt of the manuscript for review plus the number of days past the deadline, the reviewer's next personal submission to the journal will be held in editorial limbo for twice as long before it is sent for review."

Naturally the system is not without bugs. Who is punished when multi-authored papers are submitted? What happens if one of the authors is a timely reviewer and another is a slacker? Hauser and Fehr suggest penalizing only the primary corresponding author, but I think that might easily be gamed by making the least penalized author the correspondent. Slackers could also avoid penalties by refusing requests for reviews. Hauser and Fehr suggest penalizing them by adding a one-week delay to their own next submission. However, this also penalizes those who turn down reviews because of time constraints or because they feel unqualified to do the service.
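On one reading of the quoted rule, the embargo is simply twice the sum of the days the manuscript was held and the days past the deadline. A minimal sketch of that arithmetic (the function name, interface, and dates are my own illustration, not from the letter):

```python
from datetime import date

def embargo_days(received: date, returned: date, deadline: date) -> int:
    """Hauser-Fehr penalty as quoted above, on one reading: the reviewer's
    next submission is embargoed for twice (days the manuscript was held
    plus days past the deadline). Illustrative only."""
    days_held = (returned - received).days
    days_late = max(0, (returned - deadline).days)
    return 2 * (days_held + days_late)

# A review due in 21 days but returned 10 days late:
received = date(2007, 1, 1)
deadline = date(2007, 1, 22)
returned = date(2007, 2, 1)
print(embargo_days(received, returned, deadline))  # 2 * (31 + 10) = 82
```

Even a modestly late review thus compounds into a multi-month embargo, which is presumably the point of the deterrent.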

Hauser and Fehr's suggestion is interesting. Something surely must be done about the broken review process. I would object to monetizing the review process, as some commenters have suggested, but penalizing slackers with embargoes seems like a workable solution. I also like the open peer review system where submitted papers are posted for comments from scientists at large. The internet provides many new avenues to fix the peer review system and I hope journal editors consider them seriously.


  1. I think the idea has some merit, but it is interesting to see what other fields have done to get around the uncredited reviewer problem. I think that at least one of the major chemistry journals publishes an end-of-year list of reviewers with the number of papers each reviewed, how many were on time, etc. This provides an incentive without breaking confidentiality or giving direct compensation. I have also heard of medical journals which, when a review is completed within the deadline, allow the referee to choose either an Amazon gift token or a donation to Médecins Sans Frontières or a similar charity.

  2. Is peer review really that annoying for the reviewers? If it is, can't they just turn it down? It seems to me like there should be some people out of all the scientists who would be willing to take the time and review a paper every few months. But then again, I've never had to do it (I'm only an undergrad), so maybe it is more of a pain in the ass than I'm imagining.

  3. Peer review takes much effort and you usually don't get much reward for it (most often your comments are anonymous). Usually it takes me a full day to do a proper review. Given the constraints on your time, this can be a considerable sacrifice. However it is critical to the practice of science. Of course, you can turn it down, but those folks are basically freeloading off the efforts of others. They get the benefits (peer reviewed papers) without cost (performing peer reviews).

  4. Intentionally holding someone's work in limbo is unacceptable and only adds to the problem.

    Here is what I think is a *much* worse problem with peer review. You review a paper (on time!), provide detailed comments, and deem the paper unacceptable as written (or bogus generally). Then it shows up in your mailbox again, from a different journal, with no changes. Or worse, it shows up in print in another journal with no changes. This is extremely frustrating and undermines the utility of peer review.

    I suggest that authors should be required to submit previous reviews to any journal if they are trying the same paper in an alternate venue. This would cut down on a lot of wasted time and should improve the quality of the science.

    Reviewers are not the problem in many cases; the authors (who are admittedly under pressure to publish for many reasons) have a huge role in making the system function.

  5. NSF now considers "broader impact" in awarding grants. Most claims made in this category ("mentored 15 minority undergrads") are hard to check, but reviewer performance data could be sent by journals directly to NSF and given to grant panels. This would be a greater incentive than faster turnaround on papers.

    Ford Denison
    "this week in evolution"