Madhusudan Katti over at Reconciliation Ecology picked up the thread on slow peer review. He makes some interesting comments, particularly regarding the time constraints at teaching-oriented institutions. I've thought a bit more about the issue and have a few more points to make regarding possible solutions.
What are the alternatives?
1. Do nothing.
-Is there really a problem? One could argue that time-to-publication is faster now than ever, particularly with online submission. Being relatively new to the science publication business, I don't have any insight into what it was like back in the day. Perhaps some senior researchers could chime in on this.
2. Financial incentives
-Some have suggested paying reviewers. I think this is a non-starter. Most scientific societies are already strapped for cash, and payment would introduce a whole new layer of potential conflicts of interest. Besides, aren't we all being paid already through our institutional salaries?
3. Service Counts
-Madhu suggests increasing the weight of service (including peer review) in the tenure review process. Great idea. However, someone (a tenured faculty member who had served on many review committees) once told me that service counts for only ~10% of the tenure decision. Someone else said that service counts for nothing! Whatever its actual weight, service is part of an institution's culture, and institutional cultures are notoriously slow to change.
4. Time Penalties
-The original PLoS article suggested time penalties for late reviews. This is probably unworkable as well: slow does not equate to poor quality. James Crow wrote a Perspective in Genetics about peer review:
"Sewall Wright was a particularly thorough reviewer. When he received a manuscript for review, he typically dropped other activities and went over the copy in great detail. Usually this involved his redoing all the calculations and reanalyzing the data. Alex and I were convinced that he was spending too much time on other people's data, at the price of not getting his own more important work done. For that reason, we employed him sparingly, only where his unique insights were essential.
A review that stands out in my mind involved a study of quantitative traits in rodents. As usual, Wright reanalyzed all the data. He suggested a large number of changes, the most significant of which was a scale transformation of the data, which not only greatly simplified the interpretation but also led to the opposite conclusion. The author made almost all of the suggested alterations and obligingly reversed the conclusion."
I suppose that by Hauser and Fehr's standards, Wright's papers would never be published.
5. Reviewer Matching
-An interesting suggestion from Gavin Sherlock of Stanford is to match reviewers to authors by their typical review speed.
"An alternative system, which doesn't require holding articles in editorial limbo (which seems not to be in the editorial spirit), is to choose the reviewers of the article based on the length of time those reviewers typically take, and the length of time that the author usually takes, matching them up. Thus, if you usually take 6 weeks on average to review a manuscript, your manuscripts will be sent to reviewers that usually take that long too. On the other hand, if you normally review within a few days, your manuscripts will be matched up with suitably rapid reviewers."
This is an interesting option that deserves consideration, but it might prove unworkable in the long run, since it requires maintaining extensive databases of reviewer habits.
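For the curious, Sherlock's matching idea can be sketched in a few lines of code. This is purely an illustration of the concept, not any journal's actual system: the names and turnaround numbers below are invented, and the "database of reviewer habits" is reduced to a simple dictionary of average review times in days.

```python
# Hypothetical sketch of the reviewer-matching idea: pair an author
# with the reviewers whose average turnaround is closest to the
# author's own. All names and numbers are invented for illustration.

# Average review turnaround in days, per person (assumed data).
turnaround = {
    "author_a": 42,  # a slow author: ~6 weeks per review
    "rev_1": 5,
    "rev_2": 40,
    "rev_3": 45,
    "rev_4": 10,
}

def match_reviewers(author, pool, history, k=2):
    """Return the k reviewers whose average turnaround is closest
    to the author's own average turnaround."""
    target = history[author]
    return sorted(pool, key=lambda r: abs(history[r] - target))[:k]

# The slow author (42 days) is paired with the similarly slow reviewers.
print(match_reviewers("author_a", ["rev_1", "rev_2", "rev_3", "rev_4"], turnaround))
# → ['rev_2', 'rev_3']
```

The logic is trivial; the hard part, as noted above, is gathering and maintaining the turnaround data in the first place.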
6. Greater Editorial Action
-Perhaps editors could take more responsibility for review timing. I don't think this has much merit: editors are already stretched thin, and chasing reviewers is one of the worst aspects of the job.
7. Open Access Reviews
-Nature attempted an experiment in Open Peer Review. While it was not an unqualified success, perhaps it simply takes time to overcome the inertia of peer review culture. Of all the ideas I've heard, I think this one probably has the most merit. It would further democratize science as well as make the review process a bit more transparent.