Friday, July 2, 2004

Trahison des Clercs - AntiPatterns of Peer Review

Defects in the reviewing process can be attributed to poor editing practice, to poor reviewing practice, or to more complex interaction effects within the so-called peer review system.

Myth of the super-reviewer

A thorough review of a complex paper would require deep familiarity with prior work - not just the work referenced by the authors, but also relevant work ignored by them. It would require reasonable awareness of the authors' own work, to check that they were not merely rehashing previous stuff, and to check that they had addressed any objections or other issues arising from their own previously published work. It would require confident mastery of the notations and techniques (such as statistics) deployed in the paper. A super-reviewer, if one existed, would be able to detect inconsistencies and gaps in the argument, to surface conflicts with established terminologies or theories, and to anticipate a wide range of objections from various sources.

The myth of the super-reviewer distorts the behaviour of editors and programme chairs (who expect each reviewer to cover all aspects of a paper) and of reviewers themselves (who may feel unable to reject a paper unless they can justify the rejection comprehensively).

Note that the software industry, for all its faults, has developed techniques for peer review that overcome some of these problems, and allow good use to be made of the actual knowledge and capabilities of a team of reviewers. Sadly, these techniques are not widely enough practised, even by software engineers.

Positive thinking

Perhaps aware of the bad feelings that accompany a destructive review, many reviewers attempt to identify the positive aspects of the paper as well as the areas for improvement. These attempts are worthy, and usually come from a desire to be helpful and encouraging, but they may create the false impression that the paper is better than it actually is, and reduce the force of genuine intellectual objections.

Single-cycle review

In many situations there is only a single cycle of review. The revised paper doesn't go back to the (same) reviewers, and the editor cannot check in detail whether the revisions meet the reviewers' requirements. There is therefore little incentive for authors to engage seriously with the reviewers' comments.

My preference as a reviewer is to provide a detailed account of the shortcomings and imperfections of a paper, to indicate what would have to be improved before publication, and to offer some further suggestions. It is galling to see my reviewing work wasted when the paper is subsequently published without any sign that my requirements or suggestions have been heeded.

Casual review

As reviewers learn the imperfections of the system, it becomes increasingly difficult to maintain a conscientious stance. If detailed comments are likely to be ignored, and if conditional acceptance is converted by overworked editors into unconditional acceptance, then perhaps it's not worth the effort to review a paper properly.

Automatic review

The quickest way to review papers is to apply a pre-programmed set of responses.

  • If it's hard going, then it might be profound. I don't have the time to work out whether it makes any sense, or to find enough flaws to justify rejecting it, so I'd better accept it.
  • The paper goes outside my specialist area, so I don't have sufficient grounds for rejecting it.
  • The paper doesn't conform to my preconceptions about subject or method, so it must be rubbish.
  • The paper is obscure, so it must be rubbish.
  • I agree/disagree with the findings, so the paper must be good/bad.

Tactical review

Reviewers may also allow their personal agendas to influence their reviews.

  • If this gets published, it will make my work look good in comparison.
  • If this gets published, it will undermine my work. 
  • If this gets published, it will create a precedent for future publication in this area. 
  • If this gets published, it will foreclose further publication in this area.
  • I'm outraged that this research got funding in the first place. I hope that no more funding gets wasted on this stuff. So although the paper itself is okay, I shall reject it, because I reject the research programme that sponsored it.

Of course, reviewers are unlikely to acknowledge such thoughts, even to themselves. 


Related post: Peer Review in the Dock (August 2008)

Originally published at http://www.veryard.com/kmoi/dryasdust.htm
