The ability of different peer review procedures to flag problematic publications.

Serge P. J. M. Horbach, Willem Halffman
Published in: Scientometrics (2018)
There is mounting worry about erroneous and outright fraudulent research being published in the scientific literature. Although peer review's ability to filter out such publications is contentious, several peer review innovations attempt to do just that. However, there is very little systematic evidence documenting the ability of different review procedures to flag problematic publications. In this article, we use survey data on peer review in a wide range of journals to compare the retraction rates of specific review procedures, using the Retraction Watch database. We were able to identify which peer review procedures were used since 2000 for 361 journals, publishing a total of 833,172 articles, of which 670 were retracted. After addressing the dual character of retractions, signalling both a failure to identify problems prior to publication and a willingness to correct mistakes, we empirically assess review procedures. With considerable conceptual caveats, we were able to identify peer review procedures that seem better able to detect problematic research than others. Results were verified for disciplinary differences and variation between reasons for retraction. This leads to informed recommendations for journal editors about the strengths and weaknesses of specific peer review procedures, allowing them to select review procedures that address the issues most relevant to their field.