Validity and Reproducibility of the Peer Assessment Rating Index Scored on Digital Models Using a Software Compared with Traditional Manual Scoring.

Arwa Gera, Shadi Gera, Michel Dalstra, Paolo Maria Cattaneo, Marie A. Cornelis
Published in: Journal of Clinical Medicine (2021)
The aim of this study was to assess the validity and reproducibility of digitally scoring the Peer Assessment Rating (PAR) index and its components with software, compared with conventional manual scoring on printed model equivalents. The PAR index was scored on 15 cases at pre- and post-treatment stages by two operators using two methods: digitally, on direct digital models with Ortho Analyzer software; and manually, on printed model equivalents with a digital caliper. All measurements were repeated at a one-week interval. Paired-sample t-tests were used to compare the PAR score and its components between the two methods and between raters. Intra-class correlation coefficients (ICCs) were used to assess intra- and inter-rater reproducibility, the error of the method was calculated, and agreement between the two methods was analyzed with Bland-Altman plots. There were no significant differences in mean PAR scores between the two methods or between the two raters. ICCs for intra- and inter-rater reproducibility were excellent (≥0.95). All error-of-the-method values were smaller than the associated minimum standard deviation. Bland-Altman plots confirmed the validity of the measurements. PAR scoring on digital models showed excellent validity and reproducibility compared with manual scoring on printed model equivalents using a digital caliper.
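
The statistical pipeline described in the abstract (paired t-tests, ICCs, error of the method, Bland-Altman agreement) is straightforward to reproduce. Below is a minimal Python sketch using synthetic scores in place of the study data; the helper names icc2_1 and bland_altman are illustrative, the ICC is computed as the standard two-way random-effects, absolute-agreement ICC(2,1), and "error of the method" is assumed to mean Dahlberg's formula, since the abstract does not specify it.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt


def icc2_1(Y):
    # Two-way random-effects, absolute-agreement, single-rater ICC(2,1);
    # Y is an (n subjects x k raters/methods) array of ratings.
    n, k = Y.shape
    grand = Y.mean()
    rows = Y.mean(axis=1)   # per-subject means
    cols = Y.mean(axis=0)   # per-rater means
    ms_r = k * np.sum((rows - grand) ** 2) / (n - 1)
    ms_c = n * np.sum((cols - grand) ** 2) / (k - 1)
    ms_e = np.sum((Y - rows[:, None] - cols[None, :] + grand) ** 2) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)


def bland_altman(a, b, ax):
    # Scatter of pairwise means vs. differences, with bias and 1.96 SD limits.
    mean, diff = (a + b) / 2, a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    ax.scatter(mean, diff)
    for y in (bias, bias + 1.96 * sd, bias - 1.96 * sd):
        ax.axhline(y, linestyle="--")


# Synthetic stand-in for 15 paired PAR scores (hypothetical values, not study data).
rng = np.random.default_rng(0)
manual = rng.normal(25, 8, 15)
digital = manual + rng.normal(0, 0.5, 15)

t, p = stats.ttest_rel(digital, manual)              # method comparison
icc = icc2_1(np.column_stack([digital, manual]))     # inter-method agreement
d = digital - manual
dahlberg = np.sqrt(np.sum(d ** 2) / (2 * len(d)))    # Dahlberg error of the method
print(f"paired t: t={t:.2f}, p={p:.3f}; ICC(2,1)={icc:.3f}; Dahlberg={dahlberg:.3f}")

fig, ax = plt.subplots()
bland_altman(digital, manual, ax)
ax.set(xlabel="Mean of methods", ylabel="Digital - Manual (PAR points)")
plt.show()

Substituting the real paired PAR scores for the synthetic arrays would yield the same set of comparisons reported in the abstract; the inter-rater ICC follows by stacking the two raters' scores instead of the two methods'.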