Scoring reading parameters: An inter-rater reliability study using the MNREAD chart.

Karthikeyan Baskaran, Antonio Filipe Macedo, Yingchen He, Laura Hernandez-Moreno, Tatiana Queirós, J. Stephen Mansfield, Aurélie Calabrèse
Published in: PLoS ONE (2019)
For maximum reading speed (MRS), inter-rater reliability is excellent, even allowing for the noisy and/or incomplete data often collected from low-vision individuals. For critical print size (CPS), inter-rater reliability is lower, which may be problematic, for instance, in multisite investigations or follow-up examinations. The nonlinear mixed-effects (NLME) method showed better agreement with the raters than the SDev method for both reading parameters. Establishing consensus guidelines for handling ambiguous curves may help improve reliability. While the exact definition of CPS should be chosen on a case-by-case basis depending on the clinician's or researcher's goals, the evidence suggests that estimating CPS as the smallest print size sustaining about 80% of MRS would increase inter-rater reliability.
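The 80%-of-MRS rule can be illustrated with a minimal sketch. This is a hypothetical simplification for intuition only, not the NLME or SDev method studied in the paper: given reading speeds measured at a series of print sizes, it takes MRS as the peak speed and CPS as the smallest print size whose speed still reaches the chosen fraction of MRS. The function name, the example data, and the direct use of the raw peak (rather than a fitted curve) are all assumptions.

```python
# Hypothetical sketch of the "CPS = smallest print size sustaining ~80% of MRS" rule.
# Real MNREAD analyses typically fit a curve (e.g., NLME) before deriving these values.

def estimate_mrs_cps(print_sizes, speeds, threshold=0.8):
    """Return (MRS, CPS).

    MRS: peak reading speed across all measured print sizes.
    CPS: smallest print size whose speed is at least `threshold` * MRS.
    `print_sizes` are in logMAR (larger value = larger print);
    `speeds` are the corresponding reading speeds (words/min).
    """
    mrs = max(speeds)
    # Print sizes that sustain near-maximum reading speed.
    sustained = [ps for ps, sp in zip(print_sizes, speeds) if sp >= threshold * mrs]
    cps = min(sustained)  # smallest such print size
    return mrs, cps

# Illustrative (made-up) MNREAD-like data: speed drops off at small print sizes.
sizes = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
speeds = [150, 160, 158, 155, 120, 60]
mrs, cps = estimate_mrs_cps(sizes, speeds)
# With threshold 0.8, speeds of at least 128 wpm occur down to 0.4 logMAR,
# so mrs == 160 and cps == 0.4.
```

In practice the threshold itself is one of the ambiguous choices the abstract refers to; the evidence cited simply favors a value near 0.8 for inter-rater agreement.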