Revisiting the Effect of Varying the Number of Response Alternatives in Clinical Assessment: Evidence From Measuring ADHD Symptoms.
Dexin Shi, E. Rebekah Siceloff, Rebeca E. Castellanos, Rachel M. Bridges, Zhehan Jiang, Kate Flory, Kari Benson. Published in: Assessment (2020)
This study illustrated the effect of varying the number of response alternatives in clinical assessment using a within-participant, repeated-measures approach. Participants reported the presence of current attention-deficit/hyperactivity disorder symptoms using both a binary and a polytomous (4-point) rating scale across two counterbalanced administrations of the Current Symptoms Scale (CSS). Psychometric properties of the CSS were examined using (a) self-reported binary ratings, (b) self-reported 4-point ratings obtained from each administration of the CSS, and (c) artificially dichotomized responses derived from the observed 4-point ratings. Under the same ordinal factor analysis model, results indicated that the number of response alternatives affected item parameter estimates, standard errors, goodness-of-fit indices, individuals' test scores, and the reliability of the test scores. With fewer response alternatives, the precision of the measurement decreased, as did the power of the goodness-of-fit indices to detect model misfit. These findings add to recent research advocating for the inclusion of a larger number of response alternatives in the development of clinical assessments, and further suggest that researchers should be cautious about reducing the number of response categories in data analysis.
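The artificial dichotomization described in condition (c) can be sketched as follows. This is a hypothetical illustration, not the authors' code: it assumes 4-point ratings coded 0-3 and a cutoff that counts ratings of 2 or higher as symptom present, a common convention for symptom-rating scales, though the study's actual cutoff is not stated in the abstract.

```python
# Hypothetical sketch: collapsing 4-point symptom ratings (0-3) to binary.
# Assumption: ratings >= 2 ("often"/"very often") count as symptom present.
def dichotomize(ratings, cutoff=2):
    """Map polytomous ratings to 0/1 presence indicators."""
    return [1 if r >= cutoff else 0 for r in ratings]

# Example: six item responses from one participant.
responses = [0, 1, 2, 3, 1, 3]
print(dichotomize(responses))  # [0, 0, 1, 1, 0, 1]
```

Collapsing categories this way discards ordinal information within each item, which is consistent with the study's finding that fewer response alternatives reduce measurement precision.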