Psychometric Equivalence of the Computerized and Original Halstead Category Test Using a Matched Archival Sample.
William F. Goette, Andrew L. Schmitt, Janice Nici. Published in: Assessment (2019)
Objective: To investigate the equivalence of several psychometric measures between the traditional Halstead Category Test (HCT-Original Version [OV]) and the computer-based Halstead Category Test (HCT-Computerized Version [CV]).

Method: Data were drawn from a diagnostically heterogeneous archival sample of 211 adults administered the HCT either by computer (n = 105) or by the traditional projection cabinet (n = 106) as part of a neuropsychological evaluation. Groups were matched on gender, race, education, Full Scale Intelligence Quotient, and Global Neuropsychological Deficit Score. Confirmatory factor analysis was used to examine structural equivalence; score, variability, and reliability equivalence were also examined, and differential item and test functioning were evaluated under a Rasch model.

Results: A factor structure identified in prior research on the HCT-OV fit the HCT-CV scores adequately: χ2(4) = 8.83, p = .07; root mean square error of approximation = 0.10 [0.00, 0.20]; standardized root mean residual = 0.03; comparative fit index = 0.99. Total scores and the variability of subtest scores were not consistently equivalent between the two administration groups. Reliability estimates were, however, similar and adequate for clinical practice: 0.96 for the HCT-OV and 0.97 for the HCT-CV. About 17% of items showed possible differential item functioning, though only three of these items reached statistical significance. Differential test functioning revealed expected total score differences of less than 1% between versions.

Conclusion: These results suggest that the HCT-CV functions similarly to the HCT-OV, with negligible differences in expected total scores between versions. The HCT-CV demonstrated good psychometric properties, particularly reliability and construct validity consistent with previous literature. Further study is needed to generalize these findings and to further examine the equivalence of validity evidence between versions.