A Comparative Study of Item Response Theory Models for Mixed Discrete-Continuous Responses.

Cengiz Zopluoglu, J. R. Lockwood
Published in: Journal of Intelligence (2024)
Language proficiency assessments are pivotal in educational and professional decision-making. With the integration of AI-driven technologies, these assessments increasingly incorporate item types, such as dictation tasks, that produce response features with a mixture of discrete and continuous distributions. This study evaluates novel measurement models tailored to these response features. Specifically, we evaluated the performance of the zero-and-one-inflated extensions of the Beta, Simplex, and Samejima's Continuous item response models and incorporated collateral information into the estimation using latent regression. Our findings show that while all models produced highly correlated item and person parameter estimates, the Beta item response model achieved superior out-of-sample predictive accuracy. A significant challenge, however, was the absence of established benchmarks for evaluating model and item fit for these novel item response models. Further research is needed to establish such benchmarks and to ensure the reliability and validity of these models in real-world applications.
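To make the modeling idea concrete, the following is a minimal sketch of the response likelihood for a zero-and-one-inflated Beta item response model: responses of exactly 0 or 1 get discrete point masses, and responses strictly inside (0, 1) follow a Beta density whose mean is a logistic function of the examinee's latent ability. The parameterization here (discrimination `disc`, difficulty `diff`, precision `phi`, inflation probabilities `p0` and `p1`) is an illustrative assumption, not the paper's exact specification.

```python
import math

def beta_logpdf(y, mu, phi):
    """Log-density of a Beta distribution parameterized by mean mu
    and precision phi (shape parameters a = mu*phi, b = (1-mu)*phi)."""
    a, b = mu * phi, (1.0 - mu) * phi
    return (math.lgamma(phi) - math.lgamma(a) - math.lgamma(b)
            + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y))

def zoib_loglik(y, theta, disc, diff, phi, p0, p1):
    """Log-likelihood of one response y in [0, 1] under a
    zero-and-one-inflated Beta IRT model (illustrative sketch)."""
    if y == 0.0:
        return math.log(p0)          # point mass at zero
    if y == 1.0:
        return math.log(p1)          # point mass at one
    # expected proportion-correct as a logistic function of ability
    mu = 1.0 / (1.0 + math.exp(-disc * (theta - diff)))
    return math.log(1.0 - p0 - p1) + beta_logpdf(y, mu, phi)
```

Summing `zoib_loglik` over items and examinees gives the joint log-likelihood that estimation routines (e.g., marginal maximum likelihood or Bayesian samplers) would maximize or sample from; the latent regression component described in the abstract would additionally model `theta` as a function of covariates.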
Keyphrases
  • decision making
  • machine learning
  • artificial intelligence