Exploring pooled analysis of pretested items to monitor the performance of medical students exposed to different curriculum designs.
Pedro Tadao Hamamoto Filho, Pedro Luiz Toledo de Arruda Lourenção, Joélcio Francisco Abbade, Dario Cecílio Fernandes, Jacqueline Teixeira Caramori, Angélica Maria Bicudo
Published in: PLoS ONE (2021)
Several methods have been proposed for analyzing differences between test scores, such as comparing mean scores, cumulative deviation, and mixed-effects models. Here, we explore the pooled analysis of pretested Progress Test items to monitor the performance of first-year medical students who were exposed to a new curriculum design. This was a cross-sectional study of students in the first year of a medical program who participated in the annual interinstitutional Progress Tests from 2013 to 2019. We compared the performance of first-year students on the 2019 test with that of first-year students who had encountered the same items on the 2013-2018 tests. For each item, we calculated the odds ratio (OR) with a 95% confidence interval (CI); we then pooled the items within each content area using a fixed-effects meta-analysis. In all, we used 63 items, divided into basic sciences, internal medicine, pediatrics, surgery, obstetrics and gynecology, and public health. Significant differences between groups were found in basic sciences (OR = 1.172 [95% CI 1.005-1.366], p = 0.043) and public health (OR = 1.54 [95% CI 1.25-1.897], p < 0.001), which may reflect the characteristics of the new curriculum. Thus, pooled analysis of pretested items may provide indicators of performance differences. This method may complement the analysis of score differences on benchmark assessments.
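The abstract does not specify the exact estimators behind the per-item odds ratios or the pooling step, so the sketch below is only an illustration of the general approach: it assumes per-item 2x2 tables of correct/incorrect counts (2019 cohort vs. earlier cohorts), Woolf (log) confidence intervals, and inverse-variance fixed-effect pooling within a content area. The counts, function names, and the choice of Python are assumptions, not taken from the paper.

```python
import math

def item_or(a, b, c, d):
    """Odds ratio and 95% CI for one item.
    a/b = correct/incorrect counts in the new-curriculum (2019) cohort,
    c/d = correct/incorrect counts in the earlier (2013-2018) cohorts.
    Uses the Woolf (log-OR) method; assumes no zero cells."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return log_or, se, ci

def fixed_effect_pool(tables):
    """Inverse-variance fixed-effect meta-analysis over a list of (a, b, c, d) tables,
    e.g. all items belonging to one content area."""
    stats = [item_or(*t) for t in tables]
    weights = [1 / se ** 2 for _, se, _ in stats]
    pooled_log_or = sum(w * lo for w, (lo, _, _) in zip(weights, stats)) / sum(weights)
    pooled_se = 1 / math.sqrt(sum(weights))
    z = pooled_log_or / pooled_se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value, normal approximation
    ci = (math.exp(pooled_log_or - 1.96 * pooled_se),
          math.exp(pooled_log_or + 1.96 * pooled_se))
    return math.exp(pooled_log_or), ci, p

# Hypothetical counts for three items in one content area (not data from the study)
basic_science_items = [(80, 40, 300, 200), (95, 25, 350, 150), (60, 60, 250, 250)]
pooled_or, pooled_ci, p_value = fixed_effect_pool(basic_science_items)
print(f"Pooled OR = {pooled_or:.3f}, 95% CI {pooled_ci[0]:.3f}-{pooled_ci[1]:.3f}, p = {p_value:.3f}")
```

Run on such per-area collections of items, this yields a pooled OR, its 95% CI, and a two-sided p-value in the same form as the per-area results reported above.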