
Construct validity of multiple mini interviews - Investigating the role of stations, skills, and raters using Bayesian G-theory.

Simon M Breil, Boris Forthmann, Anike Hertel-Waszak, Helmut Ahrens, Britta Brouwer, Eva Schönefeld, Bernhard Marschall, Mitja D Back
Published in: Medical Teacher (2019)
Background: One popular procedure in the medical student selection process is the multiple mini-interview (MMI), which is designed to assess social skills (e.g., empathy) by means of brief interview and role-play stations. However, it remains unclear whether MMIs reliably measure the desired social skills or rather general performance differences that do not depend on specific social skills. Here, we provide a detailed investigation into the construct validity of MMIs, including the identification and quantification of performance facets (social skill-specific performance, station-specific performance, general performance) and their relations with other selection measures.

Methods: We used data from three MMI samples (N = 376 applicants, 144 raters) that included six interview and role-play stations and multiple assessed social skills.

Results: Bayesian generalizability analyses show that the largest amount of reliable MMI variance was accounted for by station-specific and general performance differences between applicants. Furthermore, there were low or no correlations with other selection measures.

Discussion: Our findings suggest that MMI ratings are less social skill-specific than originally conceptualized and are driven more by general performance differences (across and within stations). Future research should focus on the development of skill-specific MMI stations and on behavioral analyses of the extent to which performance differences are based on desirable skills versus undesired aspects.
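To illustrate the kind of analysis described above, the sketch below shows a minimal Bayesian variance-components (G-theory style) model for MMI ratings. It is not the authors' model; it assumes a hypothetical long-format DataFrame `df` with integer-coded columns 'applicant', 'station', 'skill', 'rater' and a numeric 'rating' column, and decomposes rating variance into general, station-specific, skill-specific, rater, and residual components.

```python
# Hedged sketch: Bayesian variance decomposition of MMI ratings (assumed data layout).
import pymc as pm

def build_mmi_g_model(df):
    # Integer index arrays for each facet of the design (assumed column names).
    a = df["applicant"].to_numpy()
    s = df["station"].to_numpy()
    k = df["skill"].to_numpy()
    r = df["rater"].to_numpy()
    y = df["rating"].to_numpy()
    n_a, n_s, n_k, n_r = a.max() + 1, s.max() + 1, k.max() + 1, r.max() + 1

    with pm.Model() as model:
        # Standard deviations of the variance components.
        sd_person = pm.HalfNormal("sd_person", 1.0)        # general performance
        sd_p_station = pm.HalfNormal("sd_p_station", 1.0)  # station-specific performance
        sd_p_skill = pm.HalfNormal("sd_p_skill", 1.0)      # skill-specific performance
        sd_rater = pm.HalfNormal("sd_rater", 1.0)          # rater leniency/severity
        sd_resid = pm.HalfNormal("sd_resid", 1.0)          # residual error

        # Random effects for each facet and person-by-facet interaction.
        u_person = pm.Normal("u_person", 0.0, sd_person, shape=n_a)
        u_p_station = pm.Normal("u_p_station", 0.0, sd_p_station, shape=(n_a, n_s))
        u_p_skill = pm.Normal("u_p_skill", 0.0, sd_p_skill, shape=(n_a, n_k))
        u_rater = pm.Normal("u_rater", 0.0, sd_rater, shape=n_r)

        intercept = pm.Normal("intercept", 0.0, 2.0)
        mu = intercept + u_person[a] + u_p_station[a, s] + u_p_skill[a, k] + u_rater[r]
        pm.Normal("rating", mu, sd_resid, observed=y)
    return model

# Posterior variance proportions (e.g., the share of station-specific variance) can be
# computed from the sampled standard deviations, e.g.
#   sd_p_station**2 / (sd_person**2 + sd_p_station**2 + sd_p_skill**2
#                      + sd_rater**2 + sd_resid**2)
```

In such a decomposition, a large person-by-station component relative to the person-by-skill component would correspond to the pattern reported in the abstract: ratings reflect station-specific and general performance more than skill-specific performance.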
Keyphrases
  • healthcare
  • medical students