Professionalism and clinical short answer question marking with machine learning.
Antoinette Lam, Lydia Lam, Charlotte Blacketer, Roger Parnis, Kyle Franke, Morganne Wagner, David Wang, Yiran Tan, Lauren Oakden-Rayner, Steve Gallagher, Seth W Perry, Julio Licinio, Ian Symonds, Josephine Thomas, Paul Duggan, Stephen Bacchi. Published in: Internal Medicine Journal (2022)
Machine learning may assist in medical student evaluation. This study involved scoring short answer questions administered at three centres. Bidirectional encoder representations from transformers (BERT) were particularly effective for professionalism question scoring (accuracy ranging from 41.6% to 92.5%). Machine learning achieved lower classification accuracy on 3-mark professionalism questions than on clinical questions (P < 0.05). The role of machine learning in medical professionalism evaluation warrants further investigation.
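The scoring task described can be framed as supervised text classification, in which each free-text answer is mapped to a mark. The sketch below is a hypothetical illustration of that framing with a pretrained BERT model from the Hugging Face transformers library; the model name, the 0 to 3 mark label scheme, and the example answer are assumptions rather than the authors' actual pipeline, and the model would require fine-tuning on marked answers before its predictions were meaningful.

```python
# Minimal sketch: short answer scoring as BERT-based text classification.
# The checkpoint, label scheme, and example response are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed base model; the study's checkpoint is not specified here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Treat each possible mark (0, 1, 2 or 3) as a separate class label.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=4)
model.eval()

# Hypothetical student response to a 3-mark professionalism question.
answer = "I would apologise to the patient, document the error, and notify my supervisor."

inputs = tokenizer(answer, truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Without fine-tuning on marked answers, this prediction is essentially random;
# it only demonstrates the classification framing.
predicted_mark = int(torch.argmax(logits, dim=-1))
print(f"Predicted mark: {predicted_mark} / 3")
```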