
Variable importance analysis with interpretable machine learning for fair risk prediction.

Yilin Ning, Siqi Li, Yih Yng Ng, Michael Yih Chong Chia, Han Nee Gan, Ling Tiah, Desmond Renhao Mao, Wei Ming Ng, Benjamin Sieu-Hon Leong, Nausheen Doctor, Marcus Eng Hock Ong, Nan Liu
Published in: PLOS Digital Health (2024)
Machine learning (ML) methods are increasingly used to assess variable importance, but such black-box models lack stability when sample sizes are limited and do not formally indicate non-important factors. The Shapley variable importance cloud (ShapleyVIC) addresses these limitations by assessing variable importance from an ensemble of regression models, which enhances robustness while maintaining interpretability, and by estimating the uncertainty of overall importance to formally test its significance. In a clinical study, ShapleyVIC reasonably identified important variables when random forest and XGBoost failed to, and it generally reproduced these findings in smaller subsamples (n = 2500 and 500) in which the statistical power of logistic regression was attenuated. Moreover, ShapleyVIC estimated a non-significant importance for race, justifying its exclusion from the final prediction model, in contrast to the race-dependent model produced by conventional stepwise model building. Hence, ShapleyVIC provides a robust and interpretable assessment of variable importance, with the potential to contribute to fairer clinical risk prediction.
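
The core idea, assessing importance across an ensemble of plausible models and attaching uncertainty so that non-important variables can be formally identified, can be illustrated with a minimal sketch. The code below is not the authors' implementation or the ShapleyVIC package API; the simulated dataset, ensemble size, and the use of permutation importance as the per-model importance measure are all assumptions made for illustration.

```python
# Conceptual sketch only: ensemble-based variable importance with uncertainty,
# in the spirit of ShapleyVIC. Not the ShapleyVIC package or the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2500, n_features=6, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Ensemble of near-equivalent logistic regressions fitted on bootstrap resamples.
n_models = 100
imp = np.empty((n_models, X.shape[1]))
for m in range(n_models):
    idx = rng.integers(0, len(y_tr), len(y_tr))
    model = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    # Permutation importance on held-out data; can be negative for irrelevant variables.
    pi = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                                n_repeats=10, random_state=m)
    imp[m] = pi.importances_mean

# Summarise overall importance across the model ensemble with 95% intervals;
# an interval covering zero suggests the variable is not important.
mean_imp = imp.mean(axis=0)
lo, hi = np.percentile(imp, [2.5, 97.5], axis=0)
for j in range(X.shape[1]):
    note = "" if lo[j] > 0 else "  (importance not significant)"
    print(f"x{j}: {mean_imp[j]:+.4f} [{lo[j]:+.4f}, {hi[j]:+.4f}]{note}")
```

In this sketch, a variable whose importance interval covers zero would be a candidate for exclusion, analogous to how the abstract describes excluding race from the final prediction model.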
Keyphrases
  • machine learning
  • risk assessment
  • big data