Development and validation of an optimized prediction of mortality for candidates awaiting liver transplantation.
Dimitris Bertsimas, Jerry Kung, Nikolaos Trichakis, Yuchen Wang, Ryutaro Hirose, Parsia A. Vagefi. Published in: American Journal of Transplantation: official journal of the American Society of Transplantation and the American Society of Transplant Surgeons (2018)
Since 2002, the Model for End-Stage Liver Disease (MELD) has been used to rank liver transplant candidates. However, despite numerous revisions, MELD-based allocation still does not provide equitable access for all waitlisted candidates. An optimized prediction of mortality (OPOM) was developed (http://www.opom.online) using machine-learning optimal classification tree models trained to predict a candidate's 3-month waitlist mortality or removal, with the Standard Transplant Analysis and Research (STAR) dataset. The Liver Simulated Allocation Model (LSAM) was then used to compare OPOM to MELD-based allocation. Out-of-sample area under the curve (AUC) was also calculated for candidate groups of increasing disease severity. In LSAM analysis, OPOM allocation reduced mortality relative to MELD by an average of 417.96 (406.8-428.4) deaths every year. Improved survival was noted across all candidate demographics, diagnoses, and geographic regions. OPOM delivered a substantially higher AUC across all disease severity groups. OPOM prioritizes candidates for liver transplantation by disease severity more accurately and objectively, allowing for more equitable allocation of livers and a significant number of additional lives saved every year. These data demonstrate the potential of machine learning to help guide clinical practice, and potentially national policy.
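The abstract's headline comparison rests on out-of-sample AUC. As a point of reference, AUC can be computed with the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive case (here, a candidate who dies or is removed within 3 months) receives a higher risk score than a randomly chosen negative case. The sketch below uses entirely synthetic scores and labels, not STAR data or the OPOM model:

```python
def auc(scores, labels):
    """Mann-Whitney formulation of AUC: the fraction of
    (positive, negative) pairs in which the positive case
    outranks the negative one, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 3-month risk scores for six waitlisted candidates
# (synthetic illustration only); label 1 = death/removal within 3 months.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc(scores, labels), 3))  # → 0.889
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect discrimination, so a higher out-of-sample AUC across disease severity groups means the score orders candidates by true short-term risk more reliably.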