A joint fairness model with applications to risk predictions for underrepresented populations.
Hyungrok Do, Shinjini Nandi, Preston Putzel, Padhraic Smyth, Judy Zhong. Published in: Biometrics (2022)
In data collection for predictive modeling, underrepresentation of certain groups, based on gender, race/ethnicity, or age, may yield less accurate predictions for these groups. Recently, this issue of fairness in prediction has attracted significant attention, as data-driven models are increasingly used for consequential decision-making. Existing methods for achieving fairness in the machine learning literature typically build a single prediction model in a manner that encourages fair prediction performance for all groups. These approaches have two major limitations: (i) fairness is often achieved by compromising accuracy for some groups; and (ii) the underlying relationship between the dependent and independent variables may not be the same across groups. We propose a joint fairness model (JFM) approach that estimates group-specific logistic regression classifiers for binary outcomes using a joint modeling objective function that incorporates fairness criteria for prediction. We introduce an accelerated smoothing proximal gradient algorithm to solve the convex objective function and present the key asymptotic properties of the JFM estimates. Through simulations, we demonstrate that the JFM achieves good prediction performance and across-group parity, in comparison with the single fairness model, the group-separate model, and the group-ignorant model, especially when the minority group's sample size is small. Finally, we demonstrate the utility of the JFM in a real-world example, obtaining fair risk predictions for underrepresented older patients diagnosed with coronavirus disease 2019 (COVID-19).
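The abstract does not reproduce the JFM objective itself, but its general structure (group-specific logistic losses combined with a convex, non-smooth fairness criterion, the kind of objective a smoothing proximal gradient solver is built for) can be illustrated with a rough sketch. The Python snippet below is purely illustrative: the function name joint_fairness_objective, the tuning parameters lam_sparse and lam_fair, and the fused-lasso-style surrogate for the fairness criterion are assumptions made for exposition, not the penalty actually used in the paper.

    import numpy as np

    def joint_fairness_objective(betas, X_groups, y_groups, lam_sparse=0.1, lam_fair=1.0):
        # Sum of per-group logistic negative log-likelihoods, averaged within each
        # group so a small minority group is not swamped by larger groups.
        total = 0.0
        for beta, X, y in zip(betas, X_groups, y_groups):
            z = X @ beta
            total += np.mean(np.logaddexp(0.0, z) - y * z)  # stable log(1 + e^z) - y*z
        # L1 penalty encouraging sparse group-specific coefficient vectors.
        total += lam_sparse * sum(np.sum(np.abs(b)) for b in betas)
        # Hypothetical fairness surrogate: a fused-lasso-style penalty on pairwise
        # differences between group-specific coefficients, shrinking the group
        # models toward each other unless the data support a difference.
        K = len(betas)
        for i in range(K):
            for j in range(i + 1, K):
                total += lam_fair * np.sum(np.abs(betas[i] - betas[j]))
        return total

    # Toy usage: two groups, the second one small (underrepresented).
    rng = np.random.default_rng(0)
    X_groups = [rng.normal(size=(500, 5)), rng.normal(size=(40, 5))]
    y_groups = [rng.integers(0, 2, size=500), rng.integers(0, 2, size=40)]
    betas = [np.zeros(5), np.zeros(5)]
    print(joint_fairness_objective(betas, X_groups, y_groups))

The sketch only evaluates such an objective at a candidate set of coefficients; in the method described in the abstract, a convex but non-smooth objective of this general kind is minimized with an accelerated smoothing proximal gradient algorithm.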