Considerations for addressing bias in artificial intelligence for health equity.
Michael D. Abramoff, Michelle E. Tarver, Nilsa Loyo-Berrios, Sylvia Trujillo, Danton Char, Ziad Obermeyer, Malvina B. Eydelman, William H. Maisel
Published in: npj Digital Medicine (2023)
Health equity is a primary goal of healthcare stakeholders: patients and their advocacy groups, clinicians, other providers and their professional societies, bioethicists, payors and value-based care organizations, regulatory agencies, legislators, and creators of artificial intelligence/machine learning (AI/ML)-enabled medical devices. Inequitable access to diagnosis and treatment may be reduced by new digital health technologies, especially AI/ML, but these technologies may also exacerbate disparities, depending on how bias is addressed. We propose an expanded Total Product Lifecycle (TPLC) framework for healthcare AI/ML that describes the sources and impacts of undesirable bias in AI/ML systems in each phase, how these can be analyzed using appropriate metrics, and how they can potentially be mitigated. The goal of these "Considerations" is to educate stakeholders on how potential AI/ML bias may impact healthcare outcomes and how to identify and mitigate inequities, and to initiate a discussion among stakeholders on these issues, in order to ensure health equity along the expanded AI/ML TPLC framework and, ultimately, better health outcomes for all.
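The abstract notes that bias can be analyzed with appropriate metrics at each phase of the expanded TPLC. As a minimal illustrative sketch only, not the authors' method or any metric prescribed by the framework, the snippet below shows one common way such an analysis can be framed: comparing sensitivity and specificity of a hypothetical AI/ML diagnostic across demographic subgroups. The data, subgroup labels, and function name are assumptions introduced here for illustration.

```python
# Illustrative sketch (hypothetical data): per-subgroup sensitivity and
# specificity of a binary AI/ML diagnostic. Not the authors' method; just
# one common way to surface performance disparities between subgroups.
from collections import defaultdict

def subgroup_performance(y_true, y_pred, groups):
    """Return {group: (sensitivity, specificity)} for binary labels (0/1)."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 1:
            counts[g]["tp" if pred == 1 else "fn"] += 1
        else:
            counts[g]["tn" if pred == 0 else "fp"] += 1
    result = {}
    for g, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else float("nan")
        spec = c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else float("nan")
        result[g] = (sens, spec)
    return result

# Hypothetical labels, model outputs, and subgroup membership.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

for g, (sens, spec) in subgroup_performance(y_true, y_pred, groups).items():
    print(f"group {g}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

In practice, such subgroup comparisons would be repeated at each TPLC phase, with metrics and subgroup definitions chosen to match the device's intended use and population.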
Keyphrases
- artificial intelligence
- healthcare
- machine learning
- big data
- deep learning
- public health
- health information
- mental health
- health promotion