AI pitfalls and what not to do: Mitigating bias in AI.
Judy Wawira Gichoya, Kaesha Thomas, Leo Anthony Celi, Nabile Safdar, Imon Banerjee, John D Banja, Laleh Seyyed-Kalantari, Hari Trivedi, Saptarshi Purkayastha. Published in: The British Journal of Radiology (2023)
Various forms of artificial intelligence applications are being deployed across healthcare systems. As the use of these applications increases, we are learning about the failure modes of these models and how they can perpetuate bias. With these new lessons, we need to prioritize bias evaluation and mitigation for radiology applications, while not ignoring changes in the larger enterprise AI deployment that may have downstream effects on model performance. In this paper, we provide an updated review of known pitfalls that cause AI bias and discuss strategies for mitigating these biases within the context of AI deployment in the larger healthcare enterprise. We frame these pitfalls within the broader AI lifecycle, from problem definition through dataset selection and curation to model training and deployment, emphasizing that bias exists across a spectrum and is a sequela of a combination of both human and machine factors.
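The call above to prioritize bias evaluation can be made concrete with a per-subgroup performance audit, one common starting point in the lifecycle described. The following is a minimal, self-contained sketch; the group labels, records, and choice of sensitivity as the metric are hypothetical illustrations, not taken from the paper:

```python
# Minimal sketch of a subgroup bias audit: compare true-positive rate
# (sensitivity) across demographic groups. Data below are hypothetical.
from collections import defaultdict

def subgroup_tpr(records):
    """Per-group sensitivity from (group, y_true, y_pred) tuples."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

# Hypothetical (group, ground truth, model prediction) records
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]

rates = subgroup_tpr(records)
# A large gap between groups flags a potential fairness problem
gap = max(rates.values()) - min(rates.values())
```

In this toy example the model detects disease in group A at twice the rate of group B, a disparity that a single aggregate accuracy figure would hide; the same pattern generalizes to other metrics (AUC, false-positive rate) audited per subgroup.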