Auditing the AI auditors: A framework for evaluating fairness and bias in high-stakes AI predictive models.
Richard N. Landers, Tara S. Behrend. Published in: American Psychologist (2022)
Researchers, governments, ethics watchdogs, and the public are increasingly voicing concerns about unfairness and bias in artificial intelligence (AI)-based decision tools. Psychology's more than a century of research on the measurement of psychological traits and the prediction of human behavior can benefit such conversations, yet psychological researchers often find themselves excluded due to mismatches in terminology, values, and goals across disciplines. In the present paper, we begin to build a shared interdisciplinary understanding of AI fairness and bias by first presenting three major lenses, which vary in focus and prototypicality by discipline, from which to consider relevant issues: (a) individual attitudes; (b) legality, ethicality, and morality; and (c) embedded meanings within technical domains. Using these lenses, we next present psychological audits as a standardized approach for evaluating, across disciplinary perspectives, the fairness and bias of AI systems that make predictions about humans. We present 12 crucial components of such audits across three categories: (a) components related to AI models in terms of their source data, design, development, features, processes, and outputs; (b) components related to how information about models and their applications is presented, discussed, and understood from the perspectives of those employing the algorithm, those affected by decisions made using its predictions, and third-party observers; and (c) meta-components that must be considered across all other auditing components, including cultural context, respect for persons, and the integrity of the individual research designs used to support all model developer claims.