Evaluation of artificial intelligence clinical applications: Detailed case analyses show value of healthcare ethics approach in identifying patient care issues.
Wendy A Rogers, Heather Draper, Stacy M Carter. Published in: Bioethics (2021)
This paper is one of the first to analyse the ethical implications of specific healthcare artificial intelligence (AI) applications, and the first to provide a detailed analysis of AI-based systems for clinical decision support. AI is increasingly being deployed across multiple domains. In response, a plethora of ethical guidelines and principles for general AI use have been published, with some convergence about which ethical concepts are relevant to this new technology. However, few of these frameworks are healthcare-specific, and there has been limited examination of actual AI applications in healthcare. Our ethical evaluation identifies context- and case-specific healthcare ethical issues for two applications, and investigates the extent to which the general ethical principles for AI-assisted healthcare expressed in existing frameworks capture what is most ethically relevant from the perspective of healthcare ethics. We provide a detailed description and analysis of two AI-based systems for clinical decision support (PainChek® and IDx-DR). Our results identify ethical challenges associated with potentially deceptive promissory claims, lack of patient and public involvement in healthcare AI development and deployment, and lack of attention to the impact of AI systems on healthcare relationships. Our analysis also highlights the close connection between ethical evaluation and the technical development and reporting of these systems. Critical appraisal frameworks for healthcare AI should include explicit ethical evaluation with benchmarks. However, each application will require scrutiny across the AI life-cycle to identify healthcare-specific ethical issues. This level of analysis requires more attention to detail than current ethical guidance or frameworks suggest.