Clinical use of artificial intelligence requires AI-capable organizations.
Laurie Lovett Novak, Regina G Russell, Kim V Garvey, Mehool Patel, Kelly Jean Thomas Craig, Jane Snowdon, Bonnie Miller
Published in: JAMIA Open (2023)
Artificial intelligence-based algorithms are being widely implemented in health care, even as evidence emerges of bias in their design, problems with their implementation, and potential harm to patients. To achieve the promise of AI-based tools for improving health, healthcare organizations will need to be AI-capable, with internal and external systems functioning in tandem to ensure the safe, ethical, and effective use of these tools. Ideas are starting to emerge about the organizational routines, competencies, resources, and infrastructures required for safe and effective deployment of AI in health care, but there has been little empirical research. Infrastructures that provide legal and regulatory guidance for managers, clinician competencies for the safe and effective use of AI-based tools, and learner-centric resources such as clear AI documentation and local health ecosystem impact reviews can help drive continuous improvement.
Keyphrases
- artificial intelligence
- healthcare
- machine learning
- big data
- deep learning
- health information
- decision making
- quality improvement