Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study.
Pouyan Esmaeilzadeh, Tala Mirzaei, Spurthy Dharanikota. Published in: Journal of Medical Internet Research (2021)
The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in health care. Thus, various risks remain in implementing AI applications for diagnostics and treatment recommendations for patients with both acute and chronic illnesses. These concerns persist even when AI applications are used as recommendation systems under physician experience, wisdom, and control. Before any widespread rollout of AI, more studies are needed to identify the challenges that implementing and using AI applications may raise. This study can provide researchers and managers with critical insights into the determinants of individuals' intention to use AI clinical applications. Regulatory agencies, in cooperation with health care institutions, should establish normative standards and evaluation guidelines for implementing AI in health care. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical aspects of AI clinical applications.
Keyphrases
- artificial intelligence
- healthcare
- machine learning
- big data
- deep learning
- primary care
- quality improvement
- emergency department
- health insurance
- patient reported
- affordable care act
- electronic health record