Clinical Accuracy, Relevance, Clarity, and Emotional Sensitivity of Large Language Models to Surgical Patient Questions: Cross-Sectional Study.
Mert Marcel Dagli, Felix Conrad Oettl, Jaskeerat Gujral, Kashish Malhotra, Yohannes G Ghenbot, Jang W Yoon, Ali Kemal Ozturk, William C Welch. Published in: JMIR Formative Research (2024)
This cross-sectional study evaluates the clinical accuracy, relevance, clarity, and emotional sensitivity of large language model (LLM) responses to questions from patients undergoing surgery, highlighting the models' potential as adjunct tools in patient communication and education. Our findings demonstrated high performance of LLMs across all four domains, with Anthropic's Claude 2 outperforming OpenAI's ChatGPT and Google's Bard, suggesting that LLMs could serve as complementary tools for enhancing information delivery and patient-surgeon interaction.