A Qualitative Evaluation of ChatGPT4 and PaLM2's Response to Patient's Questions Regarding Age-Related Macular Degeneration.

George Adrian Muntean, Anca Marginean, Adrian Groza, Ioana Damian, Sara Alexia Roman, Mădălina Claudia Hapca, Anca Mădălina Sere, Roxana Mihaela Mănoiu, Maximilian Vlad Muntean, Simona Delia Nicoară
Published in: Diagnostics (Basel, Switzerland) (2024)
Patient compliance in chronic illnesses is essential for disease management. This also applies to age-related macular degeneration (AMD), a chronic acquired retinal degeneration that requires constant monitoring and patient cooperation. Patients with AMD can therefore benefit from being properly informed about their disease, regardless of its stage. Information is essential for keeping them compliant with lifestyle changes, regular monitoring, and treatment. Large language models have shown potential in numerous fields, including medicine, with remarkable use cases. In this paper, we assessed the capacity of two large language models (LLMs), ChatGPT4 and PaLM2, to answer questions frequently asked by patients with AMD. After searching AMD-patient-dedicated websites for frequently asked questions, we curated and selected 143 questions. The questions were then transformed into scenarios that were answered by ChatGPT4, PaLM2, and three ophthalmologists. Afterwards, the answers provided by the two LLMs to a set of 133 questions were evaluated by two ophthalmologists, who graded each answer on a five-point Likert scale. The models were evaluated on six qualitative criteria: (C1) reflects clinical and scientific consensus, (C2) likelihood of possible harm, (C3) evidence of correct reasoning, (C4) evidence of correct comprehension, (C5) evidence of correct retrieval, and (C6) missing content. Out of 133 questions, ChatGPT4 received a score of five from both reviewers for 118 questions (88.72%) on C1, for 130 (97.74%) on C2, for 131 (98.50%) on C3, for 133 (100%) on C4, for 132 (99.25%) on C5, and for 122 (91.73%) on C6, while PaLM2 received it for 81 questions (60.90%) on C1, for 114 (85.71%) on C2, for 115 (86.47%) on C3, for 124 (93.23%) on C4, for 113 (84.97%) on C5, and for 93 (69.92%) on C6. Despite the overall high performance, some answers were incomplete or inaccurate, and the paper explores the types of errors produced by these LLMs. Our study reveals that ChatGPT4 and PaLM2 are valuable instruments for patient information and education; however, since these models still have limitations, they should be used in addition to the advice provided by physicians.