
Prompt engineering to increase GPT-3.5's performance on the Plastic Surgery In-Service Exams.

George R Nahass, Sydney W Chin, Isabel M Scharf, Sobhi Kazmouz, Nicolas Kaplan, Richard Chiu, Kevin Yang, Naji Bou Zeid, Julia Corcoran, Lee W T Alkureishi
Published in: Journal of Plastic, Reconstructive & Aesthetic Surgery: JPRAS (2024)
This study assesses ChatGPT's (GPT-3.5) performance on the 2021 ASPS Plastic Surgery In-Service Examination under prompt modifications and Retrieval Augmented Generation (RAG). ChatGPT was instructed to act as a "resident," "attending," or "medical student," and the RAG condition supplied context from a curated vector database. Neither intervention produced a significant improvement: the "resident" prompt yielded the highest accuracy among the role prompts at 54%, and accuracy under RAG remained at 54.3%. Although ChatGPT reasoned appropriately when its answers were correct, its overall performance fell in the 10th percentile, indicating that fine-tuning and more sophisticated approaches are needed to improve AI's utility in complex medical tasks.
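To make the two interventions concrete, below is a minimal Python sketch of role ("persona") prompting and a naive RAG step against GPT-3.5. The persona wordings, the embedding model choice, and the brute-force cosine-similarity retrieval are all illustrative assumptions; the study's actual prompts and curated vector database are not reproduced in the abstract.

```python
# Illustrative sketch only: persona prompting + a naive RAG retrieval step.
# Assumes the openai Python package (>=1.0) and OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical wordings for the three personas tested in the study.
ROLE_PROMPTS = {
    "resident": "You are a plastic surgery resident answering board-style exam questions.",
    "attending": "You are an attending plastic surgeon answering board-style exam questions.",
    "medical student": "You are a medical student answering board-style exam questions.",
}

def embed(text: str) -> np.ndarray:
    """Embed a string; the embedding model here is an arbitrary stand-in."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def retrieve(question: str, corpus: list[str], k: int = 3) -> list[str]:
    """Naive retrieval: rank a small in-memory corpus by cosine similarity.
    The study used a curated vector database; this loop merely stands in."""
    q = embed(question)
    doc_vecs = [(doc, embed(doc)) for doc in corpus]
    def cos(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted(doc_vecs, key=lambda dv: cos(dv[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def answer(question: str, role: str = "resident",
           context: list[str] | None = None) -> str:
    """Ask GPT-3.5 an exam question under a persona, optionally with RAG context."""
    user_content = question
    if context:
        user_content = "Context:\n" + "\n".join(context) + "\n\nQuestion:\n" + question
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": ROLE_PROMPTS[role]},
            {"role": "user", "content": user_content},
        ],
    )
    return resp.choices[0].message.content
```

Accuracy under either condition would then be scored by comparing the model's selected option against the exam key for each question; the reported result is that neither persona prompting nor retrieved context moved accuracy meaningfully above the ~54% baseline.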