Comparison of Prompt Engineering and Fine-Tuning Strategies in Large Language Models in the Classification of Clinical Notes.
Xiaodan Zhang, Nabasmita Talukdar, Sandeep Vemulapalli, Sumyeong Ahn, Jiankun Wang, Han Meng, Sardar Mehtab Bin Murtaza, Dmitry Leshchiner, Aakash Ajay Dave, Dimitri F Joseph, Martin Witteveen-Lane, Dave Chesla, Jiayu Zhou, Bin Chen
Published in: AMIA Joint Summits on Translational Science Proceedings (2024)
Emerging large language models (LLMs) are being actively evaluated in various fields, including healthcare. Most studies have focused on established benchmarks and standard parameter settings; however, the variation and impact of prompt engineering and fine-tuning strategies have not been fully explored. This study benchmarks GPT-3.5 Turbo, GPT-4, and Llama-7B against BERT models and medical fellows' annotations in identifying patients with metastatic cancer from discharge summaries. Results revealed that clear, concise prompts incorporating reasoning steps significantly enhanced performance. GPT-4 exhibited superior performance among all models. Notably, one-shot learning and fine-tuning provided no incremental benefit. The model's accuracy was sustained even when keywords for metastatic cancer were removed or when half of the input tokens were randomly discarded. These findings underscore GPT-4's potential to substitute for specialized models, such as PubMedBERT, through strategic prompt engineering, and suggest opportunities to improve open-source models, which are better suited for use in clinical settings.
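To make the prompt-engineering and robustness claims concrete, the sketch below illustrates the kind of setup the abstract describes: a concise classification prompt with explicit reasoning steps, plus a check that randomly discards roughly half of the input tokens. The prompt wording, model name, and OpenAI client usage are illustrative assumptions, not the authors' exact protocol.

```python
# Illustrative sketch only; not the authors' exact prompts or pipeline.
import random

from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A clear, concise prompt with explicit reasoning steps (hypothetical wording).
PROMPT = (
    "You are reviewing a hospital discharge summary.\n"
    "Step 1: Identify any mention of cancer.\n"
    "Step 2: Decide whether the cancer has spread beyond its primary site.\n"
    "Step 3: Answer with exactly 'metastatic' or 'not metastatic'.\n\n"
    "Discharge summary:\n{note}"
)


def discard_half_tokens(note: str, seed: int = 0) -> str:
    """Randomly drop about 50% of whitespace-delimited tokens,
    mimicking the robustness test described in the abstract."""
    rng = random.Random(seed)
    return " ".join(t for t in note.split() if rng.random() < 0.5)


def classify(note: str, model: str = "gpt-4") -> str:
    """Classify one discharge summary with a deterministic completion."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": PROMPT.format(note=note)}],
    )
    return response.choices[0].message.content.strip()


# Example robustness comparison on a single (hypothetical) note:
# label_full = classify(note)
# label_degraded = classify(discard_half_tokens(note))
```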