ChatGPT for Tinnitus Information and Support: Response Accuracy and Retest after Three and Six Months.

Wiesław Wiktor Jędrzejczak, Piotr Henryk Skarzynski, Danuta Raj-Koziak, Milaine Dominicini Sanfins, Stavros Hatzopoulos, Krzysztof Kochanek
Published in: Brain Sciences (2024)
Testing of ChatGPT has recently been performed across a diverse range of topics, but most of these assessments have been based on broad domains of knowledge. Here, we test ChatGPT's knowledge of tinnitus, an important but specialized aspect of audiology and otolaryngology. Testing involved evaluating ChatGPT's answers to a defined set of 10 questions on tinnitus. Furthermore, given that the technology is advancing quickly, we re-evaluated the responses to the same 10 questions 3 and 6 months later. The accuracy of the responses was rated by 6 experts (the authors) on a Likert scale ranging from 1 to 5. Most of ChatGPT's responses were rated as satisfactory or better, although we did detect a few instances where the responses were inaccurate. Over the first 3 months, the ratings generally improved, but there was no further significant improvement at 6 months. In our judgment, ChatGPT provided unexpectedly good responses given that the questions were quite specific. Although no potentially harmful errors were identified, some mistakes could be seen as somewhat misleading. ChatGPT shows great potential if further developed by experts in specific areas, but for now it is not yet ready for serious application.