
Large Language Model (LLM)-Powered Chatbots Fail to Generate Guideline-Consistent Content on Resuscitation and May Provide Potentially Harmful Advice.

Alexei A. Birkun, Adhish Gautam
Published in: Prehospital and disaster medicine (2023)
The LLM-powered chatbots' advice on helping a non-breathing victim omits essential details of resuscitation technique and occasionally contains deceptive, potentially harmful directives. Further research and regulatory measures are required to mitigate the risks of chatbot-generated misinformation about resuscitation reaching the public.
Keyphrases
  • cardiac arrest
  • cardiopulmonary resuscitation
  • social media
  • healthcare
  • mental health
  • human health