Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross sectional analysis.

Bradley D Menz, Nicole M Kuderer, Stephen Bacchi, Natansh D Modi, Benjamin Chin-Yee, Tiancheng Hu, Ceara Rickard, Mark Haseloff, Agnes Vitry, Ross A McKinnon, Ganessan Kichenadasse, Andrew Rowland, Michael J Sorich, Ashley M Hopkins
Published in: BMJ (Clinical research ed.) (2024)
This study found that although effective safeguards to prevent LLMs from being misused to generate health disinformation are feasible, they were inconsistently implemented. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.
Keyphrases
  • mental health
  • public health
  • healthcare
  • cross sectional
  • health information
  • health promotion
  • climate change
  • autism spectrum disorder
  • emergency department
  • clinical practice
  • social media
  • adverse drug