
Can large language models predict antimicrobial peptide activity and toxicity?

Markus Orsi, Jean-Louis Reymond
Published in: RSC medicinal chemistry (2024)
Antimicrobial peptides (AMPs) are naturally occurring or designed peptides of up to a few tens of amino acids which may help address the antimicrobial resistance crisis. However, their clinical development is limited by toxicity to human cells, a parameter which is very difficult to control. Given the similarity between peptide sequences and words, large language models (LLMs) might be able to predict AMP activity and toxicity. To test this hypothesis, we fine-tuned LLMs using data from the Database of Antimicrobial Activity and Structure of Peptides (DBAASP). GPT-3 performed well but not reproducibly for predicting activity and hemolysis, taken as a proxy for toxicity. The more recent GPT-3.5 performed worse and was surpassed by recurrent neural networks (RNN) trained on sequence-activity data or support vector machines (SVM) trained on MAP4C molecular fingerprint-activity data. These simpler models are therefore recommended, although the rapid evolution of LLMs warrants future re-evaluation of their prediction abilities.
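As an illustration of how sequence-activity data could be cast as a language-model fine-tuning task, the sketch below formats hypothetical peptide records as prompt-completion pairs in JSONL, a format commonly used for LLM fine-tuning. The field names, example sequences, and labels are illustrative assumptions, not the authors' exact scheme or actual DBAASP entries.

```python
import json

# Hypothetical DBAASP-style records: a peptide sequence plus an activity label.
# These sequences and labels are placeholders for illustration only.
records = [
    {"sequence": "KWKLFKKIEK", "active": True},
    {"sequence": "GIGAVLKVLT", "active": False},
]

def to_finetune_example(record):
    """Cast one sequence-activity record as a prompt/completion pair."""
    prompt = f"Peptide: {record['sequence']}\nActive:"
    completion = " yes" if record["active"] else " no"
    return {"prompt": prompt, "completion": completion}

# One JSON object per line, as expected by typical fine-tuning pipelines.
examples = [to_finetune_example(r) for r in records]
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

The same casting works for hemolysis labels by swapping the label field, so a single formatting step can serve both the activity and toxicity prediction tasks described above.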
Keyphrases
  • antimicrobial resistance
  • amino acid