Parameter-efficient fine-tuning on large protein language models improves signal peptide prediction.

Shuai Zeng, Duolin Wang, Lei Jiang, Dong Xu
Published in: Genome Research (2024)
Signal peptides (SPs) play a crucial role in protein translocation in cells. The development of large protein language models (PLMs) and prompt-based learning provides a new opportunity for SP prediction, especially for categories with limited annotated data. We present PEFT-SP, a parameter-efficient fine-tuning (PEFT) framework for SP prediction that effectively utilizes pretrained PLMs. We integrated low-rank adaptation (LoRA) into ESM-2 models to better leverage the protein sequence evolutionary knowledge of PLMs. Experiments show that PEFT-SP using LoRA improves on state-of-the-art results, with a maximum Matthews correlation coefficient (MCC) gain of 87.3% for SPs with small training samples and an overall MCC gain of 6.1%. We also employed two other PEFT methods, prompt tuning and adapter tuning, in ESM-2 for SP prediction. Further experiments show that PEFT-SP using adapter tuning can also improve on state-of-the-art results, with up to a 28.1% MCC gain for SPs with small training samples and an overall MCC gain of 3.8%. During training, LoRA requires less computation and memory than adapter tuning, making it feasible to adapt larger and more powerful protein models for SP prediction.
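To make the LoRA-on-ESM-2 setup the abstract describes concrete, here is a minimal sketch using the Hugging Face transformers and peft libraries; it is not the authors' implementation, and the checkpoint name, label count, LoRA rank and alpha, and target modules are illustrative assumptions.

```python
# Minimal sketch: LoRA-adapted ESM-2 for per-residue signal-peptide tagging.
# Assumptions (not from the paper): checkpoint choice, num_labels, LoRA
# rank/alpha, and target_modules are illustrative, not the authors' settings.
from transformers import AutoTokenizer, EsmForTokenClassification
from peft import LoraConfig, TaskType, get_peft_model

checkpoint = "facebook/esm2_t33_650M_UR50D"  # one of several public ESM-2 sizes
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Hypothetical per-residue label set, e.g. signal-peptide region vs. mature protein.
model = EsmForTokenClassification.from_pretrained(checkpoint, num_labels=2)

lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=8,                      # low-rank dimension (assumed value)
    lora_alpha=16,            # LoRA scaling factor (assumed value)
    lora_dropout=0.1,
    target_modules=["query", "value"],  # ESM-2 attention projection layers
)
model = get_peft_model(model, lora_config)

# Only the small LoRA matrices (plus the classification head) are trained;
# the pretrained ESM-2 backbone stays frozen.
model.print_trainable_parameters()

# Example forward pass on a single sequence.
inputs = tokenizer("MKKLLFAIPLVVPFYSHS", return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)
```

Because only the low-rank update matrices receive gradients, the trainable parameter count is a small fraction of the full model, which is consistent with the abstract's observation that LoRA's modest compute and memory footprint allows larger PLMs to be adapted for SP prediction.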