ETLD: an encoder-transformation layer-decoder architecture for protein contact and mutation effects prediction.

He Wang, Yongjian Zang, Ying Kang, Jianwen Zhang, Lei Zhang, Shengli Zhang
Published in: Briefings in Bioinformatics (2023)
The latent features extracted from the multiple sequence alignments (MSAs) of homologous protein families are useful for identifying residue-residue contacts, predicting mutation effects, shaping protein evolution, etc. Over the past three decades, a growing body of supervised and unsupervised machine learning methods has been applied to this field, yielding fruitful results. Here, we propose a novel self-supervised model, called the encoder-transformation layer-decoder (ETLD) architecture, capable of capturing protein sequence latent features directly from MSAs. Compared to the typical autoencoder model, ETLD introduces a transformation layer with the ability to learn inter-site couplings, which can be used to parse out the two-dimensional residue-residue contact map after a simple mathematical derivation or an additional supervised neural network. ETLD retains the process of encoding and decoding sequences, and the predicted probabilities of amino acids at each site can be further used to construct mutation landscapes for mutation effects prediction, generally outperforming advanced models such as GEMME, DeepSequence and EVmutation. Overall, ETLD is a highly interpretable unsupervised model with great potential for improvement, and it can be further combined with supervised methods for more extensive and accurate predictions.
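To make the two downstream uses described above concrete, the following is a minimal, hedged sketch of an ETLD-like pipeline in NumPy. All layer sizes, the tanh activation, the random stand-in weights, and the Frobenius-norm-plus-APC coupling extraction are illustrative assumptions, not the authors' exact architecture or derivation; the point is only to show how a transformation layer's couplings can be parsed into a contact map, and how decoder probabilities yield a mutation log-odds score.

```python
import numpy as np

rng = np.random.default_rng(0)

L_SEQ, Q, H = 8, 21, 16  # toy sequence length, alphabet size, latent width

def one_hot(seq):
    """Encode an integer sequence (values in [0, Q)) as a one-hot matrix."""
    x = np.zeros((L_SEQ, Q))
    x[np.arange(L_SEQ), seq] = 1.0
    return x

# Random matrices standing in for trained weights (assumption: a trained
# model would learn these from the MSA).
W_enc = rng.normal(scale=0.1, size=(L_SEQ * Q, H))  # encoder
T = rng.normal(scale=0.1, size=(H, H))              # transformation layer
W_dec = rng.normal(scale=0.1, size=(H, L_SEQ * Q))  # decoder

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def forward(seq):
    """Encoder -> transformation layer -> decoder; per-site AA probabilities."""
    h = np.tanh(one_hot(seq).ravel() @ W_enc)
    h = h @ T  # inter-site couplings act in this layer
    logits = (h @ W_dec).reshape(L_SEQ, Q)
    return softmax(logits, axis=1)

def contact_scores():
    """One plausible way to parse couplings into an L x L contact map:
    Frobenius norm of each site-pair coupling block, then the standard
    average-product correction (APC), then symmetrization."""
    J = (W_enc @ T @ W_dec).reshape(L_SEQ, Q, L_SEQ, Q)
    F = np.linalg.norm(J, axis=(1, 3))  # Frobenius norm per site pair
    np.fill_diagonal(F, 0.0)
    apc = np.outer(F.mean(axis=1), F.mean(axis=0)) / F.mean()
    C = F - apc
    return 0.5 * (C + C.T)

def mutation_effect(seq, site, mutant):
    """Log-odds of the mutant vs. wild-type amino acid at one site."""
    p = forward(seq)
    return np.log(p[site, mutant]) - np.log(p[site, seq[site]])

seq = rng.integers(0, Q, size=L_SEQ)
probs = forward(seq)          # (L_SEQ, Q), rows sum to 1
C = contact_scores()          # symmetric (L_SEQ, L_SEQ) contact map
delta = mutation_effect(seq, 0, (seq[0] + 1) % Q)
```

In this sketch the contact map comes from the effective coupling matrix `W_enc @ T @ W_dec` rather than from the transformation layer alone; the paper's own derivation may differ, but the Frobenius-norm-with-APC step mirrors the convention used by direct-coupling-analysis methods.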
Keyphrases
  • machine learning
  • amino acid
  • artificial intelligence
  • protein protein
  • big data
  • binding protein
  • deep learning
  • dna damage
  • high resolution
  • small molecule
  • oxidative stress
  • mass spectrometry