
Single-sequence protein structure prediction using a language model and deep learning.

Ratul Chowdhury, Nazim Bouatta, Surojit Biswas, Christina Floristean, Anant Kharkar, Koushik Roy, Charlotte Rochereau, Gustaf Ahdritz, Joanna Zhang, George M Church, Peter Karl Sorger, Mohammed AlQuraishi
Published in: Nature Biotechnology (2022)
AlphaFold2 and related computational systems predict protein structure using deep learning and co-evolutionary relationships encoded in multiple sequence alignments (MSAs). Despite high prediction accuracy achieved by these systems, challenges remain in (1) prediction of orphan and rapidly evolving proteins for which an MSA cannot be generated; (2) rapid exploration of designed structures; and (3) understanding the rules governing spontaneous polypeptide folding in solution. Here we report development of an end-to-end differentiable recurrent geometric network (RGN) that uses a protein language model (AminoBERT) to learn latent structural information from unaligned proteins. A linked geometric module compactly represents Cα backbone geometry in a translationally and rotationally invariant way. On average, RGN2 outperforms AlphaFold2 and RoseTTAFold on orphan proteins and classes of designed proteins while achieving up to a 10⁶-fold reduction in compute time. These findings demonstrate the practical and theoretical strengths of protein language models relative to MSAs in structure prediction.
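To illustrate the idea behind the geometric module's invariant representation, the sketch below converts a Cα backbone trace into internal coordinates (consecutive distances, planar angles, and torsion angles), which are unchanged by rigid translation and rotation. This is a minimal NumPy illustration of the general principle, not the RGN2 implementation; the function name and parameterization are hypothetical.

```python
import numpy as np

def backbone_internal_coords(ca):
    """Convert Ca coordinates of shape (N, 3) into a rotation- and
    translation-invariant parameterization: N-1 consecutive distances,
    N-2 planar angles, and N-3 torsion (dihedral) angles.
    Illustrative only; not the actual RGN2 geometric module."""
    ca = np.asarray(ca, dtype=float)
    # Bond vectors between consecutive Ca atoms
    b = ca[1:] - ca[:-1]                             # (N-1, 3)
    lengths = np.linalg.norm(b, axis=1)              # N-1 distances
    # Planar angles between successive bond vectors
    u = b / lengths[:, None]
    cos_a = np.clip(np.sum(-u[:-1] * u[1:], axis=1), -1.0, 1.0)
    angles = np.arccos(cos_a)                        # N-2 angles
    # Torsion angles over quadruples of Ca atoms (standard atan2 form)
    n1 = np.cross(b[:-2], b[1:-1])
    n2 = np.cross(b[1:-1], b[2:])
    m = np.cross(n1, b[1:-1] / lengths[1:-1, None])
    x = np.sum(n1 * n2, axis=1)
    y = np.sum(m * n2, axis=1)
    torsions = np.arctan2(y, x)                      # N-3 torsions
    return lengths, angles, torsions
```

Because distances and angles are defined only by relative atom positions, applying any proper rotation and translation to the input coordinates leaves the output unchanged, which is the property the abstract attributes to the geometric module.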
Keyphrases
  • deep learning
  • amino acid
  • protein protein
  • binding protein
  • high resolution
  • machine learning