
Interpretable deep learning of amyloid nucleation by massive experimental quantification of random sequences.

Mike Thompson, Mariano Martín, Trinidad Sanmartín Olmo, Chandana Rajesh, Peter K. Koo, Benedetta Bolognesi, Ben Lehner
Published in: bioRxiv : the preprint server for biology (2024)
Insoluble amyloid aggregates are the hallmarks of more than fifty human diseases, including the most common neurodegenerative disorders. The process by which soluble proteins nucleate to form amyloid fibrils remains, however, poorly characterized. Relatively few sequences are known that form amyloids with high propensity, and this data shortage likely limits our capacity to understand, predict, engineer, and prevent the formation of amyloid fibrils. Here we quantify the nucleation of amyloids at an unprecedented scale and use the data to train a deep learning model of amyloid nucleation. In total, we quantify the nucleation rates of >100,000 20-amino-acid-long peptides. This large and diverse dataset allows us to train CANYA, a convolution-attention hybrid neural network. CANYA is fast and outperforms existing methods, with stable performance across diverse prediction tasks. Interpretability analyses reveal CANYA's decision-making process and learned grammar, providing mechanistic insights into amyloid nucleation. Our results illustrate the power of massive experimental analysis of random sequence spaces and provide an interpretable and robust neural network model to predict amyloid nucleation.
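To make the "convolution-attention hybrid" idea concrete, the sketch below shows the generic pattern for classifying fixed-length peptides: embed residues, scan for local motifs with a convolution, let positions exchange information via self-attention, then pool to a single score. All dimensions, weights, and the example peptide are illustrative assumptions; this is not the published CANYA architecture or its trained parameters.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 canonical amino acids
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

D, K = 16, 5  # embedding dim and conv kernel width (illustrative choices)
emb = rng.normal(size=(20, D)) * 0.1          # residue embeddings
conv_w = rng.normal(size=(K, D, D)) * 0.1     # convolution filters
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))
w_out = rng.normal(size=D) * 0.1              # final linear head

def score(peptide):
    """Untrained conv-attention score for a 20-mer; random weights."""
    x = emb[[AA.index(a) for a in peptide]]   # (L, D) residue embeddings
    # 1) Convolution: scan for short local motifs ('same'-length output).
    xp = np.pad(x, ((K // 2, K // 2), (0, 0)))
    h = np.stack([sum(xp[i + k] @ conv_w[k] for k in range(K))
                  for i in range(len(peptide))])
    h = np.maximum(h, 0)                      # ReLU nonlinearity
    # 2) Self-attention: motif positions exchange information globally.
    q, k_, v = h @ Wq, h @ Wk, h @ Wv
    h = softmax(q @ k_.T / np.sqrt(D)) @ v    # (L, D)
    # 3) Mean-pool over positions and project to one nucleation logit.
    return float(h.mean(axis=0) @ w_out)

s = score("SYSGYSQSTDTSGYGQSSYS")  # hypothetical example 20-mer
```

In practice such a model would be trained on the measured nucleation rates; the convolution stage is what makes motif-level interpretability analyses (e.g. filter visualization) natural for this architecture family.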
Keyphrases
  • neural network
  • deep learning
  • amino acid
  • artificial intelligence
  • data analysis