Variational autoencoders learn transferable representations of metabolomics data.
Daniel P Gomari, Annalise Schweickart, Leandro Cerchietti, Elisabeth Paietta, Hugo Fernandez, Hassen Al-Amin, Karsten Suhre, Jan Krumsiek
Published in: Communications Biology (2022)
Dimensionality reduction approaches are commonly used to deconvolve high-dimensional metabolomics datasets into underlying core metabolic processes. However, current state-of-the-art methods are largely unable to detect nonlinearities in metabolomics data. Variational Autoencoders (VAEs) are a deep learning method designed to learn nonlinear latent representations that generalize to unseen data. Here, we trained a VAE on a large-scale metabolomics population cohort of human blood samples comprising over 4,500 individuals. We analyzed the pathway composition of the latent space using a global feature importance score, which demonstrated that the latent dimensions represent distinct cellular processes. To demonstrate model generalizability, we generated latent representations of unseen metabolomics datasets on type 2 diabetes, acute myeloid leukemia, and schizophrenia and found significant correlations with clinical patient groups. Notably, the VAE representations showed stronger effects than latent dimensions derived by linear and nonlinear principal component analysis. Taken together, we demonstrate that the VAE is a powerful method that learns biologically meaningful, nonlinear, and transferable latent representations of metabolomics data.
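For readers unfamiliar with the approach, the sketch below illustrates how a VAE of the kind described above can be set up for tabular metabolomics data: an encoder maps each metabolite profile to a Gaussian over a small number of latent dimensions, a decoder reconstructs the profile from a latent sample, and training minimizes the negative ELBO (reconstruction error plus KL divergence). All layer sizes, the latent dimensionality, and the loss weighting are hypothetical placeholders, not the architecture or hyperparameters used in the paper.

```python
# Minimal VAE sketch for tabular metabolomics data (hypothetical architecture,
# not the configuration reported in the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaboliteVAE(nn.Module):
    def __init__(self, n_metabolites: int, latent_dim: int = 16, hidden_dim: int = 128):
        super().__init__()
        # Encoder maps a metabolite profile to the mean and log-variance of a
        # Gaussian over the latent dimensions.
        self.encoder = nn.Sequential(nn.Linear(n_metabolites, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder reconstructs the metabolite profile from a latent sample.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_metabolites),
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z ~ N(mu, sigma^2) via the reparameterization trick so that
        # gradients flow through the stochastic sampling step.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta: float = 1.0):
    # Negative ELBO: reconstruction error plus beta-weighted KL divergence
    # between the approximate posterior and a standard normal prior.
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Usage: after training on the population cohort, unseen samples (e.g. a
# disease dataset) are encoded with model.encode() to obtain their latent
# representations for downstream association tests.
model = MetaboliteVAE(n_metabolites=500, latent_dim=16)
x = torch.randn(32, 500)  # placeholder batch of z-scored metabolite levels
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
```

Because the latent dimensions are learned only from the training cohort, transferring to a new dataset requires no retraining; the new samples are simply passed through the trained encoder, which is what makes the representations reusable across cohorts.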
Keyphrases
- mass spectrometry
- working memory
- electronic health record
- type 2 diabetes
- deep learning
- acute myeloid leukemia
- big data
- machine learning
- endothelial cells
- metabolic syndrome
- cardiovascular disease
- adipose tissue
- artificial intelligence
- skeletal muscle
- case report
- weight loss
- acute lymphoblastic leukemia
- convolutional neural network
- resistance training
- neural network
- single cell