Self-Supervised Molecular Pretraining Strategy for Low-Resource Reaction Prediction Scenarios.

Zhipeng Wu, Xiang Cai, Chengyun Zhang, Haoran Qiao, Yejian Wu, Yun Zhang, Xinqiao Wang, Haiying Xie, Feng Luo, Hongliang Duan
Published in: Journal of Chemical Information and Modeling (2022)
In the face of low-resource reaction training samples, we construct a chemical platform for addressing small-scale reaction prediction problems. Using a self-supervised pretraining strategy called MAsked Sequence to Sequence (MASS), the Transformer model can absorb the chemical information of about 1 billion molecules and then be fine-tuned on small-scale reaction prediction tasks. To further strengthen the predictive performance of our model, we combine MASS with a reaction transfer learning strategy. Here, we show that the average accuracy improvements of the Transformer model reach 14.07, 24.26, 40.31, and 57.69% on the Baeyer-Villiger, Heck, C-C bond formation, and functional group interconversion reaction data sets, respectively, marking an important step toward low-resource reaction prediction.
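To illustrate the idea behind MASS-style pretraining, the following minimal sketch (not the authors' implementation) shows how a self-supervised training example could be built from a SMILES string: a contiguous token span is hidden from the encoder, and the decoder is trained to reconstruct that span. The tokenizer, mask ratio, and function names here are illustrative assumptions, and the real objective also masks the decoder input at unmasked positions.

```python
# Minimal, simplified sketch of a MASS-style masked seq2seq training example
# built from a SMILES string. All names and parameters are illustrative.
import random
import re
from typing import Optional

MASK = "[MASK]"

def tokenize_smiles(smiles: str) -> list:
    """Coarse SMILES tokenizer: bracket atoms, Cl/Br, ring-closure labels, else single characters."""
    pattern = r"(\[[^\]]+\]|Br|Cl|@@|%\d{2}|.)"
    return re.findall(pattern, smiles)

def make_mass_example(smiles: str, mask_ratio: float = 0.5, seed: Optional[int] = None):
    """Return (encoder_input, decoder_input, decoder_target) token lists."""
    rng = random.Random(seed)
    tokens = tokenize_smiles(smiles)
    span_len = max(1, int(len(tokens) * mask_ratio))
    start = rng.randint(0, len(tokens) - span_len)

    span = tokens[start:start + span_len]
    # Encoder sees the molecule with one contiguous span replaced by mask tokens.
    encoder_input = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    # Decoder input is the masked span shifted right; the target is the span itself.
    decoder_input = [MASK] + span[:-1]
    decoder_target = span
    return encoder_input, decoder_input, decoder_target

if __name__ == "__main__":
    enc, dec_in, dec_tgt = make_mass_example("CC(=O)Oc1ccccc1C(=O)O", seed=0)  # aspirin
    print("encoder input :", " ".join(enc))
    print("decoder input :", " ".join(dec_in))
    print("decoder target:", " ".join(dec_tgt))
```

In a full pipeline, examples of this form would pretrain an encoder-decoder Transformer on unlabeled molecules before fine-tuning the same weights on the small reaction-prediction data sets.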