Drug knowledge discovery via multi-task learning and pre-trained models.
Dongfang Li, Ying Xiong, Baotian Hu, Buzhou Tang, Weihua Peng, Qingcai Chen
Published in: BMC Medical Informatics and Decision Making (2021)
Experimental results on the benchmark Annotation of Genes with Active Mutation-centric function changes (AGAC) corpus show that integrating pre-trained biomedical language representation models (i.e., BERT, NCBI BERT, ClinicalBERT, BioBERT) into a pipeline of information extraction methods with multi-task learning can improve the ability to collect mutation-disease knowledge from PubMed.
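As a rough illustration of the described setup (not the authors' implementation), the sketch below shows a shared pre-trained biomedical encoder with two task-specific heads trained jointly, one token-level head for entity/trigger tagging and one sentence-level head for relation-style labels; the checkpoint name, label counts, and head design are illustrative assumptions.

```python
# Minimal multi-task sketch: shared BioBERT encoder + two task heads.
# Assumed components (not from the paper): the HuggingFace checkpoint name,
# the number of labels per task, and the simple linear heads.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskBioModel(nn.Module):
    def __init__(self, encoder_name="dmis-lab/biobert-base-cased-v1.1",
                 num_entity_labels=5, num_relation_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)       # shared layers
        hidden = self.encoder.config.hidden_size
        self.entity_head = nn.Linear(hidden, num_entity_labels)      # token-level task
        self.relation_head = nn.Linear(hidden, num_relation_labels)  # sentence-level task

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state                      # (batch, seq, hidden)
        entity_logits = self.entity_head(token_states)             # per-token tags
        relation_logits = self.relation_head(token_states[:, 0])   # [CLS] representation
        return entity_logits, relation_logits

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model = MultiTaskBioModel()
batch = tokenizer(["BRCA1 mutation increases breast cancer risk."],
                  return_tensors="pt", padding=True)
entity_logits, relation_logits = model(batch["input_ids"], batch["attention_mask"])
# Training would combine a token-level loss and a sentence-level loss so that
# both tasks update the shared encoder, which is the essence of multi-task learning.
```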