Predicting treatment response from longitudinal images using multi-task deep learning.
Cheng Jin, Heng Yu, Jia Ke, Pei-Rong Ding, Yongju Yi, Xiaofeng Jiang, Xin Duan, Jinghua Tang, Daniel T. Chang, Xiaojian Wu, Feng Gao, Ruijiang Li. Published in: Nature Communications (2021)
Radiographic imaging is routinely used to evaluate treatment response in solid tumors. Current imaging response metrics do not reliably predict the underlying biological response. Here, we present a multi-task deep learning approach that allows simultaneous tumor segmentation and response prediction. We design two Siamese subnetworks that are joined at multiple layers, which enables integration of multi-scale feature representations and in-depth comparison of pre-treatment and post-treatment images. The network is trained using 2568 magnetic resonance imaging scans of 321 rectal cancer patients to predict pathologic complete response after neoadjuvant chemoradiotherapy. In multi-institution validation, the imaging-based model achieves an AUC of 0.95 (95% confidence interval: 0.91-0.98) and 0.92 (0.87-0.96) in two independent cohorts of 160 and 141 patients, respectively. When combined with blood-based tumor markers, the integrated model further improves prediction accuracy with an AUC of 0.97 (0.93-0.99). Our approach to capturing dynamic information in longitudinal images may be broadly applied to screening, treatment response evaluation, disease monitoring, and surveillance.
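The abstract names the key architectural ideas (weight-shared Siamese subnetworks, fusion at multiple scales, a segmentation output alongside a response output) without specifying the implementation. The PyTorch sketch below is an illustrative assumption of how such a multi-task Siamese network could be wired: the layer sizes, the U-Net-style decoder, and the fusion rule (absolute difference of globally pooled features at each scale) are hypothetical choices, not the authors' published configuration.

```python
import torch
import torch.nn as nn

# Sketch of a Siamese multi-task network: a weight-shared encoder processes the
# pre- and post-treatment scans, features are compared at several depths, and two
# heads produce (i) a tumor segmentation map and (ii) a response logit.
# All sizes and the fusion strategy are illustrative assumptions.

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class SiameseMultiTaskNet(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        # Shared (Siamese) encoder applied to both time points.
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        # Decoder for tumor segmentation (here run on the post-treatment branch).
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.seg_head = nn.Conv2d(base, 1, 1)
        # Response head: classifies the multi-scale pre/post feature differences.
        self.cls_head = nn.Sequential(
            nn.Linear(base + base * 2 + base * 4, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )

    def encode(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(self.pool(f1))
        f3 = self.enc3(self.pool(f2))
        return f1, f2, f3

    def forward(self, pre, post):
        p1, p2, p3 = self.encode(pre)    # pre-treatment features
        q1, q2, q3 = self.encode(post)   # post-treatment features (shared weights)

        # Segmentation branch with U-Net-style skip connections.
        d2 = self.dec2(torch.cat([self.up2(q3), q2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), q1], dim=1))
        seg_logits = self.seg_head(d1)

        # Response branch: absolute difference of globally pooled features at each
        # scale stands in for the "in-depth comparison" of the two time points.
        gap = lambda f: f.mean(dim=(2, 3))
        diff = torch.cat(
            [(gap(p) - gap(q)).abs() for p, q in [(p1, q1), (p2, q2), (p3, q3)]], dim=1
        )
        response_logit = self.cls_head(diff)
        return seg_logits, response_logit

if __name__ == "__main__":
    net = SiameseMultiTaskNet()
    pre = torch.randn(2, 1, 128, 128)    # pre-treatment MRI slices
    post = torch.randn(2, 1, 128, 128)   # post-treatment MRI slices
    seg, resp = net(pre, post)
    print(seg.shape, resp.shape)         # (2, 1, 128, 128) and (2, 1)
```

In a multi-task setup such as this, the two outputs would typically be trained jointly, e.g. a Dice or cross-entropy loss on the segmentation map plus a binary cross-entropy loss on the response logit, with a weighting factor balancing the two terms.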
Keyphrases
- deep learning
- convolutional neural network
- artificial intelligence
- high resolution
- magnetic resonance imaging
- rectal cancer
- locally advanced
- neoadjuvant chemotherapy
- contrast enhanced
- lymph node