Inter-fractional portability of deep learning models for lung target tracking on cine imaging acquired in MRI-guided radiotherapy.
Jiayuan Peng, Hayley B Stowe, Pamela P Samson, Clifford G Robinson, Cui Yang, Weigang Hu, Zhen Zhang, Taeho Kim, Geoffrey D Hugo, Thomas R Mazur, Bin Cai
Published in: Physical and Engineering Sciences in Medicine (2024)
MRI-guided radiotherapy systems enable beam gating by tracking the target on planar, two-dimensional cine images acquired during treatment. This study evaluates how well deep-learning (DL) models for target tracking that are trained on data from one fraction generalize to subsequent fractions. Cine images were acquired for six patients treated on an MRI-guided radiotherapy platform (MRIdian, ViewRay Inc.) with an onboard 0.35 T MRI scanner. Three DL models (U-net, attention U-net and nested U-net) for target tracking were trained using two training strategies: (1) uniform training, in which models were trained only on data from the first fraction and tested on data from subsequent fractions, and (2) adaptive training, in which the training set was updated each fraction by adding 20 samples from the current fraction and the models were tested on the remaining images from that fraction. Tracking performance was compared between algorithms, models and training strategies by evaluating the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95) between automatically generated and manually specified contours. The mean DSC across all six patients between manual contours and contours generated by the onboard tracking algorithm (OBT) was 0.68 ± 0.16. Compared to OBT, DSC values improved by 17.0-19.3% for the three DL models with uniform training and by 24.7-25.7% for the models based on adaptive training. HD95 values improved by 50.6-54.5% for the models based on adaptive training. DL-based techniques achieved better tracking performance than the onboard, registration-based tracking approach, and DL-based tracking performance improved further when implementing an adaptive strategy that augments the training data fraction by fraction.
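The two training strategies described above can be sketched as a simple data-handling loop. The snippet below is a hypothetical illustration, not the authors' code: `frames[f]` and `labels[f]` stand for the cine images and manual contours of fraction `f`, and `train_model`, `fine_tune` and `evaluate` are placeholder callables for whichever U-net variant and metric pipeline is used.

```python
# Hypothetical sketch of the uniform vs. adaptive training strategies.
# frames[f], labels[f]: cine frames and manual contours of fraction f (assumed ordering).
# train_model / fine_tune / evaluate: placeholder callables, not from the paper.

def uniform_strategy(frames, labels, train_model, evaluate):
    """Train only on fraction 1, test on every subsequent fraction."""
    model = train_model(frames[0], labels[0])
    return [evaluate(model, frames[f], labels[f]) for f in range(1, len(frames))]

def adaptive_strategy(frames, labels, train_model, fine_tune, evaluate, n_new=20):
    """Augment training with 20 samples per fraction, test on the remainder."""
    model = train_model(frames[0], labels[0])
    scores = []
    for f in range(1, len(frames)):
        # Add n_new samples from the current fraction to the training pool ...
        model = fine_tune(model, frames[f][:n_new], labels[f][:n_new])
        # ... and test on the remaining frames of that fraction.
        scores.append(evaluate(model, frames[f][n_new:], labels[f][n_new:]))
    return scores
```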
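For reference, the two reported metrics can be computed from binary target masks as follows. This is a minimal sketch assuming 2D masks and isotropic pixel spacing passed in millimetres; it is not the evaluation code used in the study.

```python
# Minimal sketch: DSC and HD95 between a predicted and a manual binary mask.
import numpy as np
from scipy import ndimage

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance (in spacing units)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    # Distance from every pixel to the nearest foreground pixel of the other mask.
    dist_to_ref = ndimage.distance_transform_edt(~ref, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred, sampling=spacing)
    # Surface pixels: foreground pixels with at least one background neighbour.
    pred_surface = pred & ~ndimage.binary_erosion(pred)
    ref_surface = ref & ~ndimage.binary_erosion(ref)
    distances = np.concatenate([dist_to_ref[pred_surface], dist_to_pred[ref_surface]])
    return float(np.percentile(distances, 95))
```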
Keyphrases
- deep learning
- magnetic resonance imaging
- convolutional neural network
- radiation therapy
- artificial intelligence
- neural network