Semisupervised Training of a Brain MRI Tumor Detection Model Using Mined Annotations.
Nathaniel C Swinburne, Vivek Yadav, Julie Kim, Ye R Choi, David C Gutman, Jonathan T Yang, Nelson S Moss, Jacqueline Stone, Jamie Tisnado, Vaios Hatzoglou, Sofia S Haque, Sasan Karimi, John Lyo, Krishna Juluru, Karl Pichotta, Jianjiong Gao, Sohrab P Shah, Andrei I Holodny, Robert J Young
Published in: Radiology (2022)
Background: Artificial intelligence (AI) applications for cancer imaging conceptually begin with automated tumor detection, which can provide the foundation for downstream AI tasks. However, supervised training requires many image annotations, and performing dedicated post hoc image labeling is burdensome and costly.

Purpose: To investigate whether clinically generated image annotations can be data mined from the picture archiving and communication system (PACS), automatically curated, and used for semisupervised training of a brain MRI tumor detection model.

Materials and Methods: In this retrospective study, the cancer center PACS was mined for brain MRI scans acquired between January 2012 and December 2017, and all annotated axial T1 postcontrast images were included. Line annotations were converted to boxes, excluding boxes shorter than 1 cm or longer than 7 cm. The resulting boxes were used for supervised training of object detection models based on the RetinaNet and Mask region-based convolutional neural network (Mask R-CNN) architectures. The best-performing model trained from the mined data set was then used to detect unannotated tumors on the training images themselves (self-labeling), automatically correcting many of the missing labels. After self-labeling, new models were trained using this expanded data set. Models were scored for precision, recall, and F1 on a held-out test data set comprising 754 manually labeled images from 100 patients (403 intra-axial and 56 extra-axial enhancing tumors). Model F1 scores were compared using bootstrap resampling.

Results: The PACS query extracted 31,150 line annotations, yielding 11,880 boxes that met the inclusion criteria. This mined data set was used to train models, yielding F1 scores of 0.886 for RetinaNet and 0.908 for Mask R-CNN. Self-labeling added 18,562 training boxes, improving model F1 scores to 0.935 (P < .001) and 0.954 (P < .001), respectively.

Conclusion: The application of semisupervised learning to mined image annotations significantly improved tumor detection performance, achieving an excellent F1 score of 0.954. This development pipeline can be extended to other imaging modalities, repurposing unused data silos to potentially enable automated tumor detection across radiologic modalities.

© RSNA, 2022. Online supplemental material is available for this article.
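The methods described above reduce to four computable steps: converting mined line annotations into boxes and filtering them by length, adding confident model detections back to the training set (self-labeling), scoring detections with precision, recall, and F1, and comparing F1 scores by bootstrap resampling. The Python sketch below illustrates these steps under explicit assumptions; the square-box construction, the `predict` callable, the 0.5 score and IoU thresholds, and the bootstrap p-value procedure are illustrative placeholders, not the authors' published implementation.

```python
import numpy as np

def line_to_box(x1, y1, x2, y2, mm_per_px, min_cm=1.0, max_cm=7.0):
    """Convert a clinician's line (ruler) annotation into a square bounding box.
    Returns None when the measured length falls outside the 1-7 cm inclusion
    window from the abstract; the square-box construction is an assumption."""
    length_px = float(np.hypot(x2 - x1, y2 - y1))
    length_cm = length_px * mm_per_px / 10.0
    if not (min_cm <= length_cm <= max_cm):
        return None
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = length_px / 2.0
    return (cx - half, cy - half, cx + half, cy + half)  # (xmin, ymin, xmax, ymax)

def iou(a, b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def self_label(predict, images, mined_boxes, score_thr=0.5, iou_thr=0.5):
    """One round of self-labeling: keep confident detections that do not overlap
    an already-mined box. `predict` is any callable returning (box, score) pairs
    for an image; both thresholds are illustrative."""
    expanded = {img_id: list(boxes) for img_id, boxes in mined_boxes.items()}
    for img_id, image in images.items():
        kept = expanded.setdefault(img_id, [])
        for box, score in predict(image):
            if score >= score_thr and all(iou(box, b) < iou_thr for b in kept):
                kept.append(box)
    return expanded

def f1_score(tp, fp, fn):
    """Detection F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2.0 * precision * recall / denom if denom else 0.0

def bootstrap_f1_diff_p(counts_a, counts_b, n_boot=10_000, seed=0):
    """Two-sided bootstrap p-value for the difference in F1 between two models,
    resampling test images with replacement. Each input is a list of (tp, fp, fn)
    tuples per image; this is a generic sketch, not the authors' exact procedure."""
    rng = np.random.default_rng(seed)
    n = len(counts_a)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        a = np.array([counts_a[i] for i in idx]).sum(axis=0)
        b = np.array([counts_b[i] for i in idx]).sum(axis=0)
        diffs.append(f1_score(*b) - f1_score(*a))
    diffs = np.array(diffs)
    # Proportion of resamples in which the observed direction of improvement reverses.
    return 2.0 * min((diffs <= 0).mean(), (diffs >= 0).mean())
```

Note that `self_label` leaves the mined boxes untouched and only appends new, non-overlapping detections, which mirrors the abstract's description of self-labeling as correcting missing labels rather than revising existing ones.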
Keyphrases
- deep learning
- artificial intelligence
- convolutional neural network
- big data
- machine learning
- magnetic resonance imaging
- contrast enhanced