
Deep Reinforcement Learning with Automated Label Extraction from Clinical Reports Accurately Classifies 3D MRI Brain Volumes.

Joseph Nathaniel Stember, Hrithwik Shalu
Published in: Journal of Digital Imaging (2022)
Image classification is arguably the most fundamental task in radiology artificial intelligence. To reduce the burden of acquiring and labeling data sets, we employed a two-pronged strategy. In Part 1, we automatically extracted labels from radiology reports. In Part 2, we used those labels to train a data-efficient reinforcement learning (RL) classifier. We applied the approach to a small set of patient images and radiology reports from our institution. For Part 1, we trained sentence-BERT (SBERT) on 90 radiology reports. In Part 2, we used the labels from the trained SBERT to train an RL-based classifier. We trained the classifier on a training set of [Formula: see text] images and tested on a separate collection of [Formula: see text] images. For comparison, we also trained and tested a supervised deep learning (SDL) classification network on the same training and testing images with the same labels. Part 1: the trained SBERT model improved from 82% to [Formula: see text] accuracy. Part 2: using Part 1's computed labels, SDL quickly overfitted the small training set. Whereas SDL showed the worst possible testing set accuracy of 50%, RL achieved [Formula: see text] testing set accuracy, with a p-value of [Formula: see text]. We have shown a proof-of-principle application of automated label extraction from radiology reports. Additionally, we have built on prior work applying RL to classification using these labels, extending from 2D slices to entire 3D image volumes. RL has again demonstrated a remarkable ability to train effectively and to generalize from small training sets.
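To make the two-pronged strategy concrete, below is a minimal sketch (not the authors' code) of Part 1's idea: embedding report impressions with sentence-BERT and mapping the embeddings to binary labels. It assumes the sentence-transformers and scikit-learn packages; the checkpoint name, example impressions, and labels are illustrative placeholders, and the paper fine-tunes SBERT on its own 90 reports rather than using frozen embeddings with a simple linear classifier as shown here.

    # Sketch only: frozen SBERT embeddings + logistic regression as a stand-in
    # for the paper's fine-tuned SBERT label extractor.
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    impressions = [
        "No acute intracranial abnormality.",          # placeholder examples
        "Large enhancing mass in the left frontal lobe.",
        "Unremarkable brain MRI.",
        "Metastatic lesion with surrounding vasogenic edema.",
    ]
    labels = [0, 1, 0, 1]  # 0 = normal, 1 = tumor-containing (illustrative)

    sbert = SentenceTransformer("all-MiniLM-L6-v2")  # generic public checkpoint
    embeddings = sbert.encode(impressions)

    clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
    new_report = ["Stable postoperative changes, no residual tumor."]
    print(clf.predict(sbert.encode(new_report)))  # predicted label for a new report

Likewise, a minimal sketch of Part 2's framing of classification as one-step reinforcement learning, assuming PyTorch; the tiny network, hyperparameters, and random tensors standing in for 3D MRI volumes are illustrative, not the paper's architecture. The state is a 3D volume, the action is the predicted class, and the reward is +1 for a correct label and -1 otherwise, so the one-step Q-learning target is simply the reward.

    # Sketch only: classification as a one-step RL problem with epsilon-greedy
    # exploration and a Q-network over 3D volumes.
    import torch
    import torch.nn as nn

    class TinyQNet3D(nn.Module):
        """Small 3D CNN that outputs a Q-value per class (normal / abnormal)."""
        def __init__(self, n_actions=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.head = nn.Linear(16, n_actions)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # Toy data: 20 random volumes of size 32^3 with binary labels (stand-ins).
    volumes = torch.randn(20, 1, 32, 32, 32)
    labels = torch.randint(0, 2, (20,))

    qnet = TinyQNet3D()
    optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    epsilon = 0.3  # exploration rate

    for epoch in range(5):
        for x, y in zip(volumes, labels):
            q = qnet(x.unsqueeze(0)).squeeze(0)
            # epsilon-greedy choice of predicted class (the "action")
            if torch.rand(()) < epsilon:
                action = torch.randint(0, 2, ()).item()
            else:
                action = int(q.argmax())
            reward = 1.0 if action == int(y) else -1.0
            # one-step episode: the Q-learning target is just the reward
            loss = (q[action] - reward) ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

Because each episode ends after a single prediction, there is no bootstrapped next-state term in the target; the exploration and reward shaping are what distinguish this setup from plain supervised cross-entropy training.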