Classification of Computed Tomography Images in Different Slice Positions Using Deep Learning
Hiroyuki Sugimori
Published in: Journal of Healthcare Engineering (2018)
This study aimed to elucidate the relationship between the number of computed tomography (CT) images used for training and the accuracy of models for classifying those images, including the effect of contrast enhancement. We enrolled 1539 patients who underwent contrast-enhanced or noncontrast CT imaging and divided the CT dataset into 10 classes covering the brain, neck, chest, abdomen, and pelvis, each with contrast-enhanced and plain imaging. The number of images prepared per class was 100, 500, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, and 10,000; the corresponding datasets were named 0.1K, 0.5K, 1K, 2K, 3K, 4K, 5K, 6K, 7K, 8K, 9K, and 10K, respectively. We then created and evaluated models using two convolutional neural network (CNN) architectures, AlexNet and GoogLeNet. Training AlexNet required less time than training GoogLeNet. The best overall accuracy for the 10-class classification was 0.721, achieved with the 10K dataset on GoogLeNet, whereas the best overall accuracy for classifying slice position without contrast media was 0.862, achieved with the 2K dataset on AlexNet.
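As a minimal sketch of the dataset organization the abstract describes (10 classes from five body regions with and without contrast, and size-based dataset names), the following Python enumerates the classes and maps images-per-class counts to labels. The class names and the `dataset_name` helper are illustrative assumptions, not the paper's actual code.

```python
# Sketch of the dataset layout from the abstract: 5 regions x 2 phases = 10 classes.
# Naming conventions here are assumptions for illustration only.

regions = ["brain", "neck", "chest", "abdomen", "pelvis"]
classes = [f"{r}_{phase}" for r in regions for phase in ("contrast", "plain")]

counts = [100, 500, 1000, 2000, 3000, 4000, 5000,
          6000, 7000, 8000, 9000, 10000]

def dataset_name(n: int) -> str:
    """Map an images-per-class count to the paper's dataset label, e.g. 100 -> '0.1K'."""
    k = n / 1000
    return f"{int(k)}K" if k == int(k) else f"{k}K"

names = [dataset_name(n) for n in counts]
print(len(classes))  # 10
print(names[0], names[-1])
```

This yields the 12 dataset labels 0.1K through 10K used in the study's comparisons.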
Keyphrases
- deep learning
- contrast enhanced
- convolutional neural network
- computed tomography
- dual energy
- magnetic resonance imaging
- diffusion weighted
- magnetic resonance
- image quality
- artificial intelligence
- high resolution
- diffusion weighted imaging
- positron emission tomography
- machine learning