A Deep Learning Framework for Segmenting Brain Tumors Using MRI and Synthetically Generated CT Images.
Kh Tohidul Islam, Sudanthi Wijewickrema, Stephen J. O'Leary
Published in: Sensors (Basel, Switzerland) (2022)
Multi-modal three-dimensional (3-D) image segmentation is used in many medical applications, such as disease diagnosis, treatment planning, and image-guided surgery. Although multi-modal images provide information that no single modality can provide alone, integrating this information for use in segmentation is challenging. Numerous methods have been introduced in recent years to address multi-modal medical image segmentation. In this paper, we propose a solution for the task of brain tumor segmentation. To this end, we first introduce a method of enhancing an existing magnetic resonance imaging (MRI) dataset by generating synthetic computed tomography (CT) images. We then systematically optimize a convolutional neural network (CNN) architecture on this enhanced dataset to customize it for our task. Using publicly available datasets, we show that the proposed method outperforms similar existing methods.
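The abstract does not specify the network architecture or the CT-synthesis model, so the following is only a minimal, illustrative sketch of the general idea it describes: fusing an MRI volume with its synthetically generated CT counterpart as input channels for a segmentation model. The function names and the stand-in "segmenter" (a simple channel average with thresholding in place of a trained CNN) are assumptions, not the authors' method.

```python
import numpy as np

def stack_modalities(mri, synth_ct):
    """Stack an MRI slice and its synthetic-CT counterpart as input channels.

    Both inputs are assumed to be spatially co-registered arrays of the
    same shape; the result has shape (2, H, W), channels-first.
    """
    assert mri.shape == synth_ct.shape, "modalities must be co-registered"
    return np.stack([mri, synth_ct], axis=0)

def toy_segment(x, threshold=0.5):
    """Stand-in for a trained CNN: fuse channels, normalize, threshold.

    A real pipeline would pass the stacked channels through a trained
    segmentation network; here we only demonstrate the data flow.
    """
    fused = x.mean(axis=0)                       # naive channel fusion
    lo, hi = fused.min(), fused.max()
    prob = (fused - lo) / (hi - lo + 1e-8)       # scale to [0, 1]
    return (prob > threshold).astype(np.uint8)   # binary tumor mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mri = rng.random((64, 64))       # placeholder MRI slice
    ct = rng.random((64, 64))        # placeholder synthetic-CT slice
    mask = toy_segment(stack_modalities(mri, ct))
    print(mask.shape, mask.dtype)
```

In practice, the channel-stacking step is the only part that carries over directly: multi-modal fusion at the input layer lets a single CNN learn cross-modality features without architectural changes.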
Keyphrases
- deep learning
- convolutional neural network
- contrast enhanced
- magnetic resonance imaging
- computed tomography
- artificial intelligence
- dual energy
- image quality
- healthcare
- diffusion weighted imaging
- positron emission tomography
- magnetic resonance
- machine learning
- minimally invasive
- health information