Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In this paper, we propose a novel multi-modal medical image fusion method based on a simplified pulse-coupled neural network (PCNN) and the quaternion wavelet transform (QWT). The proposed fusion algorithm can combine not only pairs of computed tomography (CT) and magnetic resonance (MR) images, but also pairs of CT and proton-density-weighted MR images, as well as multispectral MR images such as T1- and T2-weighted scans. Experiments on six pairs of multi-modal medical images compare the proposed scheme with four existing methods. The performance of each method is assessed using the mutual information metric and a comprehensive fusion performance characterization (total fusion performance, fusion loss, and modified fusion artifacts criteria). The experimental results show that the proposed algorithm not only extracts more of the salient visual information from the source images, but also effectively avoids introducing artificial information into the fused images. It significantly outperforms the compared medical image fusion methods in terms of both subjective performance and objective evaluation metrics.
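As a concrete illustration of the evaluation side, the sketch below computes the standard mutual-information (MI) fusion criterion: the information the fused image shares with each source image, summed over both sources (higher is better). This is a minimal NumPy rendering of the generic metric, not the paper's implementation; the function names, the 64-bin histogram default, and the toy images are assumptions made for this sketch.

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 64) -> float:
    """Mutual information (in bits) between two same-sized grayscale images."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal over x
    py = pxy.sum(axis=0, keepdims=True)  # marginal over y
    nz = pxy > 0                         # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(fused: np.ndarray, src_a: np.ndarray, src_b: np.ndarray,
              bins: int = 64) -> float:
    """MI-based fusion quality: MI(F, A) + MI(F, B)."""
    return (mutual_information(fused, src_a, bins)
            + mutual_information(fused, src_b, bins))

# Toy check with random 8-bit images and a naive average "fusion"
# (illustrative stand-ins only, not data from the paper).
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(128, 128)).astype(float)
b = rng.integers(0, 256, size=(128, 128)).astype(float)
print(f"MI fusion score: {fusion_mi(0.5 * (a + b), a, b):.3f}")
```

For a CT/MR pair, the two MI terms quantify how much of each modality's intensity distribution survives in the fused result, which is why the metric is a common choice for comparing fusion schemes.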
Keyphrases
- magnetic resonance
- computed tomography
- neural network
- magnetic resonance imaging
- image quality
- radiation therapy