Endoscopic optical coherence tomography (OCT) can non-invasively image internal lumens, but it is prone to saturation artifacts caused by strongly reflective structures. In this study, we introduce ATN-Res2Unet, a deep learning network designed to suppress saturation artifacts in endoscopic OCT images by integrating multi-scale perception, multi-attention mechanisms, and frequency-domain filters. Because ground truth is difficult to obtain in endoscopic OCT, we also propose a method for constructing training data pairs. In vivo experiments confirm that ATN-Res2Unet reduces diverse artifacts while preserving structural information. Compared with prior studies, average quantitative indicators improved by 45.4-83.8%. To our knowledge, this is the first study to apply deep learning to artifact removal in endoscopic OCT images, and it shows considerable potential for clinical applications.
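The abstract names frequency-domain filters as one component of the network. As a rough, hypothetical sketch only (not the authors' ATN-Res2Unet design, whose filter parameters are not given here), a plain 2-D FFT low-pass applied to a synthetic B-scan with a simulated saturation streak might look like:

```python
import numpy as np

def frequency_domain_filter(bscan, keep_ratio=0.25):
    """Low-pass filter a B-scan in the 2-D frequency domain.

    Hypothetical illustration: keeps only the central band of spatial
    frequencies, which attenuates narrow, high-frequency saturation
    streaks. The actual filter design in the paper may differ.
    """
    f = np.fft.fftshift(np.fft.fft2(bscan))
    rows, cols = bscan.shape
    mask = np.zeros(f.shape)
    r, c = int(rows * keep_ratio), int(cols * keep_ratio)
    mask[rows // 2 - r: rows // 2 + r, cols // 2 - c: cols // 2 + c] = 1.0
    # Inverse transform; imaginary residue is numerical noise only.
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real

# Synthetic B-scan: low-level speckle plus a bright 2-pixel-wide
# vertical streak mimicking a saturation artifact.
rng = np.random.default_rng(0)
bscan = rng.normal(0.2, 0.05, (128, 128))
bscan[:, 60:62] = 1.0  # simulated saturation artifact
out = frequency_domain_filter(bscan)
```

In this toy example the streak's mean intensity drops after filtering because its energy lies mostly in high horizontal frequencies; the paper's contribution is learning such suppression jointly with attention and multi-scale features rather than hand-tuning a mask.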
Keyphrases
- deep learning
- optical coherence tomography
- convolutional neural network
- artificial intelligence
- image quality
- machine learning
- high resolution