A new generative approach for optical coherence tomography data scarcity: unpaired mutual conversion between scanning presets.

Mateo Gende, Jose Joaquim De Moura-Ramos, Jorge Novo, Manuel G Penedo, Marcos Ortega
Published in: Medical & biological engineering & computing (2023)
In optical coherence tomography (OCT), there is a trade-off between scanning time and image quality, leading to a scarcity of high-quality data. OCT platforms provide different scanning presets that produce visually distinct images, limiting their compatibility. In this work, a fully automatic methodology for the unpaired visual conversion between the two most prevalent scanning presets is proposed. Using contrastive unpaired translation generative adversarial architectures, low-quality images acquired with the faster Macular Cube preset can be converted to the visual style of high-visibility Seven Lines scans, and vice versa. This modifies the visual appearance of the OCT images generated by each preset while preserving the natural tissue structure. The quality of the original and synthetically generated images was compared using BRISQUE, with the synthetic images achieving scores very similar to those of original images of their target preset. The generative models were also validated in automatic and expert separability tests, demonstrating that they replicate the genuine appearance of the original images. This methodology has the potential to create multi-preset datasets for training robust computer-aided diagnosis systems, exposing them to the visual features of the different presets they may encounter in real clinical scenarios without requiring additional data acquisition.

Graphical Abstract: Unpaired mutual conversion between scanning presets. Two generative adversarial models are trained to convert OCT images into images of the other scanning preset, replicating the visual features that characterise that preset.
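The contrastive unpaired translation approach mentioned in the abstract is typically built around a patchwise InfoNCE loss: features of a patch in the generated image should match the features of the same patch location in the source image (the positive) rather than those of other patches (the negatives). As a rough illustration only, and not the authors' implementation, this idea can be sketched in NumPy with hypothetical feature arrays:

```python
import numpy as np

def patch_nce_loss(feat_src, feat_gen, tau=0.07):
    """Patchwise InfoNCE (contrastive) loss sketch.

    feat_src: (N, D) features of N patches from the source image.
    feat_gen: (N, D) features from the matching patch locations of the
              generated image. Patch i in feat_gen is the positive for
              patch i in feat_src; all other patches act as negatives.
    tau: temperature scaling the cosine similarities.
    """
    # L2-normalise so dot products become cosine similarities.
    feat_src = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    feat_gen = feat_gen / np.linalg.norm(feat_gen, axis=1, keepdims=True)

    # logits[i, j]: similarity of generated patch i to source patch j.
    logits = feat_gen @ feat_src.T / tau

    # Cross-entropy with the matching patch (the diagonal) as the target.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: perfectly matching features should score a lower loss
# than unrelated random features.
rng = np.random.default_rng(0)
f = rng.normal(size=(16, 32))
g = rng.normal(size=(16, 32))
print(patch_nce_loss(f, f), patch_nce_loss(f, g))
```

In the actual method, the feature arrays would come from intermediate layers of the generator's encoder sampled at random spatial locations; the array shapes and function name here are illustrative assumptions.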