Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence.
Peter M Maloca, Philipp L Müller, Aaron Y Lee, Adnan Tufail, Konstantinos Balaskas, Stephanie Niklaus, Pascal Kaiser, Susanne Suter, Javier Zarranz-Ventura, Catherine Egan, Hendrik P N Scholl, Tobias K Schnitzer, Thomas Singer, Pascal W Hasler, Nora Denk
Published in: Communications Biology (2021)
Machine learning has greatly facilitated the analysis of medical data, although its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among the human graders themselves. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network struck a balance among the graders and allowed its predictions to be modified depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
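As a hedged illustration (not the authors' published code), the Hamming distance between two segmentation masks can be read as the fraction of pixels on which two raters disagree; the minimal Python sketch below, assuming binary NumPy masks of equal shape and hypothetical rater names, shows how pairwise variability among graders and an algorithm might be tabulated.

import numpy as np
from itertools import combinations

def hamming_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of pixels on which two segmentation masks disagree."""
    assert mask_a.shape == mask_b.shape, "masks must share the same shape"
    return float(np.mean(mask_a != mask_b))

def pairwise_variability(masks: dict) -> dict:
    """Hamming distance for every pair of raters (human graders or algorithm)."""
    return {
        (a, b): hamming_distance(masks[a], masks[b])
        for a, b in combinations(masks, 2)
    }

# Hypothetical example: three graders and one algorithm on a 4x4 binary mask.
rng = np.random.default_rng(0)
masks = {
    name: rng.integers(0, 2, size=(4, 4))
    for name in ("grader1", "grader2", "grader3", "algorithm")
}
for pair, dist in pairwise_variability(masks).items():
    print(pair, f"{100 * dist:.2f}%")

Averaging the grader-versus-algorithm pairs separately from the grader-versus-grader pairs would yield summary figures comparable in spirit to the 1.75% and 2.02% variabilities reported above.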