Enhancing Interpretability in Medical Image Classification by Integrating Formal Concept Analysis with Convolutional Neural Networks.

Minal Khatri, Yanbin Yin, Jitender Deogun
Published in: Biomimetics (Basel, Switzerland) (2024)
In this study, we present a novel approach to enhancing the interpretability of medical image classification by integrating formal concept analysis (FCA) with convolutional neural networks (CNNs). While CNNs are increasingly applied in medical diagnoses, understanding their decision-making remains a challenge. Although visualization techniques like saliency maps offer insights into CNNs' decision-making for individual images, they do not explicitly establish a relationship between the high-level features learned by CNNs and the class labels across the entire dataset. To bridge this gap, we leverage the FCA framework as an image classification model, presenting a novel method for understanding the relationship between abstract features and class labels in medical imaging. Building on our previous work, which applied this method to the MNIST handwritten digit dataset and demonstrated performance comparable to that of CNNs, we extend our approach and evaluation to histopathological image datasets, including Warwick-QU and BreakHIS. Our results show that the FCA-based classifier offers accuracy comparable to deep neural classifiers while providing transparency into the classification process, an important factor in clinical decision-making.
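The abstract does not include the authors' implementation, so the sketch below is only a rough illustration of the general idea it describes: building a formal context from thresholded CNN feature activations (objects = images, attributes = binarized features) and classifying a new image from the extent of the concept generated by its attributes. The function names (binarize, classify_by_concept), the fixed threshold, and the majority-vote fallback are illustrative assumptions, not the paper's method.

```python
import numpy as np

def binarize(features, threshold=0.5):
    """Turn real-valued CNN feature activations into a binary formal context
    (rows = images/objects, columns = attributes). Threshold is an assumption."""
    return (features > threshold).astype(int)

def classify_by_concept(train_ctx, train_labels, test_row):
    """Classify a test object via the formal concept generated by its attributes:
    take the extent (training objects sharing all of its attributes) and return
    the majority class label within that extent."""
    attrs = np.where(test_row == 1)[0]
    extent = np.all(train_ctx[:, attrs] == 1, axis=1)
    if not extent.any():
        # No training object shares the full intent; fall back to largest overlap.
        best = train_ctx[:, attrs].sum(axis=1).argmax()
        return train_labels[best]
    labels, counts = np.unique(train_labels[extent], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage with random values standing in for real CNN feature activations.
rng = np.random.default_rng(0)
train_feats, train_labels = rng.random((100, 16)), rng.integers(0, 2, 100)
test_feats = rng.random(16)
ctx = binarize(train_feats)
print(classify_by_concept(ctx, train_labels, binarize(test_feats[None, :])[0]))
```

Because the predicted label comes from an explicit set of shared attributes and the training images that possess them, the decision can be inspected directly, which is the transparency property the abstract highlights.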
Keyphrases
  • deep learning
  • convolutional neural network
  • decision making
  • healthcare
  • machine learning