A survey on the interpretability of deep learning in medical diagnosis.

Qiaoying Teng, Zhe Liu, Yuqing Song, Kai Han, Yang Lu
Published in: Multimedia Systems (2022)
Deep learning has demonstrated remarkable performance in the medical domain, with accuracy that rivals or even exceeds that of human experts. However, these models are "black-box" structures: opaque, non-intuitive, and difficult for people to understand. This lack of interpretability, trust, and transparency creates a barrier to the application of deep learning models in clinical practice. To overcome this problem, several studies on interpretability have been proposed. In this paper, we therefore comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, covering common interpretability methods used in the medical domain, various applications of interpretability for disease diagnosis, prevalent evaluation metrics, and several disease datasets. We also discuss the challenges of interpretability and future research directions. To the best of our knowledge, this is the first survey to summarize the various applications of interpretability methods for disease diagnosis.
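As a hedged illustration of the kind of interpretability method such surveys cover (this sketch is not taken from the paper itself), occlusion sensitivity is a common model-agnostic technique: patches of the input are masked one at a time, and the drop in the model's prediction score marks which regions the model relied on. The `toy_model` below is a stand-in for a real diagnostic classifier.

```python
import numpy as np

def occlusion_saliency(model, image, patch=2, baseline=0.0):
    """Occlusion sensitivity: mask each patch with a baseline value and
    record the drop in the model's score. A larger drop means the
    occluded region mattered more to the prediction."""
    base_score = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

# Toy stand-in "classifier": its score is the mean intensity of the
# top-left quadrant, so occluding that region should dominate the map.
def toy_model(img):
    return img[:4, :4].mean()

img = np.ones((8, 8))
heat = occlusion_saliency(toy_model, img)
```

On this toy input, only the patches inside the top-left quadrant produce a nonzero score drop, which is exactly the heatmap behavior a clinician would inspect when such maps are overlaid on a medical image.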
Keyphrases
  • deep learning
  • healthcare
  • artificial intelligence
  • convolutional neural network
  • clinical practice
  • machine learning
  • systematic review
  • high resolution
  • transcription factor
  • mass spectrometry
  • current status