Universal adversarial attacks on deep neural networks for medical image classification.

Hokuto Hirano, Akinori Minagi, Kazuhiro Takemoto
Published in: BMC Medical Imaging (2021)
Contrary to previous assumptions, the results indicate that DNN-based clinical diagnosis is easier to deceive with adversarial attacks than expected. Adversaries can cause misdiagnoses at lower cost (e.g., without needing to account for the data distribution); moreover, they can steer the diagnosis outcome. The effects of adversarial defenses may be limited. Our findings emphasize that more careful consideration is required when developing DNNs for medical imaging and deploying them in practice.
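The low attack cost noted above stems from the perturbation being *universal*: a single, input-agnostic noise pattern is reused on every image, so no per-image optimization is needed at attack time. A minimal sketch of applying such a perturbation (the function name, the NumPy-based setup, and the L-infinity budget `eps` are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def apply_universal_perturbation(images, delta, eps=0.04, lo=0.0, hi=1.0):
    """Add one fixed (input-agnostic) perturbation to every image.

    A universal adversarial perturbation is a single array `delta`,
    bounded by an L-infinity budget `eps`, added unchanged to any
    input; the same noise pattern attacks the entire dataset.
    """
    delta = np.clip(delta, -eps, eps)       # enforce the norm budget
    return np.clip(images + delta, lo, hi)  # stay in the valid pixel range

# Toy usage: one perturbation reused across a whole batch.
rng = np.random.default_rng(0)
batch = rng.random((4, 32, 32, 3))            # stand-in for medical images in [0, 1]
delta = rng.uniform(-0.1, 0.1, (32, 32, 3))   # stand-in universal perturbation
adv = apply_universal_perturbation(batch, delta)
```

Because `delta` is clipped before being added and the images already lie in the valid range, each adversarial image differs from its original by at most `eps` per pixel, which is what makes such perturbations hard to notice visually.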
Keyphrases
  • neural network
  • deep learning
  • healthcare
  • machine learning
  • high resolution
  • electronic health record
  • big data
  • artificial intelligence
  • photodynamic therapy