
Humans can decipher adversarial images.

Zhenglong Zhou, Chaz Firestone
Published in: Nature Communications (2019)
Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; but beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are "fooled" by adversarial examples: nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine's classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images are robustly related: In 8 experiments on 5 prominent and diverse adversarial image sets, human subjects correctly anticipated the machine's preferred label over relevant foils, even for images described as "totally unrecognizable to human eyes". Human intuition may be a surprisingly reliable guide to machine (mis)classification, with consequences for minds and machines alike.
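For readers unfamiliar with the "seemingly irrelevant image perturbations" the abstract mentions: the paper itself reports human behavioral experiments rather than attack code, but a standard way such perturbations are generated is the fast gradient sign method (FGSM; Goodfellow et al., 2015). The sketch below is illustrative only and is not the method of this paper; the model choice, epsilon value, and tensor shapes are assumptions for the example.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained ImageNet CNN works for illustration; ResNet-18 is an assumption.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.01):
    """Nudge each pixel of `image` by at most `epsilon` in the direction that
    increases the classification loss, so the output label can flip while the
    image looks essentially unchanged. `image` is a 1x3xHxW tensor with values
    in [0, 1]; `label` is the ground-truth class index."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage:
# x = preprocessed_image               # shape (1, 3, 224, 224), values in [0, 1]
# y = torch.tensor([281])              # e.g. an ImageNet class index
# x_adv = fgsm_attack(x, y)
# model(x_adv).argmax(dim=1) may now differ from y, though x_adv looks the same.
```

The key design point, which motivates the paper's question, is that the perturbation is bounded per pixel, so it is nearly invisible to people even though it changes the machine's classification.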
Keyphrases
  • deep learning
  • machine learning
  • convolutional neural network
  • artificial intelligence
  • big data