Stress testing reveals gaps in clinic readiness of image-based diagnostic artificial intelligence models.

Albert T Young, Kristen Fernandez, Jacob Pfau, Rasika Reddy, Nhat Anh Cao, Max Y von Franque, Arjun Johal, Benjamin V Wu, Rachel R Wu, Jennifer Y Chen, Raj P Fadadu, Juan A Vasquez, Andrew Tam, Michael J Keiser, Maria L Wei
Published in: npj Digital Medicine (2021)
Artificial intelligence models match or exceed dermatologists in melanoma image classification. Less is known about their robustness against real-world variations, and clinicians may incorrectly assume that a model with an acceptable area under the receiver operating characteristic curve or related performance metric is ready for clinical use. Here, we systematically assessed the performance of dermatologist-level convolutional neural networks (CNNs) on real-world non-curated images by applying computational "stress tests". Our goal was to create a proxy environment in which to comprehensively test the generalizability of off-the-shelf CNNs developed without training or evaluation protocols specific to individual clinics. We found inconsistent predictions on images captured repeatedly in the same setting or subjected to simple transformations (e.g., rotation). Such transformations resulted in false positive or negative predictions for 6.5-22% of skin lesions across test datasets. Our findings indicate that models meeting conventionally reported metrics need further validation with computational stress tests to assess clinic readiness.
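The consistency check described above can be sketched as a small harness: apply simple transformations (here, the four 90-degree rotations) to each lesion image and count how often the binary malignant/benign call flips. This is a minimal illustration of the idea, not the authors' pipeline; `toy_predict` is a hypothetical stand-in for a trained CNN, chosen to be deliberately rotation-sensitive so the flip rate is nonzero.

```python
import numpy as np

def rotations(image):
    """Yield the four 90-degree rotations of an H x W x C image."""
    for k in range(4):
        yield np.rot90(image, k)

def flip_rate(predict, images, threshold=0.5):
    """Fraction of images whose malignant/benign call changes under
    rotation -- one simple computational 'stress test'."""
    flipped = 0
    for img in images:
        calls = {predict(view) >= threshold for view in rotations(img)}
        if len(calls) > 1:  # both True and False occurred
            flipped += 1
    return flipped / len(images)

# Hypothetical stand-in for a CNN: scores a lesion by the mean
# intensity of the upper-left quadrant, so rotating the image
# changes its output (a rotation-invariant model would not flip).
def toy_predict(img):
    h, w = img.shape[:2]
    return img[: h // 2, : w // 2].mean()

rng = np.random.default_rng(0)
images = [rng.random((8, 8, 3)) for _ in range(50)]
print(f"flip rate under rotation: {flip_rate(toy_predict, images):.2f}")
```

A model whose score is exactly rotation-invariant (e.g., one that only depends on the global intensity histogram) would score a flip rate of 0 on this test; the paper's finding is that real dermatology CNNs do not, flipping on 6.5-22% of lesions.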
Keyphrases
  • deep learning
  • artificial intelligence
  • convolutional neural network
  • big data
  • machine learning
  • primary care