
ELAA: An Ensemble-Learning-Based Adversarial Attack Targeting Image-Classification Model

Zhongwang Fu, Xiaohui Cui
Published in: Entropy (Basel, Switzerland) (2023)
Research on adversarial attacks against image-classification models is crucial in the realm of artificial intelligence (AI) security. Most image-classification adversarial attack methods are designed for white-box settings, demanding the target model's gradients and network architecture, which is less practical in real-world cases. Black-box adversarial attacks are immune to these limitations, and reinforcement learning (RL) appears to be a feasible way to explore an optimized evasion policy. Unfortunately, existing RL-based works achieve lower attack success rates than expected. In light of these challenges, we propose an ensemble-learning-based adversarial attack (ELAA) targeting image-classification models, which aggregates and optimizes multiple RL base learners and further reveals the vulnerabilities of learning-based image-classification models. Experimental results show that the attack success rate of the ensemble model is about 35% higher than that of a single model, and that the attack success rate of ELAA is 15% higher than those of the baseline methods.
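The abstract only describes the approach at a high level: several RL-based black-box attackers serve as base learners, and an ensemble aggregates their outputs. As an illustration of that general aggregation idea only, and not of ELAA itself (whose details are not given here), the following minimal Python sketch uses hypothetical toy components: a stand-in black-box target_model, a RandomBaseAttacker that proposes bounded perturbations in place of a trained RL policy, and an ensemble_attack function that keeps the lowest-distortion successful proposal.

```python
import numpy as np

# Hypothetical sketch of query-and-aggregate black-box attacking.
# Each "base learner" proposes a perturbed image; the ensemble keeps the
# proposal that the black-box target model misclassifies with the smallest
# L2 distortion. This illustrates the aggregation structure only; it is
# NOT the ELAA algorithm from the paper.

rng = np.random.default_rng(0)

def target_model(x):
    """Stand-in black-box classifier: a toy rule returning a label 0 or 1."""
    return int(x.mean() > 0.5)

class RandomBaseAttacker:
    """Toy base learner proposing a bounded random perturbation
    (a trained RL policy would take this role in an RL-based attack)."""
    def __init__(self, epsilon):
        self.epsilon = epsilon

    def propose(self, x):
        delta = rng.uniform(-self.epsilon, self.epsilon, size=x.shape)
        return np.clip(x + delta, 0.0, 1.0)

def ensemble_attack(x, true_label, attackers):
    """Query the target model on each base learner's proposal and return the
    successful adversarial example with the smallest distortion, if any."""
    best, best_dist = None, np.inf
    for attacker in attackers:
        x_adv = attacker.propose(x)
        if target_model(x_adv) != true_label:
            dist = np.linalg.norm(x_adv - x)
            if dist < best_dist:
                best, best_dist = x_adv, dist
    return best

image = rng.uniform(0.0, 1.0, size=(8, 8))
label = target_model(image)
adv = ensemble_attack(image, label, [RandomBaseAttacker(e) for e in (0.05, 0.1, 0.2)])
print("attack succeeded" if adv is not None else "attack failed")
```

In the paper's setting, the base learners are RL agents and their aggregation is itself optimized rather than a simple best-of selection; the sketch only conveys the general structure of querying a black-box model with multiple base learners and combining their results.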
Keyphrases
  • deep learning
  • artificial intelligence
  • convolutional neural network
  • machine learning
  • big data
  • neural network