Adversarial attack on BC classification for scale-free networks.

Qi Xuan, Yalu Shan, Jinhuan Wang, Zhong-Yuan Ruan, Guanrong Chen
Published in: Chaos (Woodbury, N.Y.) (2020)
Adversarial attacks have recently alarmed the artificial intelligence community, since many machine learning algorithms have been found vulnerable to malicious manipulation. This paper studies adversarial attacks on the Broido-Clauset (BC) classification for scale-free networks, testing its robustness in terms of statistical measures. In addition to the well-known random link rewiring (RLR) attack, two heuristic attacks are formulated and simulated: degree-addition-based link rewiring (DALR) and degree-interval-based link rewiring (DILR). These three strategies are applied to attack a number of strong scale-free networks of various sizes generated from the Barabási-Albert model and the uncorrelated configuration model. It is found that both DALR and DILR are more effective than RLR, in the sense that rewiring fewer links suffices for the same attack to succeed. However, DILR is as concealed as RLR, in the sense that both introduce only relatively small changes to several typical structural properties, such as the average shortest path length, the average clustering coefficient, the average diagonal distance, and the Kolmogorov-Smirnov statistic of the degree distribution. The results suggest that, in view of such adversarial attack effects, one must be very careful when classifying a network as scale-free.
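
To make the attack setting concrete, below is a minimal sketch of a random link rewiring (RLR) style attack on a Barabási-Albert network. It assumes networkx and scipy; the helper name rlr_attack and all parameter values are illustrative, and the Kolmogorov-Smirnov comparison between pre- and post-attack degree sequences is only a stand-in for the full Broido-Clauset power-law classification pipeline used in the paper.

```python
# Sketch: random link rewiring attack on a BA network, with simple
# effectiveness/concealment indicators (assumes networkx and scipy).
import random

import networkx as nx
from scipy.stats import ks_2samp


def rlr_attack(G, n_rewire, seed=None):
    """Remove n_rewire randomly chosen links and reconnect random non-adjacent node pairs."""
    rng = random.Random(seed)
    G = G.copy()
    nodes = list(G.nodes())
    for _ in range(n_rewire):
        u, v = rng.choice(list(G.edges()))
        G.remove_edge(u, v)
        # draw a new, currently absent link between two distinct nodes
        while True:
            a, b = rng.sample(nodes, 2)
            if not G.has_edge(a, b):
                G.add_edge(a, b)
                break
    return G


def apl(G):
    """Average shortest path length on the largest component (rewiring may disconnect G)."""
    H = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_shortest_path_length(H)


if __name__ == "__main__":
    G0 = nx.barabasi_albert_graph(n=1000, m=3, seed=42)  # strong scale-free network
    G1 = rlr_attack(G0, n_rewire=300, seed=42)            # rewire ~10% of the links

    deg0 = [d for _, d in G0.degree()]
    deg1 = [d for _, d in G1.degree()]
    ks_stat, _ = ks_2samp(deg0, deg1)

    print(f"KS statistic (pre vs. post degree sequences): {ks_stat:.3f}")
    print(f"avg clustering: {nx.average_clustering(G0):.4f} -> {nx.average_clustering(G1):.4f}")
    print(f"avg shortest path: {apl(G0):.3f} -> {apl(G1):.3f}")
```

In this toy setup, a large KS statistic indicates the degree distribution has drifted away from its original form (the attack "succeeds"), while small shifts in clustering and path length indicate the attack stays concealed; the paper's DALR and DILR heuristics choose which links to rewire rather than picking them uniformly at random.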
Keyphrases
  • machine learning
  • artificial intelligence