Distributionally Adversarial Attack

16 Aug 2018  ·  Tianhang Zheng, Changyou Chen, Kui Ren

Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that classifiers adversarially trained with PGD are robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution $p(\mathbf{x})$, typically in the form of risk maximization/minimization, e.g., $\max/\min \mathbb{E}_{p(\mathbf{x})}\mathcal{L}(\mathbf{x})$, with $p(\mathbf{x})$ some unknown data distribution and $\mathcal{L}(\cdot)$ a loss function. However, since PGD generates attack samples independently for each data sample based on $\mathcal{L}(\cdot)$, the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we address this issue by proposing distributionally adversarial attack (DAA), a framework that solves for an optimal {\em adversarial-data distribution}: a perturbed distribution that satisfies the $L_\infty$ constraint but deviates from the original data distribution so as to maximize the generalization risk. Algorithmically, DAA performs optimization on the space of potential data distributions, which introduces direct dependency between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by {\em MIT MadryLab}. Notably, DAA ranks {\em first} on MadryLab's white-box leaderboards, reducing the accuracy of their secret MNIST model to $88.79\%$ (with $l_\infty$ perturbations of $\epsilon = 0.3$) and the accuracy of their secret CIFAR model to $44.71\%$ (with $l_\infty$ perturbations of $\epsilon = 8.0$). Code for the experiments is released at \url{https://github.com/tianzheng4/Distributionally-Adversarial-Attack}.
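To make the distribution-level idea concrete, below is a minimal sketch in PyTorch of an $L_\infty$-bounded, PGD-style attack whose update direction also includes a kernel-based coupling term across the batch, standing in for the dependency between data points that DAA introduces. The function name daa_like_attack, the RBF coupling term, and all hyperparameters are illustrative assumptions, not the authors' algorithm; the actual DAA update follows the distributional objective in the paper and the released code.

    import torch
    import torch.nn.functional as F

    def daa_like_attack(model, x, y, eps=0.3, alpha=0.01, steps=40, lam=0.1):
        """L_inf-bounded, PGD-style attack whose update adds an illustrative
        kernel coupling over the batch (a stand-in for DAA's distributional term)."""
        # Random start inside the L_inf ball, clipped to the valid pixel range.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()

        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]

            # Hypothetical coupling term: an RBF kernel over flattened samples makes
            # every update depend on the whole batch, mimicking optimization over a
            # perturbed *distribution* rather than over isolated data points.
            flat = x_adv.detach().flatten(1)              # (B, D)
            sq_dists = torch.cdist(flat, flat).pow(2)     # (B, B) pairwise squared distances
            bandwidth = sq_dists.median() + 1e-12         # median heuristic
            kernel = torch.exp(-sq_dists / bandwidth)     # RBF kernel weights
            # Push each sample away from its kernel-weighted neighbours, spreading
            # the adversarial samples apart in input space.
            diffs = flat.unsqueeze(1) - flat.unsqueeze(0) # (B, B, D)
            repulsion = (kernel.unsqueeze(-1) * diffs).sum(dim=1).view_as(x_adv)

            direction = grad + lam * repulsion
            x_adv = x_adv.detach() + alpha * direction.sign()
            # Project back onto the L_inf ball around x and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

        return x_adv.detach()

On MNIST, a call such as x_adv = daa_like_attack(model, x, y, eps=0.3) would match the $\epsilon = 0.3$ budget quoted above; with lam=0 the sketch reduces to standard PGD with a random start.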
