Are L2 adversarial examples intrinsically different?

28 Feb 2020 · Mingxuan Li, Jingyuan Wang, Yufan Wu

Deep Neural Networks (DNNs) have achieved notable success in various tasks, including many security-critical scenarios. However, a considerable body of work has demonstrated their vulnerability to adversarial attacks. Through theoretical analysis, we unravel properties that intrinsically differentiate adversarial examples from normal inputs: adversarial examples generated by $L_2$ attacks usually have larger input sensitivity, which can be used to identify them efficiently. We also find that examples generated by $L_\infty$ attacks differ enough in the pixel domain to be detected empirically. To verify our analysis, we propose a Guided Complementary Defense (GCD) module integrating detection and recovery processes. Compared with existing adversarial detection methods, our detector achieves a detection AUC above 0.98 against most attacks. Compared with commonly used adversarial training methods and other rectification methods, our guided rectifier outperforms them by a large margin, recovering classification accuracy of up to 99% on MNIST, 89% on CIFAR-10, and 87% on ImageNet subsets against $L_2$ attacks. Furthermore, under the white-box setting, our holistic defense module shows a promising degree of robustness. We thus confirm, both theoretically and empirically, that at least $L_2$ adversarial examples are intrinsically different from normal inputs, and we shed light on designing simple yet effective defenses that exploit these properties.
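The detection criterion hinted at in the abstract, that $L_2$ adversarial examples exhibit larger input sensitivity, admits a simple illustration. The sketch below (not the paper's GCD implementation) measures input sensitivity as the $L_2$ norm of the loss gradient with respect to the input; the model, the use of predicted labels, and the threshold calibration are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def input_sensitivity(model, x, y):
    """Return the L2 norm of the loss gradient w.r.t. each input.

    Larger values indicate the input lies in a steeper region of the
    loss surface, which the paper associates with L2 adversarial
    examples. `y` holds the labels used to evaluate the loss.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return grad.flatten(start_dim=1).norm(p=2, dim=1)

def detect_adversarial(model, x, threshold):
    """Flag inputs whose sensitivity exceeds a threshold.

    At detection time the true label is unknown, so the model's own
    prediction is used. `threshold` is a hypothetical value, e.g.
    calibrated as a high percentile of sensitivities on clean
    validation data; the paper does not prescribe this procedure.
    """
    y_pred = model(x).argmax(dim=1)
    return input_sensitivity(model, x, y_pred) > threshold
```

A natural way to use such a detector is as a cheap first-stage filter: inputs flagged as high-sensitivity are routed to a recovery step (in the paper, the guided rectifier), while low-sensitivity inputs pass through to the classifier unchanged.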
