no code implementations • 14 Feb 2022 • Xupeng Shi, Pengfei Zheng, A. Adam Ding, Yuan Gao, Weizhong Zhang
Modern deep neural networks (DNNs) are vulnerable to adversarial attacks, and adversarial training has been shown to be a promising method for improving the adversarial robustness of DNNs.
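The abstract refers to adversarial training in general; the sketch below is not this paper's method but a minimal illustration of the standard FGSM-style recipe (train on inputs perturbed in the loss-increasing direction), here applied to logistic regression on synthetic data. The dataset, step size, and perturbation budget `eps` are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Two Gaussian classes in 2D, labels in {0, 1}.
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = np.zeros(2), 0.0
eps, lr = 0.3, 0.1  # illustrative perturbation budget and learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    # FGSM step: move each input in the sign of the loss gradient w.r.t. x.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w          # d(logistic loss)/dx
    X_adv = X + eps * np.sign(grad_x)
    # Adversarial training: take the gradient step on the perturbed batch.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * float(np.mean(p_adv - y))

acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
```

Training on the perturbed batch rather than the clean one is what distinguishes adversarial training from ordinary empirical risk minimization.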
1 code implementation • NeurIPS 2020 • Jiaxing Wang, Haoli Bai, Jiaxiang Wu, Xupeng Shi, Junzhou Huang, Irwin King, Michael Lyu, Jian Cheng
Nevertheless, it is unclear how parameter sharing affects the searching process.
no code implementations • 27 Oct 2019 • Xupeng Shi, A. Adam Ding
We show that linear classifiers can be made robust to strong adversarial example attacks in cases where no adversarially robust linear classifiers exist under the previous definition.
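For context on why linear classifiers are a natural test case: a standard fact (not specific to this paper's new definition) is that for a linear classifier f(x) = sign(w·x + b), the smallest l2 perturbation that flips the prediction at x has norm |w·x + b| / ||w||, achieved by stepping onto the decision boundary along w. The weights and input below are illustrative.

```python
import numpy as np

# Illustrative linear classifier f(x) = sign(w @ x + b) and input point.
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 1.0])

margin = w @ x + b                       # signed distance scaled by ||w||
delta = -(margin / (w @ w)) * w          # minimal l2 perturbation onto the boundary
x_adv = x + 1.001 * delta                # step just past the boundary to flip the sign

flipped = np.sign(w @ x + b) != np.sign(w @ x_adv + b)
attack_norm = np.linalg.norm(delta)      # equals |margin| / ||w||
```

This closed-form attack is exactly why classical robustness definitions are so restrictive for linear models: any point within margin distance of the boundary is attackable, which motivates revisiting the definition as the abstract describes.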