no code implementations • 14 Feb 2022 • Xupeng Shi, Pengfei Zheng, A. Adam Ding, Yuan Gao, Weizhong Zhang
Modern deep neural networks (DNNs) are vulnerable to adversarial attacks, and adversarial training has been shown to be a promising method for improving the adversarial robustness of DNNs.
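As background for this entry, adversarial training replaces clean training inputs with perturbed ones crafted against the current model. A minimal sketch, assuming FGSM-style perturbations and a simple logistic-regression model (the paper's actual architecture and attack are not specified here; all function names below are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method: move x in the direction that increases
    the logistic loss, bounded by eps in the L-infinity norm."""
    # d(loss)/dx for logistic loss with label y in {0, 1} is (p - y) * w
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Train on adversarially perturbed inputs instead of clean ones."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        # Craft adversarial examples against the current model parameters.
        X_adv = np.stack([fgsm_perturb(x, yi, w, b, eps)
                          for x, yi in zip(X, y)])
        p = sigmoid(X_adv @ w + b)
        # Standard logistic-regression gradient step on the perturbed batch.
        w -= lr * X_adv.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b
```

The inner loop is the key idea: the attack is re-run every epoch against the current parameters, so the model learns to classify correctly inside an eps-ball around each training point.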
no code implementations • 5 Feb 2022 • Guanhong Miao, A. Adam Ding, Samuel S. Wu
Secure collaborative learning becomes significantly more difficult in the presence of malicious adversaries, who may deviate from the secure protocol.
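For context, a common building block in secure collaborative learning is additive secret sharing, which lets parties compute on values no single party can see. A minimal sketch of that primitive, assuming the semi-honest setting (the paper concerns malicious adversaries, which require additional integrity checks not shown here; this is illustrative background, not the paper's protocol):

```python
import random

PRIME = 2_147_483_647  # all arithmetic is done in the field mod this prime

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod PRIME.
    Any subset of n-1 shares is uniformly random and reveals nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod PRIME."""
    return sum(shares) % PRIME

def add_shared(shares_a, shares_b):
    """Each party adds its two local shares; no communication is needed.
    Reconstructing the result yields a + b without exposing a or b."""
    return [(sa + sb) % PRIME for sa, sb in zip(shares_a, shares_b)]
```

A malicious party could submit an inconsistent share; protocols secure against such adversaries add mechanisms (e.g. authenticated shares) to detect deviation, which is the harder setting the paper addresses.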
no code implementations • 27 Oct 2019 • Xupeng Shi, A. Adam Ding
We show that linear classifiers can be made robust to strong adversarial example attacks in cases where no adversarially robust linear classifier exists under the previous definition.
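For a linear classifier, pointwise robustness has a closed form: the prediction sign(w·x + b) survives every L-infinity perturbation of size at most eps exactly when the margin |w·x + b| exceeds eps·‖w‖₁, the largest change such a perturbation can induce. A small sketch of that check, assuming the L-infinity threat model (the paper's own robustness definition may differ; the function name is illustrative):

```python
import numpy as np

def is_robust_at(x, w, b, eps):
    """Return True if sign(w.x + b) is unchanged by every perturbation
    delta with ||delta||_inf <= eps. The attacker's best move shifts the
    score by at most eps * ||w||_1 (achieved by delta = -eps*sign(w))."""
    margin = abs(np.dot(w, x) + b)
    worst_case_shift = eps * np.sum(np.abs(w))
    return margin > worst_case_shift
```

With w = (1, 1) and b = 0, the point (2, 2) has margin 4 and is robust at eps = 0.5 (worst-case shift 1), while (0.2, 0.2) has margin 0.4 and is not.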