1 code implementation • 25 Oct 2022 • Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of AI-controlled systems.
1 code implementation • 24 Oct 2022 • Farhan Ahmed, Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved into an ongoing arms race between defenders, who seek to increase the robustness of ML models against adversarial attacks, and adversaries, who seek to develop stronger attacks capable of weakening or defeating these defenses.
1 code implementation • 21 Feb 2022 • Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
On CIFAR-10, RRM trains a robust model $\sim 1.8\times$ faster than the state-of-the-art.
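RRM here stands for Robust Representation Matching. A minimal sketch of the general representation-matching idea, assuming a setup in which a student is trained on clean inputs to match the penultimate-layer features of a pre-trained robust teacher; the MSE matching term and the `alpha` weight are assumptions, not the paper's exact loss:

```python
import torch.nn.functional as F

def rrm_style_loss(student_feats, teacher_feats, student_logits, labels, alpha=1.0):
    """Standard cross-entropy plus a term pulling the student's
    penultimate features toward those of a robust teacher.
    The MSE form and `alpha` weight are assumed, not the paper's exact loss."""
    ce = F.cross_entropy(student_logits, labels)
    match = F.mse_loss(student_feats, teacher_feats.detach())  # teacher stays frozen
    return ce + alpha * match
```

Because the student sees only clean inputs, each step avoids the inner adversarial-example search of standard adversarial training, which is one plausible source of the reported speedup.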
no code implementations • 27 Nov 2019 • Pratik Vaishnavi, Tianji Cong, Kevin Eykholt, Atul Prakash, Amir Rahmati
Focusing on the observation that discrete pixelization in MNIST makes the background completely black and the foreground completely white, we hypothesize that the property important for increasing robustness is the elimination of the image background using attention masks before classifying an object.
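A minimal sketch of this background-elimination step, assuming a binary attention mask is already available; the `attention_net` and `classifier` in the usage comment are hypothetical placeholders, not the paper's models:

```python
import torch

def mask_background(images: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """Zero out background pixels so only the foreground object
    is passed to the classifier.

    images: (N, 1, 28, 28) MNIST batch in [0, 1]
    masks:  (N, 1, 28, 28) binary masks, 1 = foreground, 0 = background
    """
    return images * masks

# Hypothetical usage (placeholder models):
#   masks = (attention_net(images) > 0.5).float()
#   logits = classifier(mask_background(images, masks))
```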
no code implementations • 12 Sep 2019 • Pratik Vaishnavi, Kevin Eykholt, Atul Prakash, Amir Rahmati
Numerous techniques have been proposed to harden machine learning algorithms and mitigate the effect of adversarial attacks.
no code implementations • 26 May 2019 • Kevin Eykholt, Swati Gupta, Atul Prakash, Amir Rahmati, Pratik Vaishnavi, Haizhong Zheng
Existing deep neural networks, such as those used for image classification, have been shown to be vulnerable to adversarial images that cause misclassification without any perceptible change to the image.
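As an illustration of how such imperceptible adversarial images are commonly generated, here is a sketch of the standard fast gradient sign method (FGSM); this is a generic textbook attack, not necessarily the one studied in the paper:

```python
import torch
import torch.nn.functional as F

def fgsm(model, images: torch.Tensor, labels: torch.Tensor, eps: float = 8 / 255):
    """Perturb each pixel by +/- eps in the direction that
    increases the classification loss (Goodfellow et al.)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # An eps of 8/255 is typically imperceptible to humans,
    # yet often enough to flip the model's prediction.
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()
```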