1 code implementation • 13 Dec 2022 • Amin Ghiasi, Hamid Kazemi, Eitan Borgnia, Steven Reich, Manli Shu, Micah Goldblum, Andrew Gordon Wilson, Tom Goldstein
In addition, we show that ViTs maintain spatial information in all layers except the final layer.
no code implementations • NeurIPS 2023 • Amin Ghiasi, Ali Shafahi, Reza Ardekani
We propose adaptive weight decay, which automatically tunes the weight-decay hyperparameter during each training iteration.
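The core idea can be sketched as follows: instead of a fixed weight-decay coefficient, the coefficient is re-derived every step from the current gradient and weight norms. This is a minimal illustrative sketch, not the paper's exact update rule; the function name and the specific norm-ratio formula are assumptions.

```python
import numpy as np

def adaptive_weight_decay_step(w, grad_loss, base_coeff=0.01, lr=0.1, eps=1e-12):
    """One SGD step in which the weight-decay coefficient is re-tuned
    each iteration from the gradient/weight norm ratio (illustrative
    sketch; not the paper's exact rule)."""
    # Decay strength adapts to the current optimization state.
    lam = base_coeff * np.linalg.norm(grad_loss) / (np.linalg.norm(w) + eps)
    grad = grad_loss + lam * w
    return w - lr * grad, lam
```

Because `lam` is recomputed per iteration, no separate hyperparameter sweep over the decay coefficient is needed, which is the motivation the abstract describes.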
1 code implementation • 31 Jan 2022 • Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, Tom Goldstein
Existing techniques for model inversion typically rely on hard-to-tune regularizers, such as total variation or feature regularization, which must be individually calibrated for each network in order to produce adequate images.
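As a point of reference for the regularizers the abstract mentions, total variation is a smoothness penalty on the reconstructed image: the sum of absolute differences between neighboring pixels. A minimal anisotropic version, written here for a 2-D grayscale array:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2-D image: sum of absolute
    differences between vertically and horizontally adjacent pixels.
    Model-inversion pipelines often add a weighted version of this
    penalty to the loss to encourage smooth reconstructions."""
    dh = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbor differences
    dw = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbor differences
    return dh + dw
```

The weight placed on this term is exactly the kind of per-network calibration the abstract argues is hard to tune.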
no code implementations • 29 Sep 2021 • Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam H Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein
Data poisoning and backdoor attacks manipulate training data to induce security breaches in a victim model.
1 code implementation • 2 Mar 2021 • Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein
The InstaHide method has recently been proposed as an alternative to DP training that leverages supposed privacy properties of the mixup augmentation, although without rigorous guarantees.
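For context, the mixup augmentation that InstaHide builds on forms a convex combination of two training examples and their labels, with the mixing weight drawn from a Beta distribution. A minimal sketch (function name and signature are illustrative):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Mixup augmentation: return a convex combination of two examples
    and of their (one-hot) labels, with weight lam ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mixed = lam * x1 + (1 - lam) * x2
    y_mixed = lam * y1 + (1 - lam) * y2
    return x_mixed, y_mixed
```

InstaHide's claimed privacy comes from mixing private images with public ones (plus sign flips), which is a stronger transformation than this vanilla mixup, and the abstract notes those privacy properties lack rigorous guarantees.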
1 code implementation • ICLR 2022 • Avi Schwarzschild, Arjun Gupta, Amin Ghiasi, Micah Goldblum, Tom Goldstein
It is widely believed that deep neural networks contain layer specialization, wherein neural networks extract hierarchical features representing edges and patterns in shallow layers and complete objects in deeper layers.
1 code implementation • 18 Nov 2020 • Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
Data poisoning and backdoor attacks manipulate victim models by maliciously modifying training data.
no code implementations • 14 Oct 2020 • Chen Zhu, Zheng Xu, Ali Shafahi, Manli Shu, Amin Ghiasi, Tom Goldstein
Further, we demonstrate that the compact structure and corresponding initialization from the Lottery Ticket Hypothesis can also help in data-free training.
1 code implementation • ICLR 2020 • Amin Ghiasi, Ali Shafahi, Tom Goldstein
To deflect adversarial attacks, a range of "certified" classifiers have been proposed.
no code implementations • 25 Oct 2019 • Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein
Adversarial training is one of the strongest defenses against adversarial attacks, but it requires adversarial examples to be generated for every mini-batch during optimization.
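The per-minibatch cost the abstract refers to comes from crafting an adversarial example before every weight update. A minimal sketch using the Fast Gradient Sign Method on a toy linear least-squares model (the model and helper names are illustrative, not the paper's code):

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps=0.1):
    """Fast Gradient Sign Method: step the input in the direction of
    the sign of the loss gradient with respect to the input."""
    return x + eps * np.sign(grad_x)

def adversarial_training_step(w, x, y, lr=0.1, eps=0.1):
    """One adversarial-training step for f(x) = w @ x with squared loss:
    craft an FGSM example for the batch, then update w on that example."""
    residual = w @ x - y
    grad_x = residual * w                  # gradient of the loss w.r.t. the input
    x_adv = fgsm_perturb(x, grad_x, eps)   # extra gradient computation per batch
    grad_w = (w @ x_adv - y) * x_adv       # standard gradient step on x_adv
    return w - lr * grad_w
```

Each update thus pays for at least one extra gradient computation (more for multi-step attacks such as PGD), which is the overhead this line of work aims to remove.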
1 code implementation • ICLR 2020 • Ali Shafahi, Parsa Saadatpanah, Chen Zhu, Amin Ghiasi, Christoph Studer, David Jacobs, Tom Goldstein
By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks.
6 code implementations • NeurIPS 2019 • Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein
Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks.
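The "free" variant this entry describes amortizes that cost by replaying each minibatch several times and reusing a single backward pass to update both the weights and the adversarial perturbation. A sketch of that idea on the same toy linear model, under the assumption of a scalar-output least-squares loss; names and constants are illustrative:

```python
import numpy as np

def free_adv_training_batch(w, x, y, m=4, lr=0.1, eps=0.1):
    """Sketch of 'free' adversarial training on one minibatch of
    f(x) = w @ x with squared loss: replay the batch m times, and on
    each pass use the same residual to update the weights (descent)
    and the perturbation delta (ascent), instead of running a
    separate attack loop per update."""
    delta = np.zeros_like(x)
    for _ in range(m):
        residual = w @ (x + delta) - y
        grad_w = residual * (x + delta)    # gradient w.r.t. weights
        grad_x = residual * w              # gradient w.r.t. input, same pass
        w = w - lr * grad_w
        delta = np.clip(delta + eps * np.sign(grad_x), -eps, eps)
    return w
```

The perturbation is warm-started across the m replays and clipped to the eps-ball, so robustness comes at roughly the cost of standard training on the replayed batches.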