no code implementations • 23 Feb 2024 • Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar
Achieving resiliency against adversarial attacks is necessary prior to deploying neural network classifiers in domains where misclassification incurs substantial costs, e.g., self-driving cars or medical imaging.
1 code implementation • 26 Nov 2022 • Ethan Rathbun, Kaleel Mahmood, Sohaib Ahmad, Caiwen Ding, Marten van Dijk
First, how can the low transferability of adversarial examples between defenses be exploited in a game-theoretic framework to improve robustness?
no code implementations • 7 Sep 2022 • Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie Wen
First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even in the case of adversarially trained SNNs.
no code implementations • 29 Sep 2021 • Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, Marten van Dijk
In this paper, we seek to help alleviate this problem by systematizing the recent advances in adversarial machine learning black-box attacks since 2019.