no code implementations • 29 Jun 2023 • He Wang, Yunfeng Diao
To this end, we propose a new black-box defense framework.
1 code implementation • 16 May 2023 • Wan Jiang, Yunfeng Diao, He Wang, Jianxin Sun, Meng Wang, Richang Hong
Unfortunately, we find that UEs provide a false sense of security: they cannot prevent unauthorized users from exploiting other, unprotected data to remove the protection, turning unlearnable data learnable again.
4 code implementations • 21 Nov 2022 • Yunfeng Diao, He Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David Hogg, Meng Wang
Via BASAR, we find that on-manifold adversarial samples are extremely deceitful and rather common in skeletal motions, in contrast to the common belief that adversarial samples exist only off-manifold.
2 code implementations • 9 Mar 2022 • He Wang, Yunfeng Diao, Zichang Tan, Guodong Guo
Our method is featured by full Bayesian treatments of the clean data, the adversaries and the classifier, leading to (1) a new Bayesian Energy-based formulation of robust discriminative classifiers, (2) a new adversary sampling scheme based on natural motion manifolds, and (3) a new post-train Bayesian strategy for black-box defense.
1 code implementation • CVPR 2021 • Yunfeng Diao, Tianjia Shao, Yong-Liang Yang, Kun Zhou, He Wang
The robustness of skeleton-based activity recognizers has recently been questioned: they are vulnerable to adversarial attacks when the attacker has full knowledge of the recognizer.
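The full-knowledge (white-box) threat model mentioned above is commonly illustrated with the Fast Gradient Sign Method (FGSM, Goodfellow et al.); this is a minimal sketch of that standard technique, not the attack proposed in the paper. A toy logistic-regression classifier stands in for a skeleton-based recognizer, with the skeleton flattened into a feature vector; all names here are illustrative.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a logistic-regression 'recognizer'.

    x    : input features (e.g. flattened joint coordinates), shape (d,)
    w, b : model weights/bias, known to the attacker (white-box assumption)
    y    : true label in {0, 1}
    eps  : L-infinity perturbation budget
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted prob. of class 1
    grad_x = (p - y) * w                     # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)         # step that increases the loss

# Toy data: a random "recognizer" and one "skeleton" sample.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
# The perturbation stays within the eps budget per coordinate while
# pushing the classifier's loss up on the true label.
```

In practice the gradient would come from automatic differentiation through the full recognizer rather than a closed form, but the attack structure is the same: a signed gradient step bounded by an L-infinity budget.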