no code implementations • 2 Jan 2024 • Xixu Hu, Runkai Zheng, Jindong Wang, Cheuk Hang Leung, Qi Wu, Xing Xie
In this study, we address this gap by introducing SpecFormer, specifically designed to enhance ViTs' resilience against adversarial attacks, backed by carefully derived theoretical guarantees.
no code implementations • 4 Aug 2023 • Juncheng Wang, Jindong Wang, Xixu Hu, Shujun Wang, Xing Xie
Empirical risk minimization (ERM) is a fundamental machine learning paradigm.
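As a quick illustration of the ERM paradigm (a standard textbook setup, not this paper's specific model): choose the parameters that minimize the average loss over the training sample. For a least-squares linear model the empirical risk has a closed-form minimizer.

```python
import numpy as np

# ERM: w_hat = argmin_w (1/n) * sum_i (x_i . w - y_i)^2.
# For squared loss this is ordinary least squares (normal equations).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The achieved empirical risk: average squared loss on the sample.
empirical_risk = np.mean((X @ w_hat - y) ** 2)
print(w_hat, empirical_risk)
```

With low label noise, the ERM solution recovers `w_true` closely and the empirical risk is near the noise floor.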
1 code implementation • ICCV 2023 • Kaijie Zhu, Jindong Wang, Xixu Hu, Xing Xie, Ge Yang
The core idea of RiFT is to exploit the redundant capacity for robustness by fine-tuning the adversarially trained model on its non-robust-critical module.
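The fine-tuning pattern described above can be sketched minimally: freeze every module of the adversarially trained model except the one judged non-robust-critical, and update only that module's parameters. The module names, the toy quadratic loss, and the module-selection step below are all illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

# Toy "model": a dict of per-module parameter arrays.
model = {
    "block1": np.ones((2, 2)),       # robust-critical: kept frozen
    "block2": np.full((2, 2), 2.0),  # non-robust-critical: fine-tuned
}
non_robust_critical = "block2"       # hypothetical selection

def loss_grad(params):
    # Toy quadratic loss 0.5 * ||W||^2 per module; its gradient is W.
    return {name: w.copy() for name, w in params.items()}

lr = 0.1
for _ in range(10):
    grads = loss_grad(model)
    # Gradient step on the non-robust-critical module only;
    # all other modules stay exactly as adversarial training left them.
    model[non_robust_critical] -= lr * grads[non_robust_critical]

print(model["block1"])             # unchanged
print(model[non_robust_critical])  # updated by fine-tuning
```

The point of the pattern is that robustness-bearing modules are untouched, so the adversarially trained behavior they encode is preserved while the spare capacity elsewhere is re-purposed.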
1 code implementation • 31 May 2023 • Shumin Ma, Zhiri Yuan, Qi Wu, Yiyan Huang, Xixu Hu, Cheuk Hang Leung, Dongdong Wang, Zhixiang Huang
This paper proposes a new domain adaptation approach in which one can measure the differences in the internal dependence structure separately from those in the marginals.
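One standard way to separate dependence structure from marginals (shown here only to illustrate the general idea; the paper's measure may be constructed differently) is the rank / probability-integral transform: it maps each variable to approximately uniform marginals while preserving the joint dependence, i.e. the empirical copula.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=1000)
x = np.exp(z)                         # heavy-tailed marginal
y = z + 0.1 * rng.normal(size=1000)   # strongly dependent on x

def ranks(v):
    # Empirical CDF values in (0, 1): rank / (n + 1).
    order = np.argsort(np.argsort(v))
    return (order + 1) / (len(v) + 1)

# After the rank transform the marginals are (near-)uniform, so any
# remaining association between u and v reflects dependence alone.
u, v = ranks(x), ranks(y)
rho = np.corrcoef(u, v)[0, 1]         # rank (Spearman-type) correlation
print(rho)
```

Because `x = exp(z)` is a monotone transform of `z`, the rank correlation stays high even though the Pearson correlation of the raw pair is distorted by the marginal's heavy tail.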
1 code implementation • 27 Feb 2023 • Wang Lu, Xixu Hu, Jindong Wang, Xing Xie
Concretely, we design an attention-based adapter for the large model, CLIP, so that the remaining operations depend only on the adapters.
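The adapter pattern can be sketched as follows: the large pretrained backbone stays frozen and only produces features, while a small trainable attention layer with a residual connection sits on top. The dimensions and the single-head layout below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d = 8
rng = np.random.default_rng(2)
# Stand-in for frozen CLIP features: never updated during training.
features = rng.normal(size=(4, d))

# The only trainable parameters: one self-attention adapter layer.
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

def adapter(h):
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))   # scaled dot-product attention
    return h + attn @ v                    # residual keeps frozen features

out = adapter(features)
print(out.shape)
```

Because only the adapter's weights receive gradients, training cost and memory scale with the adapter, not with the large backbone.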
1 code implementation • 22 Feb 2023 • Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie
In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.