no code implementations • 11 Apr 2023 • Tony Ma, Songze Li, Yisong Xiao, Shunchang Liu
The transferability of adversarial examples is a crucial aspect of evaluating the robustness of deep learning systems, particularly in black-box scenarios.
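Transferability means a perturbation crafted against one (surrogate) model often also fools a different (target) model trained on the same task, which is what makes black-box attacks possible. A minimal toy sketch of this effect, assuming nothing about the paper's actual method: an FGSM-style step is computed only from a surrogate linear classifier's weights, then tested against an independently parameterized target classifier with a similar decision boundary (all weights and inputs here are made-up illustrative values).

```python
import numpy as np

def predict(w, x):
    """Toy linear classifier: class 1 if w.x > 0, else class 0."""
    return 1 if w @ x > 0 else 0

# Two hypothetical linear models for the same task: similar but not
# identical weights, as would arise from independent training runs.
w_surrogate = np.array([1.0, -1.5, 0.8, -0.6])
w_target    = np.array([0.9, -1.4, 1.0, -0.5])

# A clean input correctly classified as class 1 by both models.
x = np.array([0.5, -0.5, 0.5, -0.5])

# FGSM-style step computed ONLY from the surrogate's gradient
# (for a linear model, the gradient direction is just sign(w)).
eps = 0.6
x_adv = x - eps * np.sign(w_surrogate)

print("surrogate fooled:", predict(w_surrogate, x_adv) == 0)
print("target fooled (transfer):", predict(w_target, x_adv) == 0)
```

Because the two weight vectors point in similar directions, the perturbation that crosses the surrogate's decision boundary also crosses the target's, i.e., the adversarial example transfers.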
no code implementations • 8 Apr 2023 • Yisong Xiao, Tianyuan Zhang, Shunchang Liu, Haotong Qin
To address this gap, we thoroughly evaluated the robustness of quantized models against various noises (adversarial attacks, natural corruptions, and systematic noises) on ImageNet.
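The evaluation style described here — comparing a model against its quantized counterpart under injected noise — can be illustrated with a deliberately tiny stand-in (this is not the paper's benchmark or models): a linear classifier is symmetrically quantized to int8, and both versions are scored on inputs corrupted by additive Gaussian noise, a simple analogue of "natural corruption".

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric per-tensor quantization to int8, then dequantize."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale

rng = np.random.default_rng(42)

# Toy data; labels are defined by the float model itself so its clean
# accuracy is 1.0 by construction.
X = rng.normal(size=(1000, 16))
w_float = np.linspace(0.2, 1.7, 16)
y = (X @ w_float > 0).astype(int)
w_quant = quantize_int8(w_float)

def accuracy(w, X, y, noise_std=0.0):
    """Accuracy on inputs corrupted with additive Gaussian noise."""
    Xn = X + noise_std * rng.normal(size=X.shape)
    return float(((Xn @ w > 0).astype(int) == y).mean())

for std in (0.0, 1.0, 3.0):
    print(f"noise std {std}: float {accuracy(w_float, X, y, std):.3f}, "
          f"int8 {accuracy(w_quant, X, y, std):.3f}")
```

The same loop structure extends to the settings the abstract lists — adversarial attacks and systematic noises — by swapping the corruption applied to `Xn`.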
1 code implementation • 16 Sep 2021 • Shunchang Liu, Jiakai Wang, Aishan Liu, Yingwei Li, Yijie Gao, Xianglong Liu, DaCheng Tao
Crowd counting, which has been widely adopted for estimating the number of people in safety-critical scenes, is shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches).
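A digital toy analogue of the threat model (not the paper's physical-world attack) can make the idea concrete: a hypothetical crowd-counting "model" sums a sigmoid density map over pixel intensities, and an adversarial patch overwrites a small image region with the intensity that minimizes the density response, lowering the predicted count while the true scene outside the patch is untouched.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predicted_count(img, w=4.0, b=-2.0):
    """Toy density-map regressor: each bright pixel contributes ~1 'person'."""
    return sigmoid(w * img + b).sum()

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(32, 32))  # made-up stand-in for a crowd image
clean_count = predicted_count(img)

# Adversarial patch: for this monotone model the count-minimizing patch is
# simply the darkest intensity; a real attack would optimize patch pixels
# by gradient descent instead.
patched = img.copy()
patched[:8, :8] = 0.0  # 8x8 patch in the top-left corner

print(f"clean count:   {clean_count:.1f}")
print(f"patched count: {predicted_count(patched):.1f}")
```

The gap between the two printed counts is the attack's effect; physical-world patches add further constraints (printability, viewpoint, lighting) on top of this objective.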
1 code implementation • CVPR 2021 • Jiakai Wang, Aishan Liu, Zixin Yin, Shunchang Liu, Shiyu Tang, Xianglong Liu
Deep learning models are vulnerable to adversarial examples.