no code implementations • 16 May 2024 • Linshan Hou, Ruili Feng, Zhongyun Hua, Wei Luo, Leo Yu Zhang, Yiming Li
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can maliciously trigger model misclassifications by implanting a hidden backdoor during model training.
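The poisoning setup described above can be illustrated with a toy sketch in the style of classic patch-trigger attacks such as BadNets: the adversary stamps a small patch onto a fraction of training images and relabels them to a target class, so that the same patch flips predictions at test time. The image shapes, the 3x3 white-corner trigger, and the 10% poisoning rate are illustrative assumptions, not details from this paper.

```python
import numpy as np

def stamp_trigger(img, value=1.0, size=3):
    """Overwrite the bottom-right size x size corner with a patch trigger.

    The white-corner patch is an illustrative choice of trigger.
    """
    img = img.copy()
    img[-size:, -size:] = value
    return img

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Stamp the trigger onto a fraction `rate` of samples and relabel them."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

# Toy dataset: 100 blank 28x28 "images" with cycling labels 0..9.
images = np.zeros((100, 28, 28))
labels = np.arange(100) % 10
p_images, p_labels, idx = poison_dataset(images, labels, target_label=7)
```

Training on `(p_images, p_labels)` would implant the backdoor: the model learns to associate the corner patch with class 7 while behaving normally on clean inputs.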
no code implementations • 2 Apr 2024 • YuHang Zhou, Zhongyun Hua
In this paper, we discuss for the first time the concept of continual adversarial defense under a sequence of attacks, and propose a lifelong defense baseline called Anisotropic & Isotropic Replay (AIR), which offers three advantages: (1) Isotropic replay ensures model consistency in the neighborhood distribution of new data, indirectly aligning the output preference between old and new tasks.
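The isotropic-replay idea sketched in advantage (1) can be illustrated as a consistency penalty over an isotropic (Gaussian) neighborhood of each new sample: the smaller the penalty, the more stable the model's outputs are around that sample. The linear stand-in "model" and the mean-squared consistency term below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 32)) * 0.1  # stand-in linear model: logits = W @ x

def isotropic_consistency(x, sigma=0.05, n_neighbors=8):
    """Mean squared deviation of logits over a Gaussian neighborhood of x.

    Perturbations are drawn isotropically (same sigma in every direction),
    mirroring the "neighborhood distribution" consistency described above.
    """
    base = W @ x
    devs = []
    for _ in range(n_neighbors):
        x_pert = x + sigma * rng.standard_normal(x.shape)
        devs.append(np.mean((W @ x_pert - base) ** 2))
    return float(np.mean(devs))

x = rng.standard_normal(32)
loss = isotropic_consistency(x)
```

In a continual-defense loop, such a term would be added to the training loss on new-task data, discouraging output drift in the neighborhood of each sample.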
no code implementations • 2 Jul 2023 • Tao Wang, Yushu Zhang, Zixuan Yang, Hua Zhang, Zhongyun Hua
Massive numbers of captured face images are stored in databases for identifying individuals.
no code implementations • 5 Sep 2022 • Kuiyuan Zhang, Zhongyun Hua, Yuanman Li, Yushu Zhang, Yicong Zhou
We develop a projection-based transformer block by integrating the prior projection knowledge of compressed sensing (CS) into the original transformer blocks, and then build a symmetrical reconstruction model from the projection-based transformer blocks and residual convolutional blocks.
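The "prior projection knowledge of CS" embedded in such a block can be sketched as the standard measurement-consistency step: given measurements y = Φx, each refinement moves the current estimate toward agreement with y via a gradient step on ||y − Φx||². The random sampling matrix, signal sizes, and step size below are illustrative assumptions; the paper's actual blocks combine this projection with learned transformer layers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16                                   # signal length, measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random CS sampling matrix
x_true = rng.standard_normal(n)
y = Phi @ x_true                                # compressed measurements

def projection_step(x_k, step=0.1):
    """One projection update: pull x_k toward consistency with y.

    This is a gradient step on ||y - Phi x||^2; in the paper's design an
    analogous operation is interleaved with learned transformer blocks.
    """
    return x_k + step * Phi.T @ (y - Phi @ x_k)

x = np.zeros(n)
residuals = []
for _ in range(50):
    x = projection_step(x)
    residuals.append(float(np.linalg.norm(y - Phi @ x)))
```

For a sufficiently small step size, the measurement residual shrinks at every iteration, which is the prior each projection-based block exploits before the learned layers refine the estimate further.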