no code implementations • 30 Mar 2024 • Renyang Liu, Kwok-Yan Lam, Wei Zhou, Sixing Wu, Jun Zhao, Dongting Hu, Mingming Gong
Many attack techniques have been proposed to probe the vulnerabilities of DNNs and, in turn, to help improve their robustness.
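For illustration, the classic Fast Gradient Sign Method (FGSM) is one of the simplest such techniques: it perturbs an input along the sign of the loss gradient. A minimal sketch, assuming a PyTorch classifier `model` with inputs in [0, 1] (not this paper's specific method):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid image.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```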
no code implementations • 12 Dec 2023 • Renyang Liu, Wei Zhou, Sixing Wu, Jun Zhao, Kwok-Yan Lam
Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks, which poses a serious security risk to the broader deployment of DNNs, especially for AI models used in real-world settings.
no code implementations • 12 Dec 2023 • Renyang Liu, Wei Zhou, Xin Jin, Song Gao, Yuanyu Wang, Ruxin Wang
In generating adversarial examples, conventional black-box attack methods rely on extensive feedback from the target model, querying it repeatedly until the attack succeeds, which typically costs thousands of queries per attack.
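A minimal sketch of the query-feedback loop such score-based black-box attacks build on, using random search; `query_model` is a hypothetical stand-in for the target model's scoring API, not this paper's method:

```python
import numpy as np

def random_search_attack(query_model, x, y, eps=0.05, max_queries=10_000, seed=0):
    """Score-based black-box attack: keep random perturbations that lower
    the target model's confidence in the true label y."""
    rng = np.random.default_rng(seed)
    x_adv, best = x.copy(), query_model(x)[y]
    for _ in range(max_queries):
        candidate = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        scores = query_model(candidate)        # one query per candidate
        if scores[y] < best:                   # keep only improving steps
            x_adv, best = candidate, scores[y]
            if scores.argmax() != y:           # misclassified: attack succeeded
                break
    return x_adv
```

Note how the query count grows with every candidate tried, which is exactly the cost that feedback-light attacks aim to avoid.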
no code implementations • 25 Nov 2023 • Bingbing Song, Derui Wang, Tianwei Zhang, Renyang Liu, Yu Lin, Wei Zhou
Hence, it provides a way to directly generate stego images from secret images without a cover image.
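A hedged sketch of that idea: rather than embedding a secret into a cover image, a generator network maps the secret directly to a natural-looking stego image, from which a paired decoder would recover it. The architecture below is illustrative only, not the paper's design:

```python
import torch
import torch.nn as nn

class StegoGenerator(nn.Module):
    """Illustrative cover-free generator; the paper's architecture may differ."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, secret):
        # Map the secret image directly to a stego image -- no cover needed.
        return self.net(secret)

secret = torch.rand(1, 3, 64, 64)
stego = StegoGenerator()(secret)  # transmit stego; a decoder recovers the secret
```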
no code implementations • 15 Oct 2023 • Renyang Liu, Jun Zhao, Xing Chu, Yu Liang, Wei Zhou, Jing He
With the rapid development of GPU (Graphics Processing Unit) technology and neural networks, we can explore data structures and algorithms better suited to these platforms.
no code implementations • 15 Oct 2023 • Renyang Liu, Wei Zhou, Jinhong Zhang, Xiaoyuan Liu, Peiyuan Si, Haoran Li
Inspired by this, we propose novel model inversion attacks against homogeneous GNNs (HomoGNNs) and heterogeneous GNNs (HeteGNNs), namely HomoGMI and HeteGMI.
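To convey the general shape of a model inversion attack on a GNN (not the specific HomoGMI/HeteGMI procedures), one common formulation optimizes a relaxed adjacency matrix so the target model's outputs match observed predictions. The sketch below assumes white-box gradient access and a hypothetical `target_gnn(features, adj)` interface:

```python
import torch
import torch.nn.functional as F

def graph_inversion(target_gnn, node_feats, observed_out, steps=200, lr=0.1):
    """Hypothetical gradient-based inversion: recover a relaxed adjacency
    matrix whose GNN outputs match the observed predictions."""
    n = node_feats.size(0)
    adj_logits = torch.zeros(n, n, requires_grad=True)
    opt = torch.optim.Adam([adj_logits], lr=lr)
    for _ in range(steps):
        adj = torch.sigmoid(adj_logits)          # relaxed edge weights in [0, 1]
        loss = F.mse_loss(target_gnn(node_feats, adj), observed_out)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (torch.sigmoid(adj_logits) > 0.5).float()  # threshold to hard edges
```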
no code implementations • 15 Oct 2023 • Renyang Liu, Jinhong Zhang, Haoran Li, Jin Zhang, Yuanyu Wang, Wei Zhou
Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks.
no code implementations • 15 Oct 2023 • Renyang Liu, Jinhong Zhang, Kwok-Yan Lam, Jun Zhao, Wei Zhou
However, the distribution of these fake data lacks diversity and cannot probe the decision boundary of the target model well, resulting in unsatisfactory simulation of its behavior.
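For context, data-free substitute training typically alternates between a generator that crafts fake queries and a substitute that imitates the target's answers on them; the decision boundary is probed only as well as the generator's samples allow. A minimal sketch under that assumption, with `target_query` as a hypothetical black-box labeling API:

```python
import torch
import torch.nn.functional as F

def substitute_training_step(generator, substitute, target_query,
                             opt_g, opt_s, z_dim=100, batch=64):
    """One illustrative step of data-free substitute training."""
    z = torch.randn(batch, z_dim)
    fake = generator(z)
    with torch.no_grad():
        target_labels = target_query(fake).argmax(dim=1)  # black-box labels
    # Generator step: craft samples the substitute still gets wrong,
    # pushing queries toward the target's decision boundary.
    opt_g.zero_grad()
    opt_s.zero_grad()
    loss_g = -F.cross_entropy(substitute(fake), target_labels)
    loss_g.backward()
    opt_g.step()
    # Substitute step: imitate the target on the (detached) fake data.
    opt_s.zero_grad()
    loss_s = F.cross_entropy(substitute(fake.detach()), target_labels)
    loss_s.backward()
    opt_s.step()
    return loss_s.item(), loss_g.item()
```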
no code implementations • 11 Oct 2023 • Renyang Liu, Wei Zhou, Tianwei Zhang, Kangjie Chen, Jun Zhao, Kwok-Yan Lam
Existing black-box attacks have demonstrated promising potential in creating adversarial examples (AEs) to deceive deep learning models.
no code implementations • 1 Jan 2021 • Bingbing Song, Wei He, Renyang Liu, Shui Yu, Ruxin Wang, Mingming Gong, Tongliang Liu, Wei Zhou
Several state-of-the-art methods start by improving the inter-class separability of training samples through modified loss functions; we argue that these designs ignore adversarial samples and therefore yield only limited robustness to adversarial attacks.
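As a concrete instance of the loss-modification family this snippet refers to (a generic margin-based variant, not this paper's proposal), one can subtract a margin from the true-class logit before applying cross-entropy, which enlarges inter-class separation:

```python
import torch
import torch.nn.functional as F

def margin_cross_entropy(logits, targets, margin=1.0):
    """Cross-entropy with the true-class logit reduced by a margin, a common
    way to enlarge the inter-class separability of learned features."""
    adjusted = logits.clone()
    adjusted[torch.arange(len(targets)), targets] -= margin
    return F.cross_entropy(adjusted, targets)
```

Such losses reshape the clean-data geometry only; as the snippet argues, they do not directly account for adversarial samples.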