1 code implementation • 26 Mar 2024 • Junhao Zheng, Chenhao Lin, Jiahao Sun, Zhengyu Zhao, Qian Li, Chao Shen
Deep learning-based monocular depth estimation (MDE), extensively applied in autonomous driving, is known to be vulnerable to adversarial attacks.
no code implementations • 27 Feb 2024 • Bo Yang, Hengwei Zhang, Jindong Wang, Yulong Yang, Chenhao Lin, Chao Shen, Zhengyu Zhao
Transferable adversarial examples pose practical security risks since they can mislead a target model without any knowledge of its internals.
no code implementations • 12 Dec 2023 • Qiwei Tian, Chenhao Lin, Zhengyu Zhao, Qian Li, Chao Shen
Furthermore, CA prevents the consequent model collapse based on a novel metric, collapseness, which is incorporated into the perturbation optimization.
1 code implementation • 18 Oct 2023 • Zhengyu Zhao, Hanwei Zhang, Renjue Li, Ronan Sicre, Laurent Amsaleg, Michael Backes, Qi Li, Chao Shen
Transferable adversarial examples raise critical security concerns in real-world, black-box attack scenarios.
no code implementations • 11 Oct 2023 • Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang
Specifically, the VPPTaaS provider optimizes a visual prompt given downstream data, and downstream users can use this prompt together with the large pre-trained model for prediction.
1 code implementation • 11 Oct 2023 • Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang
Such a Composite Backdoor Attack (CBA) is shown to be stealthier than implanting the same multiple trigger keys in only a single component.
no code implementations • 3 Sep 2023 • Weijie Wang, Zhengyu Zhao, Nicu Sebe, Bruno Lepri
Although effective deepfake detectors have been proposed, they are substantially vulnerable to adversarial attacks.
no code implementations • 13 Jun 2023 • Yihan Ma, Zhengyu Zhao, Xinlei He, Zheng Li, Michael Backes, Yang Zhang
In particular, to help the watermark survive the subject-driven synthesis, we incorporate the synthesis process in learning GenWatermark by fine-tuning the detector with synthesized images for a specific subject.
1 code implementation • 31 Jan 2023 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
Perturbative availability poisons (PAPs) add small changes to images to prevent their use for model training.
1 code implementation • 17 Nov 2022 • Zhengyu Zhao, Hanwei Zhang, Renjue Li, Ronan Sicre, Laurent Amsaleg, Michael Backes
In this work, we design good practices to address these limitations, and we present the first comprehensive evaluation of transfer attacks, covering 23 representative attacks against 9 defenses on ImageNet.
1 code implementation • 2 Nov 2022 • Dirren van Vlijmen, Alex Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson
We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator.
1 code implementation • 31 Aug 2022 • Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang
Machine learning models are vulnerable to membership inference attacks in which an adversary aims to predict whether or not a particular sample was contained in the target model's training dataset.
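A membership inference attack of this kind can be illustrated with a minimal loss-thresholding sketch: samples seen during training tend to incur lower loss, so an adversary predicts "member" when the target model's loss falls below a threshold. The losses and threshold below are hypothetical, and this is only the simplest baseline attack, not the method of the paper above.

```python
def infer_membership(losses, threshold):
    """Predict membership: True when the target model's loss on a
    sample is below the threshold (members tend to be fit better)."""
    return [loss < threshold for loss in losses]

# Hypothetical per-sample losses from a target model.
member_losses = [0.05, 0.10, 0.02]      # samples used in training
nonmember_losses = [1.20, 0.90, 2.30]   # unseen samples

preds = infer_membership(member_losses + nonmember_losses, threshold=0.5)
# -> [True, True, True, False, False, False]
```

In practice the threshold is calibrated on shadow models or reference data rather than chosen by hand.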
1 code implementation • 3 Jun 2022 • Zhengyu Zhao, Nga Dang, Martha Larson
In this paper, we propose that adversarial images should be evaluated based on semantic mismatch, rather than label mismatch, as used in current work.
no code implementations • 30 May 2022 • Hamid Bostani, Zhengyu Zhao, Zhuoran Liu, Veelasha Moonsamy
Realistic attacks in the Android malware domain create Realizable Adversarial Examples (RealAEs), i.e., AEs that satisfy the domain constraints of Android malware.
1 code implementation • 25 Nov 2021 • Zhuoran Liu, Zhengyu Zhao, Alex Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson
Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training.
4 code implementations • NeurIPS 2021 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, we identify, for the first time, that a simple logit loss can yield results competitive with the state of the art.
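The logit loss referred to here drives the raw (pre-softmax) logit of the target class upward instead of optimizing the softmax cross-entropy, which saturates and yields vanishing gradients once the target probability is high. A minimal sketch of the loss itself (not the full iterative attack), assuming a plain logits vector:

```python
import numpy as np

def logit_loss(logits, target):
    # Targeted logit loss: the negative raw logit of the target class.
    # Minimizing this keeps pushing the target logit up, without the
    # gradient saturation of softmax cross-entropy near high confidence.
    return -logits[target]

logits = np.array([2.0, -1.0, 0.5])
loss = logit_loss(logits, target=1)  # -(-1.0) = 1.0
```

In a full attack, this loss would be minimized over an iterative perturbation of the input (e.g., with sign-gradient steps), with gradients taken through a surrogate model.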
1 code implementation • 12 Nov 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
In particular, our color filter space is explicitly specified so that we are able to provide a systematic analysis of model robustness against adversarial color transformations, from both the attack and defense perspectives.
1 code implementation • EMNLP 2020 • Haoyu Song, Yan Wang, Wei-Nan Zhang, Zhengyu Zhao, Ting Liu, Xiaojiang Liu
Maintaining a consistent attribute profile is crucial for dialogue agents to naturally converse with humans.
1 code implementation • 3 Feb 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
We introduce an approach that enhances images using a color filter in order to create adversarial effects, which fool neural networks into misclassification.
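A color-filter manipulation of this general kind can be sketched as a per-channel adjustment of an RGB image; the multiplicative gains below are a hypothetical stand-in, since an actual attack would optimize the filter parameters against a classifier rather than fix them by hand.

```python
import numpy as np

def apply_color_filter(image, gains):
    """Apply per-channel multiplicative gains to an RGB image in [0, 1]
    and clip to the valid range. An adversarial variant would search
    over the gain parameters to induce misclassification."""
    return np.clip(image * gains, 0.0, 1.0)

img = np.full((2, 2, 3), 0.5)                       # tiny gray test image
warmed = apply_color_filter(img, np.array([1.4, 1.0, 0.6]))
# red channel boosted to 0.7, green unchanged, blue reduced to 0.3
```

Because such filters change colors globally and smoothly, the result can look like an ordinary photo enhancement rather than noise-like perturbation.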
2 code implementations • CVPR 2020 • Zhengyu Zhao, Zhuoran Liu, Martha Larson
The success of image perturbations designed to fool an image classifier is assessed in terms of both adversarial effect and visual imperceptibility.
1 code implementation • 29 Jan 2019 • Zhuoran Liu, Zhengyu Zhao, Martha Larson
An adversarial query is an image that has been modified to disrupt content-based image retrieval (CBIR) while appearing nearly untouched to the human eye.
1 code implementation • 23 Jul 2018 • Zhengyu Zhao, Martha Larson
As deep learning approaches to scene recognition have emerged, they have continued to leverage discriminative regions at multiple scales, building on practices established by conventional image classification research.