no code implementations • 6 Feb 2024 • Yihan Wang, Yifan Zhu, Xiao-Shan Gao
Availability attacks can prevent the unauthorized use of private data and commercial datasets by adding imperceptible noise that turns the data into unlearnable examples before release.
1 code implementation • 31 Jan 2024 • Shuang Liu, Yihan Wang, Xiao-Shan Gao
Unlearnable example attacks are data poisoning attacks that aim to degrade the clean test accuracy of deep learning models by adding imperceptible perturbations to the training samples; the attack can be formulated as a bi-level optimization problem.
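The bi-level structure can be written schematically as follows. This is a generic sketch with assumed notation (the paper's exact formulation may differ): $\delta_i$ are the per-sample poisoning perturbations, $f_\theta$ the network, and $\mathcal{L}$ the training loss.

```latex
\max_{\|\delta_i\|\le\epsilon}\;
\mathbb{E}_{(x,y)\sim\mathcal{D}}
  \big[\mathcal{L}\big(f_{\theta^*(\delta)}(x),\,y\big)\big]
\quad\text{s.t.}\quad
\theta^*(\delta) \;=\; \arg\min_{\theta}\;
  \sum_{i=1}^{n}\mathcal{L}\big(f_{\theta}(x_i+\delta_i),\,y_i\big)
```

The inner problem simulates the victim training on the poisoned data, while the outer problem chooses perturbations that degrade clean test accuracy.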
no code implementations • 6 Jan 2024 • Yihan Wang, Shuang Liu, Xiao-Shan Gao
Stability analysis is an essential aspect of studying the generalization ability of deep learning, as it involves deriving generalization bounds for stochastic gradient descent-based training algorithms.
1 code implementation • 14 Dec 2023 • Yifan Zhu, Lijia Yu, Xiao-Shan Gao
The detectability of unlearnable examples by simple networks motivates us to design a novel defense method.
no code implementations • 29 Jun 2023 • Yihan Wang, Lijia Yu, Xiao-Shan Gao
Invariance to spatial transformations such as translations and rotations is a desirable property and a basic design principle for classification neural networks.
no code implementations • 27 Oct 2022 • Yibo Miao, Yinpeng Dong, Jun Zhu, Xiao-Shan Gao
For naturalness, we constrain the adversarial example to be $\epsilon$-isometric to the original one by adopting the Gaussian curvature as a surrogate metric guaranteed by a theoretical analysis.
no code implementations • 17 Jul 2022 • Xiao-Shan Gao, Shuang Liu, Lijia Yu
Game theory has been used to answer some of the basic questions about adversarial deep learning such as the existence of a classifier with optimal robustness and the existence of optimal adversarial samples for a given class of classifiers.
no code implementations • 20 Mar 2022 • Lijia Yu, Yihan Wang, Xiao-Shan Gao
In this paper, a new parameter perturbation attack on DNNs, called the adversarial parameter attack, is proposed: small perturbations are made to the parameters of the DNN such that the accuracy of the attacked DNN does not decrease much, but its robustness becomes much lower.
no code implementations • 8 Nov 2021 • Lijia Yu, Xiao-Shan Gao
The work is motivated by the fact that the bias part is a piecewise constant function with zero gradient, and hence adversarial examples for it cannot be generated directly by gradient-based methods such as FGSM.
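FGSM, mentioned above, is the canonical gradient-based attack: it perturbs the input by a small step in the direction of the sign of the input gradient of the loss. A minimal sketch on a logistic-regression model, where the input gradient has a closed form (the model, weights, and step size below are illustrative assumptions, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # y in {-1, +1}; loss = log(1 + exp(-y * w.x))
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm(w, x, y, eps):
    # Closed-form input gradient of the logistic loss:
    # d/dx log(1 + exp(-y * w.x)) = -y * sigmoid(-y * w.x) * w
    grad = -y * sigmoid(-y * np.dot(w, x)) * w
    # FGSM step: move each coordinate by eps in the sign of the gradient
    return x + eps * np.sign(grad)

# Toy example (assumed values for illustration)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.2])
y = 1.0
x_adv = fgsm(w, x, y, eps=0.1)
```

For a linear model this step provably increases the loss while keeping the perturbation within an $L_\infty$ ball of radius `eps`, which is exactly why a zero-gradient (piecewise constant) component defeats it.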
1 code implementation • 30 Jun 2021 • Lijia Yu, Xiao-Shan Gao
In this paper, a robust classification-autoencoder (CAE) is proposed, which has a strong ability to recognize outliers and defend against adversaries.
no code implementations • 11 Mar 2021 • Laigang Guo, Chun-Ming Yuan, Xiao-Shan Gao
Recently, Savaré-Toscani proved that the Rényi entropy power of general probability densities solving the $p$-nonlinear heat equation in $\mathbb{R}^n$ is always a concave function of time, which extends Costa's concavity inequality for Shannon's entropy power to Rényi entropies.
Information Theory
no code implementations • 3 Feb 2021 • Chen Zhao, Xiao-Shan Gao
In this paper, we propose a general scheme to analyze the gradient vanishing phenomenon, also known as the barren plateau phenomenon, in training quantum neural networks with the ZX-calculus.
no code implementations • 10 Oct 2020 • Lijia Yu, Xiao-Shan Gao
A lower bound for the robustness measure is given in terms of the $L_{2,\infty}$ norm.
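Under one common convention (an assumption here; the paper may use a transposed or otherwise different convention), the $L_{2,\infty}$ norm of a weight matrix is the maximum Euclidean norm over its rows:

```python
import numpy as np

def l2_inf_norm(W):
    # L_{2,inf} norm: the largest row-wise Euclidean (L2) norm.
    # Convention assumed for illustration; some authors use columns.
    return np.linalg.norm(W, axis=1).max()

W = np.array([[3.0, 4.0],
              [0.0, 1.0]])
val = l2_inf_norm(W)  # rows have norms 5 and 1, so the value is 5
```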
1 code implementation • 29 Dec 2019 • Chen Zhao, Xiao-Shan Gao
In this paper, we introduce QDNN, a quantum extension of the classical DNN.
1 code implementation • 18 Dec 2017 • Yu-Ao Chen, Xiao-Shan Gao
Deciding whether a Boolean equation system has a solution is an NP-complete problem, and finding a solution is NP-hard.
Quantum Physics • Computational Complexity • Cryptography and Security • Symbolic Computation
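Exhaustive search makes the exponential worst-case cost of Boolean system solving concrete. This is a generic brute-force sketch (the equation encoding is an illustrative assumption, not the paper's algorithm):

```python
from itertools import product

def solve_boolean_system(equations, n):
    """Find a 0/1 assignment making every equation evaluate to 0.

    Brute-force search over all 2**n assignments, so the running time
    is exponential in n, consistent with the problem being NP-hard.
    """
    for assignment in product((0, 1), repeat=n):
        if all(eq(assignment) == 0 for eq in equations):
            return assignment
    return None  # the system is unsatisfiable

# Illustrative system over GF(2): x0 XOR x1 = 1 and x0 AND x1 = 0.
system = [
    lambda v: (v[0] ^ v[1]) ^ 1,  # forces x0 XOR x1 == 1
    lambda v: v[0] & v[1],        # forces x0 AND x1 == 0
]
solution = solve_boolean_system(system, 2)  # (0, 1) in search order
```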