1 code implementation • 9 May 2024 • Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang
This paper fills the gap by conducting a systematic privacy analysis of inductive GNNs through the lens of link stealing attacks, one of the most popular attacks specifically designed for GNNs.
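To make the threat concrete, below is a minimal sketch of an unsupervised link stealing attack in the spirit described above: the adversary queries the target GNN for two nodes' posteriors and guesses that an edge exists when the posteriors are very similar. The function names, similarity measure, and threshold are illustrative assumptions, not the paper's exact method.

```python
# Sketch of a similarity-based link stealing attack (illustrative only).
import numpy as np

def cosine_similarity(p, q):
    # Cosine similarity between two posterior (probability) vectors.
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def predict_link(posterior_u, posterior_v, threshold=0.9):
    """Guess whether an edge (u, v) exists from the two nodes' posteriors."""
    return cosine_similarity(posterior_u, posterior_v) >= threshold

# Example: two nodes with near-identical class posteriors are predicted as linked.
p_u = np.array([0.70, 0.20, 0.10])
p_v = np.array([0.68, 0.22, 0.10])
print(predict_link(p_u, p_v))  # True under the illustrative threshold
```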
no code implementations • 6 May 2024 • Yiting Qu, Xinyue Shen, Yixin Wu, Michael Backes, Savvas Zannettou, Yang Zhang
First, we curate a large dataset of 10K real-world and AI-generated images that are annotated as safe or unsafe based on a set of 11 unsafe image categories (sexual, violent, hateful, etc.).
no code implementations • 18 May 2023 • Peihua Ma, Yixin Wu, Ning Yu, Yang Zhang, Michael Backes, Qin Wang, Cheng-I Wei
Nutrition information is crucial in precision nutrition and the food industry.
no code implementations • 3 Oct 2022 • Yixin Wu, Ning Yu, Zheng Li, Michael Backes, Yang Zhang
The empirical results show that all of the proposed attacks achieve strong performance, in some cases approaching an accuracy of 1, indicating a risk far more severe than that revealed by existing membership inference attacks.
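As a rough illustration of the membership inference setting referenced above, the sketch below implements the classic confidence-thresholding heuristic: samples on which a model is unusually confident are guessed to be training members. The threshold and input values are placeholder assumptions and do not reflect the paper's specific attack pipeline.

```python
# Confidence-thresholding membership inference (illustrative sketch).
import numpy as np

def membership_inference(max_confidences, threshold=0.95):
    """Return a boolean membership guess per sample from its max class confidence."""
    return np.asarray(max_confidences) >= threshold

# Example: the first and third samples are guessed to be training members.
print(membership_inference([0.99, 0.60, 0.97]))  # [ True False  True]
```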
no code implementations • 20 Sep 2021 • Yixin Wu, Rui Luo, Chen Zhang, Jun Wang, Yaodong Yang
In this paper, we characterize the noise of stochastic gradients and analyze the noise-induced dynamics that arise when training deep neural networks with gradient-based optimizers.
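For intuition, the following sketch measures stochastic gradient noise on a toy least-squares problem: the noise of a mini-batch gradient is its deviation from the full-batch gradient, and its empirical covariance summarizes the noise structure. The data, batch size, and loss are assumptions for illustration, not the paper's experimental setup.

```python
# Measuring mini-batch gradient noise on a toy least-squares problem (sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=1024)
w = np.zeros(10)

def grad(Xb, yb, w):
    # Gradient of 0.5 * mean((Xb @ w - yb)**2) with respect to w.
    return Xb.T @ (Xb @ w - yb) / len(yb)

full_grad = grad(X, y, w)
batch_size = 32
noises = []
for _ in range(200):
    idx = rng.choice(len(y), size=batch_size, replace=False)
    noises.append(grad(X[idx], y[idx], w) - full_grad)

noise = np.stack(noises)
print("mean noise norm:", np.linalg.norm(noise.mean(axis=0)))
print("noise covariance trace:", np.trace(np.cov(noise.T)))
```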
no code implementations • 10 Feb 2021 • Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang
To fully utilize the information contained in graph data, a new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.
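For readers unfamiliar with GNNs, the sketch below shows one message-passing (graph convolution) step, the basic building block of this model family: each node averages the features of its neighbors (plus itself) and applies a learned linear map followed by a nonlinearity. The shapes, example graph, and weights are illustrative assumptions.

```python
# One mean-aggregation GNN layer: H' = ReLU(D^-1 (A + I) H W)  (illustrative sketch).
import numpy as np

def gcn_layer(adj, features, weight):
    """Apply one graph-convolution step to node features."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                 # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)  # per-node degree for mean aggregation
    aggregated = (a_hat / deg) @ features   # average over each node's neighborhood
    return np.maximum(aggregated @ weight, 0.0)

# Example: a 3-node path graph, 4-dim input features, 2-dim output features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = gcn_layer(adj, rng.normal(size=(3, 4)), rng.normal(size=(4, 2)))
print(h.shape)  # (3, 2)
```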