1 code implementation • 22 Apr 2024 • Yao Wan, Guanghua Wan, Shijie Zhang, Hongyu Zhang, Yulei Sui, Pan Zhou, Hai Jin, Lichao Sun
Subsequently, the membership classifier can be effectively employed to deduce the membership status of a given code sample based on the output of a target code completion model.
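The mechanism described above (a binary classifier over features of the target model's outputs) can be sketched in a minimal, self-contained toy. Everything here is hypothetical: the confidence scores are simulated stand-ins for whatever output statistics the paper actually extracts from the code completion model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in features: training members tend to receive higher
# output confidence from the target model than non-members.
member_conf = rng.normal(0.8, 0.1, size=200)
nonmember_conf = rng.normal(0.5, 0.1, size=200)
X = np.concatenate([member_conf, nonmember_conf])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Tiny logistic-regression membership classifier, trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X * w + b)))  # predicted membership probability
    w -= 0.5 * np.mean((p - y) * X)          # gradient step on weight
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X * w + b)))) > 0.5
accuracy = float(np.mean(pred == y))
```

With well-separated confidence distributions the classifier recovers membership well above chance, which is the core signal such attacks exploit.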
no code implementations • 20 Mar 2024 • Shijie Zhang, Boyan Jiang, Keke He, Junwei Zhu, Ying Tai, Chengjie Wang, Yinda Zhang, Yanwei Fu
Pixel2Mesh (P2M) is a classical approach for reconstructing 3D shapes from a single color image through coarse-to-fine mesh deformation.
no code implementations • 19 Mar 2024 • Yufei Liu, Junwei Zhu, Junshu Tang, Shijie Zhang, Jiangning Zhang, Weijian Cao, Chengjie Wang, Yunsheng Wu, Dongjin Huang
Texturing 3D humans with semantic UV maps remains a challenge due to the difficulty of acquiring reasonably unfolded UV.
no code implementations • 23 Nov 2023 • Ju Kang, Shijie Zhang, Yiyuan Niu, Xin Wang
What determines biodiversity in nature is a prominent issue in ecology, especially in biotic resource systems that are typically devoid of cross-feeding.
no code implementations • 24 Aug 2023 • Shijie Zhang, Xin Yan, Xuejiao Yang, Binfeng Jia, Shuangyang Wang
In ExpLTV, we first design a novel deep neural network-based game whale detector that can not only infer the intrinsic ordering of players by monetary value, but also precisely identify high spenders (i.e., game whales) and low spenders.
1 code implementation • 30 Nov 2022 • Chengming Xu, Chen Liu, Siqian Yang, Yabiao Wang, Shijie Zhang, Lijie Jia, Yanwei Fu
Since only a subset of the most confident positive samples is labeled and there is insufficient evidence to categorize the remaining samples, many of the unlabeled samples may in fact also be positive.
no code implementations • 20 Oct 2022 • Wei Yuan, Hongzhi Yin, Fangzhao Wu, Shijie Zhang, Tieke He, Hao Wang
It removes a user's contribution by rolling back and calibrating the historical parameter updates and then uses these updates to speed up federated recommender reconstruction.
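The rollback-and-rebuild idea above can be illustrated with a toy federated-averaging trace. This is a minimal sketch under assumed conditions (per-user updates are logged every round; "calibration" is simplified away to a plain re-aggregation over the remaining users), not the paper's actual reconstruction procedure.

```python
import numpy as np

# Hypothetical toy setup: 3 FedAvg rounds, 4 users, a 5-dim model,
# with each user's update logged per round.
n_rounds, n_users, dim = 3, 4, 5
rng = np.random.default_rng(1)
updates = rng.normal(size=(n_rounds, n_users, dim))

# Normal training: accumulate the round-wise average of all users' updates.
w = np.zeros(dim)
for r in range(n_rounds):
    w += updates[r].mean(axis=0)

# Unlearn user 0: roll back its logged contributions and rebuild the model
# from the remaining users' historical updates.
w_unlearned = np.zeros(dim)
for r in range(n_rounds):
    w_unlearned += updates[r, 1:].mean(axis=0)

# Sanity check: retraining from scratch without user 0 yields the same model,
# but reusing the logged updates avoids redoing any local computation.
w_retrain = sum(updates[r, 1:].mean(axis=0) for r in range(n_rounds))
```

The speed-up in the paper comes precisely from reusing logged updates instead of recomputing them; the toy keeps the updates fixed across the two runs to make that equivalence visible.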
no code implementations • 24 May 2022 • Shijie Zhang, Wei Yuan, Hongzhi Yin
In this paper, we first design a novel attribute inference attacker to perform a comprehensive privacy analysis of the state-of-the-art federated recommender models.
no code implementations • 24 Mar 2022 • Shijie Zhang, Lanjun Wang, Lian Ding, An-An Liu, Senhua Zhu, Dandan Tu
However, it is difficult for scientists and practitioners to identify implicit biases in datasets, leading to a lack of reliable, unbiased test datasets for validating models.
no code implementations • 9 Dec 2021 • Ju Kang, Shijie Zhang, Yiyuan Niu, Fan Zhong, Xin Wang
Explaining biodiversity is a fundamental issue in ecology.
no code implementations • 21 Oct 2021 • Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Quoc Viet Hung Nguyen, Lizhen Cui
Evaluations on two real-world datasets show that 1) our attack model significantly boosts the exposure rate of the target item in a stealthy way, without harming the accuracy of the poisoned recommender; and 2) existing defenses are not effective enough, highlighting the need for new defenses against our local model poisoning attacks to federated recommender systems.
no code implementations • 29 Jan 2021 • Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Lizhen Cui, Xiangliang Zhang
Specifically, in GERAI, we bind the information perturbation mechanism in differential privacy with the recommendation capability of graph convolutional networks.
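As a rough illustration of binding an information perturbation mechanism to learned representations, the classic Laplace mechanism can be applied to an embedding vector. This is a generic differential-privacy sketch, not GERAI's actual mechanism; the embedding and the sensitivity value are assumed placeholders.

```python
import numpy as np

def laplace_perturb(x, sensitivity, epsilon, rng):
    """Add Laplace noise with scale sensitivity/epsilon, the standard
    epsilon-differential-privacy output perturbation."""
    scale = sensitivity / epsilon
    return x + rng.laplace(0.0, scale, size=x.shape)

rng = np.random.default_rng(42)
user_embedding = rng.normal(size=8)  # stand-in for a graph-convolutional output
private = laplace_perturb(user_embedding, sensitivity=1.0, epsilon=2.0, rng=rng)
# Smaller epsilon means stronger privacy and larger expected distortion,
# so recommendation quality trades off against attribute-inference risk.
```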
1 code implementation • 20 May 2020 • Shijie Zhang, Hongzhi Yin, Tong Chen, Quoc Viet Hung Nguyen, Zi Huang, Lizhen Cui
Therefore, it is of great practical significance to construct a robust recommender system that is able to generate stable recommendations even in the presence of shilling attacks.
no code implementations • 10 Jul 2019 • Tianxiu Yu, Shijie Zhang, Cong Lin, ShaoDi You, Jian Wu, Jiawan Zhang, Xiaohong Ding, Huili An
Following this trend, we release the first public dataset for Dunhuang Grotto Painting restoration.
no code implementations • 20 Dec 2016 • Shijie Zhang, Lizhen Qu, ShaoDi You, Zhenglu Yang, Jiawan Zhang
In this paper, we propose the first model able to generate visually grounded questions of diverse types for a single image.