Backdoor attacks and defenses in feature-partitioned collaborative learning

7 Jul 2020 · Yang Liu, Zhihao Yi, Tianjian Chen

Since there are multiple parties in collaborative learning, malicious parties might manipulate the learning process for their own purposes through backdoor attacks. However, most existing work considers only the federated learning scenario in which data are partitioned by samples. Feature-partitioned learning is another important scenario, since in many real-world applications features are distributed across different parties. Attacks and defenses in this scenario are especially challenging when the attackers have no labels and the defenders cannot access the data or model parameters of other participants. In this paper, we show that even parties with no access to labels can successfully inject backdoor attacks, achieving high accuracy on both the main and backdoor tasks. We then introduce several defense techniques and demonstrate that the backdoor can be successfully blocked by a combination of them without hurting main-task accuracy. To the best of our knowledge, this is the first systematic study of backdoor attacks in the feature-partitioned collaborative learning framework.
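To make the setting concrete, below is a minimal sketch of feature-partitioned (split-learning-style) training between a passive party that holds only features and an active party that holds labels, with a note on where a label-free attacker could stamp a trigger. All names, dimensions, and the feature-space trigger here are illustrative assumptions for exposition; they are not the paper's actual attack or defense mechanisms.

```python
# Minimal sketch of feature-partitioned (vertical) collaborative learning.
# Class names, dimensions, and the trigger below are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic data: 200 samples whose features are split across two parties.
n, d_a, d_b, n_classes = 200, 5, 5, 2
x_a = torch.randn(n, d_a)               # features held by the passive party (no labels)
x_b = torch.randn(n, d_b)               # features held by the active party
y = torch.randint(0, n_classes, (n,))   # labels held only by the active party

class PassiveParty:
    """Holds features but no labels; sends embeddings, receives embedding gradients."""
    def __init__(self, d_in, d_emb):
        self.model = nn.Linear(d_in, d_emb)
        self.opt = torch.optim.SGD(self.model.parameters(), lr=0.1)

    def forward(self, x):
        self.emb = self.model(x)
        # Detach before sending, so the active party never sees this party's model.
        return self.emb.detach().requires_grad_(True)

    def backward(self, emb_grad):
        self.opt.zero_grad()
        self.emb.backward(emb_grad)      # propagate the received gradients locally
        self.opt.step()

class ActiveParty:
    """Holds labels; aggregates embeddings, computes the loss, returns embedding gradients."""
    def __init__(self, d_in, d_emb, n_classes):
        self.bottom = nn.Linear(d_in, d_emb)
        self.top = nn.Linear(2 * d_emb, n_classes)
        self.opt = torch.optim.SGD(
            list(self.bottom.parameters()) + list(self.top.parameters()), lr=0.1)

    def step(self, emb_from_passive, x, y):
        self.opt.zero_grad()
        logits = self.top(torch.cat([emb_from_passive, self.bottom(x)], dim=1))
        loss = F.cross_entropy(logits, y)
        loss.backward()
        self.opt.step()
        # The gradient w.r.t. its embedding is all the passive party ever receives.
        return emb_from_passive.grad.clone(), loss.item()

passive, active = PassiveParty(d_a, 4), ActiveParty(d_b, 4, n_classes)

# A malicious passive party could stamp a trigger pattern onto a few of its own
# feature columns without ever needing labels (illustrative, hypothetical trigger).
trigger_rows = torch.arange(0, 10)
x_a[trigger_rows, :2] = 3.0

for epoch in range(5):
    emb = passive.forward(x_a)
    emb_grad, loss = active.step(emb, x_b, y)
    passive.backward(emb_grad)
    print(f"epoch {epoch}: loss={loss:.3f}")
```

The sketch highlights the constraint the abstract emphasizes: the only signal the label-free party receives is the gradient of the loss with respect to its own embeddings, and that limited channel is what both the attacks and the defenses studied in the paper operate on.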
