Quantification of the Leakage in Federated Learning

12 Oct 2019  ·  Zhaorui Li, Zhicong Huang, Chaochao Chen, Cheng Hong ·

With the growing emphasis on users' privacy, federated learning has become increasingly popular. Many architectures have been proposed to improve its security, and most of them assume that the gradients exchanged during training cannot leak information about the data. However, recent work has shown that such gradients may leak the training data. In this paper, we analyze this leakage for a federated approximated logistic regression model and show that the gradients can leak the complete training data when every element of the input is either 0 or 1.
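To illustrate the core idea, here is a minimal sketch of why binary inputs are fully recoverable from a per-example logistic regression gradient. This uses the exact sigmoid rather than the paper's polynomial approximation, and all function names are illustrative: the gradient with respect to the weights is the residual times the input, and the gradient with respect to the bias is the residual itself, so dividing the two reveals the input exactly.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gradient(w, b, x, y):
    """Per-example gradient of the logistic loss w.r.t. (w, b)."""
    r = sigmoid(np.dot(w, x) + b) - y  # scalar residual
    return r * x, r                    # grad_w = r * x, grad_b = r

def recover_binary_input(grad_w, grad_b):
    """If every element of x is 0 or 1, then grad_w = r * x and
    grad_b = r, so x = grad_w / grad_b whenever r != 0."""
    return np.round(grad_w / grad_b).astype(int)

# Hypothetical example: a server observing one example's gradient.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.integers(0, 2, size=8)       # secret binary input
grad_w, grad_b = logistic_gradient(w, 0.5, x, y=1)
assert np.array_equal(recover_binary_input(grad_w, grad_b), x)
```

The attack only needs the ratio of the weight gradient to the bias gradient, so averaging over a batch of size one (or otherwise isolating a single example's gradient) suffices; the residual is nonzero whenever the prediction is not exactly the label.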


Categories


Cryptography and Security
