Adaptive Parameterization of Deep Learning Models for Federated Learning

6 Feb 2023 · Morten From Elvebakken, Alexandros Iosifidis, Lukas Esterle

Federated Learning offers a way to train deep neural networks in a distributed fashion. While this addresses limitations related to distributed data, it incurs a communication overhead as the model parameters or gradients need to be exchanged regularly during training. This can become an issue with large-scale distribution of learning tasks and can negate the benefit of the respective resource distribution. In this paper, we propose to utilise parallel Adapters for Federated Learning. Using various datasets, we show that Adapters can be incorporated into different Federated Learning techniques. We highlight that our approach achieves inference performance similar to training the full model while reducing the communication overhead by roughly 90%. We further explore the applicability of Adapters in cross-silo and cross-device settings, as well as under different non-IID data distributions.
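
To illustrate the general idea, the following is a minimal sketch (not the authors' code) of a parallel adapter wrapped around a frozen base layer, combined with federated averaging applied only to the adapter parameters. The class and function names (ParallelAdapter, adapter_state, fedavg_adapters) and the bottleneck size are illustrative assumptions; the paper's actual architectures and aggregation details may differ.

```python
# Sketch: parallel adapters + FedAvg over adapter parameters only.
# Only the adapter weights are exchanged with the server, which is the
# source of the communication savings described in the abstract.
import copy
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """A small bottleneck branch added in parallel to a frozen base layer."""
    def __init__(self, base_layer: nn.Linear, bottleneck_dim: int = 16):
        super().__init__()
        self.base = base_layer
        for p in self.base.parameters():   # base weights stay frozen;
            p.requires_grad = False        # only the adapter is trained
        self.adapter = nn.Sequential(
            nn.Linear(base_layer.in_features, bottleneck_dim),
            nn.ReLU(),
            nn.Linear(bottleneck_dim, base_layer.out_features),
        )

    def forward(self, x):
        # Parallel (additive) adapter: base output plus adapter output.
        return self.base(x) + self.adapter(x)

def adapter_state(model: nn.Module) -> dict:
    """Extract only the adapter parameters -- the part sent to the server."""
    return {k: v.detach().clone()
            for k, v in model.state_dict().items() if ".adapter." in k}

def fedavg_adapters(global_model: nn.Module, client_states: list):
    """FedAvg restricted to adapter parameters."""
    avg = {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
           for k in client_states[0]}
    global_model.load_state_dict(avg, strict=False)

# Usage: clients fine-tune local copies, the server averages their adapters.
global_model = nn.Sequential(ParallelAdapter(nn.Linear(32, 64)), nn.ReLU(),
                             nn.Linear(64, 10))
clients = [copy.deepcopy(global_model) for _ in range(2)]
# ... local training on each client's data would go here ...
fedavg_adapters(global_model, [adapter_state(c) for c in clients])
```

Since the frozen base parameters are identical on every client, only the small adapter state needs to travel in each communication round, which is where the reported ~90% reduction in communicated parameters comes from.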
