Towards Generalizable Personalized Federated Learning with Adaptive Local Adaptation

29 Sep 2021 · Sijia Chen, Baochun Li

Personalized federated learning aims to find a shared global model that can be adapted to meet personal needs on each individual device. Starting from such a shared initial model, devices should be able to easily adapt to their local datasets to obtain personalized models. However, we find that existing works do not generalize well across non-IID scenarios with varying degrees of heterogeneity in the underlying data distribution among devices. It is therefore challenging for these methods to train a suitable global model that effectively induces high-quality personalized models without changing their learning objectives. In this paper, we point out that this issue can be addressed by balancing the information flow from the initial model and from the training dataset into the local adaptation. We then prove a theorem, referred to as the {\em adaptive trade-off theorem}, showing that adaptive local adaptation is equivalent to optimizing this information flow from an information-theoretic perspective. Building on these theoretical insights, we propose a new framework called {\em adaptive federated meta-learning} (AFML), designed to achieve generalizable personalized federated learning that maintains solid performance under non-IID scenarios with different degrees of diversity among devices. We evaluate AFML on an extensive set of such non-IID scenarios, using both the CIFAR-100 and Shakespeare datasets. Experimental results demonstrate that AFML maintains the highest personalized accuracy compared to alternative leading frameworks, while requiring the fewest communication rounds and local updates.
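
The abstract does not include pseudocode, so the following is only a minimal PyTorch sketch of what "balancing the information flow from the initial model and the training dataset" could look like during local adaptation. It uses a proximal regularizer toward the global initialization (a technique from related methods such as FedProx, not necessarily the authors' exact mechanism); all names here (`local_adaptation`, `lam`, `steps`) are hypothetical, and the adaptive choice of the trade-off coefficient, which is the core of AFML, is not shown.

```python
import copy
import torch
import torch.nn.functional as F

def local_adaptation(global_model, loader, lam=0.1, lr=0.01, steps=5):
    """Adapt a copy of the global model to one device's local data.

    lam balances the two information sources: a larger lam keeps the
    personalized model close to the global initialization, while a
    smaller lam lets the local dataset dominate the adaptation.
    """
    model = copy.deepcopy(global_model)
    # Frozen snapshot of the global initialization.
    init_params = [p.detach().clone() for p in global_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(steps):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            # Proximal term: penalize drift away from the initialization.
            prox = sum((p - p0).pow(2).sum()
                       for p, p0 in zip(model.parameters(), init_params))
            (loss + 0.5 * lam * prox).backward()
            opt.step()
    return model
```

Setting `lam = 0` recovers plain fine-tuning on local data, while a large `lam` effectively ignores the local dataset; an adaptive scheme would tune this balance per device according to its data heterogeneity.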
