FedAgg: Adaptive Federated Learning with Aggregated Gradients

28 Mar 2023 · Wenhao Yuan, Xuehe Wang

Federated Learning (FL) has emerged as a pivotal paradigm for distributed model training, enabling multiple devices to collaboratively refine a shared model with their respective datasets under the orchestration of a central server, while keeping private data local. Nonetheless, the non-independent-and-identically-distributed (Non-IID) data generated on heterogeneous clients and the incessant information exchange among participants may markedly impede training efficacy and slow the convergence rate. In this paper, we refine the conventional stochastic gradient descent (SGD) methodology by introducing aggregated gradients at each local training epoch and propose an adaptive learning rate iterative algorithm that accounts for the divergence between local and average parameters. To surmount the obstacle of acquiring other clients' local information, we introduce a mean-field approach that leverages two mean-field terms to approximately estimate the average local parameters and gradients over time, precluding the need for local information exchange among clients, and we design a decentralized adaptive learning rate for each client. Through meticulous theoretical analysis, we provide a robust convergence guarantee for our proposed algorithm and ensure its wide applicability. Our numerical experiments substantiate the superiority of our framework over existing state-of-the-art FL strategies in enhancing model performance and accelerating convergence under both IID and Non-IID data distributions.
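The abstract names the ingredients of FedAgg (aggregated gradients at each local epoch, a learning rate that adapts to the divergence between local and average parameters, and mean-field estimates of the averages) but not the exact update rules. The sketch below is a minimal illustration under assumed forms: the 50/50 gradient blend, the 1/(1 + divergence) learning-rate schedule, and the names local_update, grad_fn, w_bar, and g_bar are all hypothetical placeholders, not the paper's actual algorithm.

```python
# Minimal sketch of a FedAgg-style client update. The specific formulas
# below (gradient blend, learning-rate schedule) are illustrative
# assumptions; the paper's exact update rules are not given on this page.
import numpy as np

def local_update(w, grad_fn, w_bar, g_bar, base_lr=0.1, epochs=5):
    """Run several local epochs with aggregated gradients.

    w       -- this client's model parameters (np.ndarray)
    grad_fn -- callable returning the local stochastic gradient at w
    w_bar   -- mean-field estimate of the average parameters across clients
    g_bar   -- mean-field estimate of the average gradient across clients
    """
    for _ in range(epochs):
        g_local = grad_fn(w)
        # Aggregated gradient: blend the local gradient with the estimated
        # population-average gradient (assumed 50/50 mix).
        g_agg = 0.5 * g_local + 0.5 * g_bar
        # Adaptive learning rate: shrink the step as this client's
        # parameters drift from the estimated average (assumed form).
        divergence = np.linalg.norm(w - w_bar)
        lr = base_lr / (1.0 + divergence)
        w = w - lr * g_agg
    return w
```

In the paper's setting, the two mean-field terms stand in for the true averages so that clients never exchange local information directly; here they are simply passed in as precomputed arrays for illustration.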
