Search Results for author: Irene Wang

Found 2 papers, 1 paper with code

FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout

1 code implementation • NeurIPS 2023 • Irene Wang, Prashant J. Nair, Divya Mahajan

Building on this dropout technique, we develop an adaptive training framework, Federated Learning using Invariant Dropout (FLuID).

Federated Learning • Model extraction

Reducing Impacts of System Heterogeneity in Federated Learning using Weight Update Magnitudes

no code implementations • 30 Aug 2022 • Irene Wang

Invariant Dropout uses neuron updates from the non-straggler clients to develop a tailored sub-model for each straggler during each training iteration.

Federated Learning
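To make the neuron-selection idea in the snippet above concrete, here is a minimal sketch of one plausible per-layer step: rank neurons by the magnitude of the updates reported by non-straggler clients and keep only the most-changing ones in a straggler's sub-model. This is an illustration under assumptions, not the FLuID implementation; the names `build_submodel_mask` and `keep_ratio`, the mean aggregation, and the top-k selection rule are all hypothetical.

```python
# Hypothetical sketch of Invariant-Dropout-style neuron selection.
# Assumes per-neuron weight-update magnitudes from non-straggler clients
# are already available; all names here are illustrative.
import numpy as np

def build_submodel_mask(nonstraggler_updates: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Return a boolean mask of neurons a straggler should keep.

    nonstraggler_updates: shape (num_clients, num_neurons), per-neuron
        update magnitudes reported by non-straggler clients.
    keep_ratio: fraction of neurons the straggler can afford to train.
    """
    # Aggregate update magnitude per neuron across non-straggler clients.
    per_neuron = np.abs(nonstraggler_updates).mean(axis=0)
    # Neurons with the smallest updates are treated as "invariant" and dropped;
    # the straggler trains only the remaining, fast-changing neurons.
    num_keep = max(1, int(keep_ratio * per_neuron.size))
    keep_idx = np.argsort(per_neuron)[-num_keep:]
    mask = np.zeros(per_neuron.size, dtype=bool)
    mask[keep_idx] = True
    return mask

# Example: 4 non-straggler clients, an 8-neuron layer, straggler keeps 50%.
rng = np.random.default_rng(0)
updates = rng.normal(size=(4, 8))
print(build_submodel_mask(updates, keep_ratio=0.5))
```

In this reading, the mask would be recomputed each training iteration from fresh non-straggler updates, so the straggler's sub-model tracks whichever neurons are currently changing most.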
