FediOS: Decoupling Orthogonal Subspaces for Personalization in Feature-skew Federated Learning

Personalized federated learning (pFL) enables collaborative training among multiple clients to enhance the capability of customized local models. In pFL, clients may have heterogeneous (also known as non-IID) data, which poses a key challenge: how to decouple the data knowledge into generic knowledge for global sharing and personalized knowledge for preserving local personalization. Typical pFL methods focus on label distribution skew and adopt a decoupling scheme in which the model is split into a common feature extractor and two prediction heads (generic and personalized). However, such a decoupling scheme cannot solve the essential problem of feature-skew heterogeneity, because a common feature extractor cannot separate the generic and personalized features. Therefore, in this paper, we rethink the architecture decoupling design for feature-skew pFL and propose an effective pFL method called FediOS. In FediOS, we reformulate the decoupling into two feature extractors (generic and personalized) and one shared prediction head. Orthogonal projections are used by clients to map the generic features into one common subspace and to scatter the personalized features into different subspaces, thereby decoupling the two. In addition, a shared prediction head is trained to balance the importance of generic and personalized features during inference. Extensive experiments on four vision datasets demonstrate that our method achieves state-of-the-art pFL performance under feature-skew heterogeneity.
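
To make the described architecture concrete, below is a minimal sketch of the two-extractor, shared-head decoupling with orthogonal subspace projections, assuming a PyTorch setup. The module names, the placeholder MLP extractors, and the QR-based construction of the projectors are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a FediOS-style decoupled model: two feature
# extractors (generic and personalized), one shared prediction head, and
# orthogonal projectors that send generic features to a common subspace
# and personalized features to a disjoint, client-specific subspace.
import torch
import torch.nn as nn


class DecoupledModel(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int,
                 proj_g: torch.Tensor, proj_p: torch.Tensor):
        super().__init__()
        # Placeholder extractors; real models would use CNN backbones.
        self.generic = nn.Sequential(nn.Flatten(), nn.Linear(784, feat_dim), nn.ReLU())
        self.personal = nn.Sequential(nn.Flatten(), nn.Linear(784, feat_dim), nn.ReLU())
        # Fixed projectors registered as buffers (not updated by the optimizer):
        # proj_g is shared by all clients, proj_p is client-specific.
        self.register_buffer("proj_g", proj_g)  # (feat_dim, feat_dim)
        self.register_buffer("proj_p", proj_p)  # (feat_dim, feat_dim)
        # One shared head balances generic and personalized features.
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x):
        g = self.generic(x) @ self.proj_g    # generic features -> common subspace
        p = self.personal(x) @ self.proj_p   # personalized features -> local subspace
        return self.head(torch.cat([g, p], dim=-1))


# Orthogonal subspaces can be built from an orthonormal basis obtained by QR:
# the first half of the basis spans the common subspace, the rest spans a
# subspace orthogonal to it (per-client slices would differ across clients).
q, _ = torch.linalg.qr(torch.randn(64, 64))
proj_g = q[:, :32] @ q[:, :32].T   # projector onto the common subspace
proj_p = q[:, 32:] @ q[:, 32:].T   # projector onto an orthogonal subspace

model = DecoupledModel(feat_dim=64, num_classes=10, proj_g=proj_g, proj_p=proj_p)
logits = model(torch.randn(8, 1, 28, 28))  # e.g. a batch of MNIST-shaped inputs
```

In a federated round under this sketch, the generic extractor and shared head would be aggregated across clients while each personalized extractor (and its projector) stays local, which is what keeps the two kinds of knowledge separated.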
