
Model-based Federated Learning for Accurate MR Image Reconstruction from Undersampled k-space Data

Deep learning-based methods have achieved encouraging performance in magnetic resonance (MR) image reconstruction. Nevertheless, to learn a powerful and robust model, these methods generally require large quantities of data, and collecting such data from multiple centers may raise ethical and data-privacy issues. Lately, federated learning has emerged as a promising solution for exploiting multi-center data while avoiding data transfer between institutions. However, the data from different centers are highly heterogeneous, and existing federated learning methods tend to combine the clients' information through simple average aggregation, which limits the performance and generalization capability of the trained models. In this paper, we propose a Model-based Federated learning framework (ModFed). ModFed makes three major contributions: 1) unlike existing data-driven federated learning methods, model-driven neural networks are designed to relieve each client's dependency on large amounts of data; 2) an adaptive dynamic aggregation scheme is proposed to address the data heterogeneity issue and improve the generalization capability and robustness of the trained models; 3) a spatial Laplacian attention mechanism and a personalized client-side loss regularization are introduced to capture detailed information for accurate image reconstruction. ModFed is evaluated on three in-vivo datasets. Experimental results show that ModFed outperforms five state-of-the-art federated learning approaches in both image reconstruction quality and model generalization. Code will be made available at https://github.com/ternencewu123/ModFed.
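To make the contrast with average aggregation concrete, the sketch below compares plain FedAvg (each client weighted by its local dataset size) with a hypothetical adaptive aggregation rule that re-weights clients by a softmax over their local losses, so clients with harder, more heterogeneous data contribute more to the global model. This is an illustrative stand-in under assumed names (`fedavg`, `adaptive_aggregate`), not the exact aggregation rule used in ModFed; model parameters are represented as flat lists of floats for simplicity.

```python
import math

def fedavg(client_weights, client_sizes):
    """Plain FedAvg: average each parameter across clients,
    weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

def adaptive_aggregate(client_weights, client_losses, temperature=1.0):
    """Hypothetical adaptive aggregation (not the paper's exact scheme):
    aggregation weights come from a softmax over client losses, so
    clients whose data the global model fits poorly are up-weighted
    instead of being averaged away."""
    exps = [math.exp(loss / temperature) for loss in client_losses]
    z = sum(exps)
    alphas = [e / z for e in exps]
    return [
        sum(a * w[i] for a, w in zip(alphas, client_weights))
        for i in range(len(client_weights[0]))
    ]

# Two clients with one-parameter "models": equal sizes average evenly,
# while a higher local loss shifts the aggregate toward that client.
print(fedavg([[1.0], [3.0]], [1, 1]))                    # [2.0]
print(adaptive_aggregate([[1.0], [3.0]], [0.0, 4.0]))    # closer to 3.0
```

In a real federated round, `client_weights` would be the per-client model state dicts after local training, and the server would broadcast the aggregated parameters back to every client.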
