Greedy Learning for Large-Scale Neural MRI Reconstruction

Model-based deep learning approaches have recently shown state-of-the-art performance for accelerated MRI reconstruction. These methods unroll iterative proximal gradient descent, alternating between a data-consistency update and a neural-network-based proximal operator. However, high-resolution and multi-dimensional imaging (e.g., 3D MRI) demands many unrolled iterations with sufficiently expressive proximal operators. This makes traditional end-to-end training via backpropagation impractical: computing gradients requires storing intermediate activations for every layer, at prohibitive memory and compute cost. To address this challenge, we advocate an alternative training method that greedily relaxes the objective. We split the end-to-end network into decoupled modules and optimize each module separately, avoiding the need to compute costly end-to-end gradients. We empirically demonstrate that the proposed greedy learning method requires 6x less memory with no additional computation, while generalizing slightly better than backpropagation.
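
To make the greedy scheme concrete, the sketch below trains each unrolled module against its own local loss, so backpropagation never crosses module boundaries. This is a minimal PyTorch sketch under stated assumptions: the names (`ProxNet`, `dc_step`, `UnrollModule`, `greedy_train`), the residual CNN proximal, and the per-module MSE loss are illustrative choices, not the paper's implementation; `A` and `AH` stand in for the MRI forward operator and its adjoint.

```python
# Hypothetical sketch of greedy module-wise training for an unrolled
# proximal-gradient network; names and losses are illustrative, not the
# paper's code.
import torch
import torch.nn as nn


class ProxNet(nn.Module):
    """Small residual CNN standing in for the learned proximal operator."""

    def __init__(self, ch=2):  # e.g., 2 channels for real/imaginary parts
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual refinement


def dc_step(x, y, A, AH, step=1.0):
    """Gradient step on the data-consistency term ||A x - y||^2.
    A / AH are placeholders for the MRI forward operator and its adjoint."""
    return x - step * AH(A(x) - y)


class UnrollModule(nn.Module):
    """One decoupled block: data consistency followed by a learned proximal."""

    def __init__(self):
        super().__init__()
        self.prox = ProxNet()

    def forward(self, x, y, A, AH):
        return self.prox(dc_step(x, y, A, AH))


def greedy_train(modules, loader, A, AH, epochs=1):
    """Optimize each module separately against a local loss, so no
    end-to-end gradient (and no cross-module activation storage) is needed."""
    for k, module in enumerate(modules):
        opt = torch.optim.Adam(module.parameters(), lr=1e-4)
        for _ in range(epochs):
            for x0, y, target in loader:  # initial image, measurements, reference
                # Run the earlier, already-trained modules without tracking gradients.
                x = x0
                with torch.no_grad():
                    for prev in modules[:k]:
                        x = prev(x, y, A, AH)
                # Only module k's activations are kept for backpropagation.
                out = module(x, y, A, AH)
                loss = nn.functional.mse_loss(out, target)  # assumed local loss
                opt.zero_grad()
                loss.backward()
                opt.step()
```

Because gradients are confined to the module currently being trained, activation memory scales with a single module rather than the full unrolled depth, which is the source of the memory savings the abstract reports.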
