Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis

Deep learning has shown promising results in medical imaging analysis. Medical image segmentation frequently relies on volumetric data, which calls for 3D architectures valued for their capacity to capture interslice context. However, because of the 3D convolutions, max pooling, up-convolutions, and other operations they employ, these architectures are typically slower and more computationally expensive than their 2D equivalents. Furthermore, pretrained 3D model weights are scarce, and 3D pretraining is often difficult. To address these challenges, we present a simple yet effective 2D method that handles 3D data while still embedding 3D knowledge during training: we transform volumetric data into 2D super images and segment them with 2D networks. Our method generates a single high-resolution 2D image by stitching the slices of the 3D volume side by side. Although explicit depth information is lost, we expect deep neural networks to capture and learn these interslice relationships spatially. This work aims to offer a novel perspective on handling volumetric data, and we test the hypothesis with both CNN and ViT networks as well as self-supervised pretraining. Using only 2D counterparts, our approach attains results equal, if not superior, to those of 3D networks while reducing model complexity roughly threefold. Because volumetric data is relatively scarce, we anticipate that our approach will encourage further studies, particularly in medical imaging analysis.
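
The abstract states only that slices are stitched side by side; the exact grid layout and any padding are not specified. As a minimal illustrative sketch, the NumPy function below (volume_to_super_image is a hypothetical name, and the near-square grid is an assumption) tiles the depth slices of a (D, H, W) volume into one 2D super image:

    import math
    import numpy as np

    def volume_to_super_image(volume: np.ndarray) -> np.ndarray:
        """Tile the depth slices of a (D, H, W) volume into a single 2D grid.

        Assumes a near-square grid layout; the paper's abstract does not
        specify how slices are arranged, only that they are stitched
        side by side.
        """
        depth, height, width = volume.shape
        # Pick a grid with rows * cols >= depth, as close to square as possible.
        cols = math.ceil(math.sqrt(depth))
        rows = math.ceil(depth / cols)
        super_image = np.zeros((rows * height, cols * width), dtype=volume.dtype)
        for idx in range(depth):
            # Place slice idx at grid cell (r, c), filling row by row.
            r, c = divmod(idx, cols)
            super_image[r * height:(r + 1) * height,
                        c * width:(c + 1) * width] = volume[idx]
        return super_image

    # Example: a 32-slice volume of 64x64 slices becomes one 384x384 image
    # (a 6x6 grid with four empty cells left as zeros).
    volume = np.random.rand(32, 64, 64).astype(np.float32)
    print(volume_to_super_image(volume).shape)  # (384, 384)

The resulting 2D image can then be fed to any standard 2D segmentation network, and the predicted super-image mask can be cut back into per-slice masks by inverting the same tiling.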
