no code implementations • 28 Apr 2022 • Jiang Liu, Srivathsa Pasumarthi, Ben Duffy, Enhao Gong, Keshav Datta, Greg Zaharchuk
In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing.
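The core idea — accept any subset of input contrasts and synthesize the missing ones — can be sketched with a single self-attention step: each contrast becomes a token, a learned mask token stands in for each missing contrast, and attention lets the available contrasts inform the masked positions. This is a minimal illustrative sketch, not the authors' MMT architecture; the contrast names, dimensions, and (random) parameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16                                  # hypothetical feature dim per contrast token
contrasts = ["T1", "T2", "FLAIR", "T1Gd"]

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention: every contrast token attends to
    every other, so available contrasts inform the masked ones."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(Q @ K.T / np.sqrt(D))
    return scores @ V

# Stand-in "learned" parameters (random here, trained in practice)
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
mask_token = rng.standard_normal(D) * 0.1   # placeholder for a missing contrast

# Any subset of contrasts may be present; missing ones get the mask token
features = {"T1": rng.standard_normal(D), "FLAIR": rng.standard_normal(D)}
tokens = np.stack([features.get(c, mask_token) for c in contrasts])

out = self_attention(tokens, Wq, Wk, Wv)
# out[i] at a masked position is the synthesized representation for that contrast
```

Because the mask token occupies the same slot regardless of which contrasts are missing, the same model handles every input subset without retraining.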
no code implementations • 8 Mar 2021 • Ke Wang, Enhao Gong, Yuxin Zhang, Suchandrima Banerjee, Greg Zaharchuk, John Pauly
Multi-contrast Magnetic Resonance Imaging (MRI) acquisitions from a single scan have tremendous potential to streamline exams and reduce imaging time.
no code implementations • ICLR 2019 • Jiahong Ouyang, Guanhua Wang, Enhao Gong, Kevin Chen, John Pauly, Greg Zaharchuk
Deep Learning (DL) algorithms based on Generative Adversarial Networks (GANs) have demonstrated great potential in computer vision tasks such as image restoration.
1 code implementation • 15 Mar 2018 • Jaeyeon Yoon, Enhao Gong, Itthi Chatnuntawech, Berkin Bilgic, Jingu Lee, Woojin Jung, Jingyu Ko, Hosan Jung, Kawin Setsompop, Greg Zaharchuk, Eung Yeop Kim, John Pauly, Jong-Ho Lee
The QSMnet maps of the test dataset were compared with those from TKD and MEDI for image quality and consistency in multiple head orientations.
Image and Video Processing
no code implementations • 12 Dec 2017 • Junshen Xu, Enhao Gong, John Pauly, Greg Zaharchuk
Experiments show that the proposed method can reconstruct low-dose PET images at standard-dose quality using only one two-hundredth of the standard dose.
2 code implementations • 31 May 2017 • Morteza Mardani, Enhao Gong, Joseph Y. Cheng, Shreyas Vasanawala, Greg Zaharchuk, Marcus Alley, Neil Thakur, Song Han, William Dally, John M. Pauly, Lei Xing
A multilayer convolutional neural network is then jointly trained on diagnostic-quality images to discriminate the quality of the projections.
2 code implementations • 15 Jul 2016 • Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, Bryan Catanzaro, William J. Dally
We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance.
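The dense-sparse-dense flow can be sketched in three phases: train dense, prune the smallest-magnitude weights and retrain under that sparsity mask, then restore the pruned connections (at zero) and retrain densely. Below is a minimal numpy sketch of the magnitude-pruning mask used in the sparse phase — an illustration of the idea, not the authors' implementation; the function name and example weights are hypothetical.

```python
import numpy as np

def dsd_prune_mask(weights, sparsity):
    """Magnitude-based mask for the sparse phase of DSD:
    zero out the `sparsity` fraction of smallest-|w| weights,
    keep the rest."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)          # number of weights to drop
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold

# Dense -> Sparse: apply the mask (and keep re-applying it each
# training step so pruned weights stay zero during retraining)
w = np.array([[0.9, -0.05, 0.3],
              [-0.8, 0.02, -0.4]])
mask = dsd_prune_mask(w, sparsity=0.5)
w_sparse = w * mask

# Sparse -> Dense: drop the mask, re-initialize pruned weights to
# zero, and fine-tune all weights again at a lower learning rate
```

The sparse phase acts as a regularizer; the final dense phase then recovers capacity, which is where the reported optimization gains come from.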