1 code implementation • 23 Jul 2022 • Chen Wei, Shenghan Ren, Kaitai Guo, Haihong Hu, Jimin Liang
Most existing Transformer-based networks for medical image segmentation adopt a U-Net-like architecture: an encoder that applies a sequence of Transformer blocks to convert the input medical image from a high-resolution representation into low-resolution feature maps, and a decoder that gradually recovers the high-resolution representation from those low-resolution feature maps.
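To make the encoder-decoder pattern described above concrete, here is a minimal PyTorch-style sketch of a U-Net-like Transformer segmenter. All module names, depths, and dimensions are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of a U-Net-like Transformer segmenter (illustrative only;
# block structure and dimensions are assumptions, not the paper's model).
import torch
import torch.nn as nn

class TransformerStage(nn.Module):
    """A stack of Transformer blocks operating on flattened spatial tokens."""
    def __init__(self, dim, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens = self.blocks(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class TransUNetSketch(nn.Module):
    def __init__(self, in_ch=1, dim=64, num_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, dim, 4, stride=4)    # patchify, 4x down
        self.enc1 = TransformerStage(dim)
        self.down = nn.Conv2d(dim, dim * 2, 2, stride=2)  # to low-res features
        self.enc2 = TransformerStage(dim * 2)
        self.up = nn.ConvTranspose2d(dim * 2, dim, 2, stride=2)
        self.dec = TransformerStage(dim)
        self.head = nn.ConvTranspose2d(dim, num_classes, 4, stride=4)

    def forward(self, x):                       # H, W divisible by 8
        s1 = self.enc1(self.stem(x))            # high-res encoder features
        s2 = self.enc2(self.down(s1))           # low-res encoder features
        d = self.dec(self.up(s2) + s1)          # skip connection, U-Net style
        return self.head(d)                     # per-pixel class logits
```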
1 code implementation • 31 Oct 2020 • Chen Wei, Yiping Tang, Chuang Niu, Haihong Hu, Yue Wang, Jimin Liang
To enhance the predictive performance of neural predictors, we devise two self-supervised learning methods, from different perspectives, to pre-train the architecture embedding part of the predictors so that it generates meaningful representations of neural architectures.
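As one plausible instance of this idea, the sketch below pre-trains an architecture encoder with a contrastive objective and then reuses it as the embedding part of an accuracy predictor. The flat input encoding, encoder, and InfoNCE loss are illustrative assumptions, not the paper's two specific self-supervised methods.

```python
# Hedged sketch: contrastively pre-train an architecture encoder, then reuse
# it inside a neural predictor. Encoder and loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArchEncoder(nn.Module):
    """Embeds a flat encoding of an architecture (e.g., one-hot op choices)."""
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))

    def forward(self, x):
        return self.net(x)

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss between two views of the same batch of architectures."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                            # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # diagonal positives
    return F.cross_entropy(logits, targets)

class NeuralPredictor(nn.Module):
    """Pre-trained embedding part plus a small head predicting accuracy."""
    def __init__(self, encoder, emb_dim=64):
        super().__init__()
        self.encoder = encoder     # weights initialized from pre-training
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, x):
        return self.head(self.encoder(x)).squeeze(-1)
```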
no code implementations • 8 Jul 2020 • Chuang Niu, Wenxiang Cong, Fenglei Fan, Hongming Shan, Mengzhou Li, Jimin Liang, Ge Wang
Deep neural network-based methods have achieved promising results for CT metal artifact reduction (MAR); most of these methods rely on many synthesized paired images for training.
1 code implementation • 28 Mar 2020 • Chen Wei, Chuang Niu, Yiping Tang, Yue Wang, Haihong Hu, Jimin Liang
In this paper, we propose NPENAS, a neural-predictor-guided evolutionary algorithm that enhances the exploration ability of evolutionary algorithms (EA) for neural architecture search (NAS), and we design two kinds of neural predictors.
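The following sketch shows the general shape of a predictor-guided evolutionary search loop of this kind. The callables `mutate` and `train_and_eval` are hypothetical placeholders supplied by the user, and `predictor` is assumed to expose a scikit-learn-style fit/predict interface; none of this is the paper's exact algorithm.

```python
# Hedged sketch of a predictor-guided evolutionary NAS loop. The helpers
# `mutate` and `train_and_eval` are hypothetical placeholders, and the
# predictor interface is assumed; this is not NPENAS itself.
import random

def predictor_guided_search(init_pool, predictor, mutate, train_and_eval,
                            n_rounds=10, n_candidates=100, top_k=10):
    history = {arch: train_and_eval(arch) for arch in init_pool}  # true scores
    for _ in range(n_rounds):
        predictor.fit(list(history), list(history.values()))
        parents = sorted(history, key=history.get, reverse=True)[:top_k]
        # Cheaply generate many mutated offspring to widen exploration...
        candidates = [mutate(random.choice(parents))
                      for _ in range(n_candidates)]
        # ...but spend real training budget only on the predictor's top picks.
        ranked = sorted(candidates,
                        key=lambda a: predictor.predict([a])[0], reverse=True)
        for arch in ranked[:top_k]:
            history[arch] = train_and_eval(arch)
    return max(history, key=history.get)
```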
1 code implementation • ECCV 2020 • Chuang Niu, Jun Zhang, Ge Wang, Jimin Liang
To train GATCluster in a completely unsupervised manner, we design four self-learning tasks with the constraints of transformation invariance, separability maximization, entropy analysis, and attention mapping.
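To illustrate the pattern of combining such self-learning constraints into one training loss, here is a hedged sketch over soft cluster-assignment probabilities. Each term is a simplified stand-in rather than GATCluster's exact formulation, and the attention-mapping constraint is omitted because it requires the model's attention maps.

```python
# Hedged sketch: combine simplified self-learning objectives into one loss.
# The terms are stand-ins for the constraints named above, not GATCluster's
# exact losses; p_orig / p_aug are soft cluster probabilities, shape (B, K).
EPS = 1e-8  # numerical floor to keep log() finite

def transformation_invariance(p_orig, p_aug):
    # Assignments should agree across input transformations: KL(p_orig||p_aug).
    return (p_orig * ((p_orig + EPS).log()
                      - (p_aug + EPS).log())).sum(dim=1).mean()

def separability_maximization(p):
    # Low per-sample entropy pushes assignments toward one-hot predictions.
    return -(p * (p + EPS).log()).sum(dim=1).mean()

def entropy_balance(p):
    # High entropy of the *mean* assignment spreads samples across clusters,
    # guarding against the trivial collapse into a single cluster.
    mean_p = p.mean(dim=0)
    return (mean_p * (mean_p + EPS).log()).sum()

def total_loss(p_orig, p_aug, w=(1.0, 1.0, 1.0)):
    return (w[0] * transformation_invariance(p_orig, p_aug)
            + w[1] * separability_maximization(p_orig)
            + w[2] * entropy_balance(p_orig))
```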
no code implementations • 18 Oct 2019 • Yiping Tang, Chuang Niu, Minghao Dong, Shenghan Ren, Jimin Liang
Many state-of-the-art methods predict the boundaries of action instances based on predetermined anchors, akin to anchor-based two-dimensional object detectors.
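For reference, the sketch below generates the kind of predetermined one-dimensional temporal anchors that such anchor-based methods rely on; the scales and stride are illustrative choices, not values from any particular paper.

```python
# Hedged sketch: tile multi-scale 1-D temporal anchors along the time axis,
# the mechanism anchor-based temporal action detectors build on. Scales and
# stride are illustrative assumptions.
def temporal_anchors(num_frames, scales=(16, 32, 64), stride=8):
    """Return (start, end) candidate segments tiled along the time axis."""
    anchors = []
    for center in range(0, num_frames, stride):
        for s in scales:
            start, end = center - s / 2, center + s / 2
            if start >= 0 and end <= num_frames:   # keep in-bounds segments
                anchors.append((start, end))
    return anchors

# e.g. temporal_anchors(256) yields multi-scale candidate segments whose
# boundaries a detector then classifies and regresses.
```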
no code implementations • 17 Sep 2018 • Chuang Niu, Shenghan Ren, Jimin Liang
Pixel-level annotation demands expensive human effort and thus limits the performance of deep networks, which usually benefit from more such training data.