1 code implementation • 29 Dec 2023 • Xiangyu Xiong, Yue Sun, Xiaohong Liu, Wei Ke, Chan-Tong Lam, Jiangang Chen, Mingfeng Jiang, Mingwei Wang, Hui Xie, Tong Tong, Qinquan Gao, Hao Chen, Tao Tan
Experimental results show that DisGAN consistently outperforms GAN-based augmentation methods on explainable binary classification.
no code implementations • 7 Dec 2023 • Hui Xie, Weiyu Xu, Ya Xing Wang, John Buatti, Xiaodong Wu
To combine the strengths of both approaches, in this study we propose to integrate the graph-cut approach into a deep learning network for end-to-end learning.
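A minimal sketch of the idea, not the authors' implementation: here the exact graph solve is replaced with a differentiable soft dynamic program over image columns, so per-pixel surface costs from a small CNN can be trained end to end. The class name, the soft-min relaxation, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch: a soft dynamic program standing in for the exact graph
# solve, so gradients flow from a surface estimate back into a cost CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftSurfaceDP(nn.Module):
    """Differentiable column-wise surface solve: at each column, soft-min
    over the previous column's row positions, penalizing large jumps."""
    def __init__(self, smooth_weight=1.0, temperature=0.1):
        super().__init__()
        self.smooth_weight = smooth_weight
        self.temperature = temperature

    def forward(self, cost):                     # cost: (B, H, W), low = on-surface
        B, H, W = cost.shape
        pos = torch.arange(H, dtype=cost.dtype, device=cost.device)
        jump = self.smooth_weight * (pos[:, None] - pos[None, :]).abs()  # (H, H)
        acc = cost[:, :, 0]                      # accumulated cost at column 0
        for w in range(1, W):
            # soft-min via log-sum-exp: a smoothed, differentiable minimum
            trans = acc[:, None, :] + jump[None, :, :]               # (B, H, H)
            acc = cost[:, :, w] - self.temperature * torch.logsumexp(
                -trans / self.temperature, dim=2)
        # soft arg-min over rows: expected surface row in the last column
        # (a full solver would backtrack to recover every column)
        probs = F.softmax(-acc / self.temperature, dim=1)
        return (probs * pos).sum(dim=1)          # (B,)

cnn = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # toy per-pixel cost head
img = torch.randn(2, 1, 32, 64)
surface_row = SoftSurfaceDP()(cnn(img).squeeze(1))
surface_row.sum().backward()                      # gradients reach the CNN end to end
```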
no code implementations • 8 Oct 2022 • Hui Xie, Weiyu Xu, Xiaodong Wu
Unfortunately, due to the scarcity of training data in medical imaging, it is challenging for DL networks to learn the global structure of the target surfaces, including surface smoothness.
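To make "surface smoothness" concrete, here is a tiny illustrative penalty (my assumption of the usual formulation, not taken from the paper) for a terrain-like surface that crosses each image column once:

```python
# Hedged sketch: smoothness as a penalty on jumps between adjacent columns.
import torch

def smoothness_penalty(surface):   # surface: (B, W) predicted row per column
    return (surface[:, 1:] - surface[:, :-1]).pow(2).mean()

pred = torch.tensor([[10.0, 11.0, 30.0, 31.0]])   # one abrupt 19-row jump
print(smoothness_penalty(pred))                   # dominated by that jump
```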
no code implementations • 6 Sep 2021 • Hui Xie, Zhuang Zhao, Jing Han, Yi Zhang, Lianfa Bai, Jun Lu
Various CNN-based methods have been developed in recent years to reconstruct hyperspectral images (HSIs), but most supervised deep learning methods aim to fit a brute-force mapping between the captured compressed image and the reference HSIs.
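For context, a toy version of the "brute-force mapping" such methods learn, written as an assumption rather than any specific paper's architecture: a plain CNN regresses a multi-band hyperspectral cube directly from a single 2-D compressed snapshot.

```python
# Hedged sketch of direct snapshot-to-HSI regression; layer sizes and the
# 31-band default are illustrative assumptions.
import torch
import torch.nn as nn

class BruteForceHSINet(nn.Module):
    def __init__(self, bands=31):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, bands, 3, padding=1),   # one output channel per band
        )

    def forward(self, snapshot):        # snapshot: (B, 1, H, W)
        return self.net(snapshot)       # cube:     (B, bands, H, W)

model = BruteForceHSINet()
cube = model(torch.randn(4, 1, 128, 128))                     # -> (4, 31, 128, 128)
loss = nn.functional.mse_loss(cube, torch.randn_like(cube))   # fit to reference HSIs
```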
no code implementations • 2 Jul 2020 • Hui Xie, Zhe Pan, Leixin Zhou, Fahim A Zaman, Danny Chen, Jost B Jonas, Yaxing Wang, Xiaodong Wu
In this work, we propose to parameterize the surface cost functions in the graph model and leverage DL to learn those parameters.
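One way to read "parameterize the surface cost functions" is sketched below under an assumed quadratic (Gaussian-style) unary cost per column, with a mean and scale that a CNN head could predict; the closed-form tridiagonal solve is my illustration, not the paper's algorithm.

```python
# Hedged sketch: learnable quadratic column costs plus quadratic smoothness
# reduce to a linear (chain-Laplacian) system, solved differentiably.
import torch

def solve_surface(mu, sigma, smooth=1.0):
    """Minimize sum_i (y_i - mu_i)^2 / (2 sigma_i^2)
               + smooth * sum_i (y_i - y_{i+1})^2  in closed form."""
    B, W = mu.shape
    prec = 1.0 / sigma.clamp_min(1e-4).pow(2)       # per-column confidence
    i = torch.arange(W - 1)
    lap = torch.zeros(W, W, dtype=mu.dtype, device=mu.device)
    lap[i, i] += 1.0                                 # chain-graph Laplacian
    lap[i + 1, i + 1] += 1.0
    lap[i, i + 1] -= 1.0
    lap[i + 1, i] -= 1.0
    A = torch.diag_embed(prec) + 2.0 * smooth * lap  # (B, W, W)
    y = torch.linalg.solve(A, (prec * mu).unsqueeze(-1))
    return y.squeeze(-1)                             # (B, W) surface rows

mu = (10 * torch.randn(2, 64)).requires_grad_()      # stand-in for a CNN head
sigma = torch.rand(2, 64) + 0.5
y = solve_surface(mu, sigma)
y.sum().backward()   # gradients reach mu, so a cost-predicting head trains end to end
```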
no code implementations • 25 May 2019 • Jirong Yi, Hui Xie, Leixin Zhou, Xiaodong Wu, Weiyu Xu, Raghuraman Mudumbai
In this paper, we present a simple hypothesis about a feature compression property of artificial intelligence (AI) classifiers and give theoretical arguments showing that this hypothesis accounts for the observed fragility of AI classifiers to small adversarial perturbations.
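A small self-contained illustration of the intuition (my sketch, not the paper's experiments): if a classifier is sensitive mainly to a few gradient-aligned directions, a tiny perturbation along the loss gradient moves it far more than a random perturbation of the same norm.

```python
# Hedged sketch: gradient-aligned (FGSM-style) vs random perturbations of
# equal L-inf norm on a toy, untrained network.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(1, 100, requires_grad=True)
label = torch.tensor([0])
nn.functional.cross_entropy(net(x), label).backward()

eps = 0.05
adv = eps * x.grad.sign()                      # gradient-aligned direction
rnd = eps * torch.sign(torch.randn_like(x))    # random direction, same norm

with torch.no_grad():
    def loss_of(inp):
        return nn.functional.cross_entropy(net(inp), label).item()
    print(f"clean loss        : {loss_of(x):.4f}")
    print(f"random direction  : {loss_of(x + rnd):.4f}")   # changes little on average
    print(f"gradient direction: {loss_of(x + adv):.4f}")   # increases markedly
```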
no code implementations • 27 Jan 2019 • Hui Xie, Jirong Yi, Weiyu Xu, Raghu Mudumbai
We present a simple hypothesis about a compression property of artificial intelligence (AI) classifiers and give theoretical arguments showing that it accounts for the observed fragility of AI classifiers to small adversarial perturbations.