no code implementations • ICML 2020 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO
Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.
no code implementations • 3 Oct 2023 • Mingzhou Liu, Xinwei Sun, Ching-Wen Lee, Yu Qiao, Yizhou Wang
In particular, we utilize the counterfactual generation's ability for causal attribution to introduce a novel loss called the CounterFactual Alignment (CF-Align) loss.
no code implementations • 29 Sep 2023 • Yong Wu, Mingzhou Liu, Jing Yan, Yanwei Fu, Shouyan Wang, Yizhou Wang, Xinwei Sun
To accommodate these scenarios, we consider a new setting dubbed multiple treatments and multiple outcomes.
1 code implementation • 22 Sep 2023 • Yong Wu, Yanwei Fu, Shouyan Wang, Xinwei Sun
To address these challenges, we propose a kernel-based doubly robust (DR) estimator that handles continuous treatments well.
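The kernel-smoothed doubly robust idea can be illustrated with a toy sketch. This is not the paper's estimator; `mu_hat` and `f_hat` are hypothetical placeholders for a fitted outcome model and a fitted conditional treatment density, and the kernel localizes the residual correction around the treatment level of interest:

```python
import math

def gaussian_kernel(u, h):
    """Gaussian kernel with bandwidth h."""
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))

def dr_dose_response(t, data, mu_hat, f_hat, h=0.5):
    """Kernel-smoothed doubly robust estimate of E[Y(t)] (toy sketch).

    data   : list of (x, treat, y) samples
    mu_hat : fitted outcome model, mu_hat(t, x) ~ E[Y | T=t, X=x]
    f_hat  : fitted conditional density, f_hat(t, x) ~ p(T=t | X=x)
    """
    total = 0.0
    for x, treat, y in data:
        # residual correction, kernel-localized around treatment level t
        correction = (gaussian_kernel(treat - t, h) / f_hat(treat, x)
                      * (y - mu_hat(treat, x)))
        total += mu_hat(t, x) + correction
    return total / len(data)
```

When the outcome model is exactly right the residuals vanish and the estimate reduces to the plug-in average of `mu_hat(t, x)`; the correction term protects against outcome-model misspecification.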
no code implementations • 4 Aug 2023 • Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xiangyang Ji, Qiang Yang, Xing Xie
We propose DIVERSIFY, a general framework for OOD detection and generalization on dynamic distributions of time series.
1 code implementation • NeurIPS 2023 • Mingzhou Liu, Xinwei Sun, Lingjing Hu, Yizhou Wang
Based on these, we can leverage the proxies to remove the bias induced by the hidden variables and hence achieve identifiability.
1 code implementation • 9 May 2023 • Mingzhou Liu, Xinwei Sun, Yu Qiao, Yizhou Wang
Distinguishing causal connections from correlations is important in many scenarios.
no code implementations • 26 Mar 2023 • Xuelin Qian, Yikai Wang, Yanwei Fu, Xinwei Sun, xiangyang xue, Jianfeng Feng
Our Latent Embedding Alignment (LEA) model concurrently recovers visual stimuli from fMRI signals and predicts brain activity from images within a unified framework.
1 code implementation • 2 Jan 2023 • Yikai Wang, Yanwei Fu, Xinwei Sun
While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data.
Ranked #1 on Learning with noisy labels on Clothing1M
no code implementations • 29 Nov 2022 • Chengming Xu, Chen Liu, Xinwei Sun, Siqian Yang, Yabiao Wang, Chengjie Wang, Yanwei Fu
We theoretically show that such an augmentation mechanism, different from existing ones, is able to identify the causal features.
1 code implementation • 15 Sep 2022 • Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, Xing Xie
Time series classification is an important problem in the real world.
no code implementations • 21 Apr 2022 • Churan Wang, Jing Li, Xinwei Sun, Fandong Zhang, Yizhou Yu, Yizhou Wang
To resolve this problem, we propose a novel framework, namely Domain Invariant Model with Graph Convolutional Network (DIM-GCN), which only exploits invariant disease-related features from multiple domains.
1 code implementation • CVPR 2022 • Yikai Wang, Xinwei Sun, Yanwei Fu
A noisy training set usually leads to degraded generalization and robustness of neural networks.
Ranked #4 on Learning with noisy labels on Clothing1M
1 code implementation • NeurIPS 2021 • Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
To avoid such a spurious correlation, we propose Latent Causal Invariance Models (LaCIM), which specify the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only the causal factor for prediction.
no code implementations • 8 Oct 2021 • Mingzhou Liu, Xinwei Sun, Fandong Zhang, Yizhou Yu, Yizhou Wang
Finally, to implement this contextual posterior, we introduce a Transformer that takes the object's information as a reference and locates correlated contextual factors.
no code implementations • 29 Sep 2021 • Wang Lu, Jindong Wang, Yiqiang Chen, Xinwei Sun
In this paper, we propose to view the time series classification problem from the distribution perspective.
no code implementations • 29 Sep 2021 • Yikai Wang, Xinwei Sun, Yanwei Fu
Specifically, we re-purpose a sparse linear model with incidental parameters as a unified Relative Instance Credibility Inference (RICI) framework, which detects and removes outliers in the forward pass of each mini-batch and uses the remaining instances to train the network.
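The incidental-parameter idea can be sketched with a toy alternating scheme (not the paper's RICI algorithm): each sample gets its own parameter `gamma_i` on top of a shared linear model, and soft-thresholding keeps `gamma` sparse, so the nonzero entries flag outliers.

```python
def soft_threshold(v, lam):
    """Shrink v toward zero; exact zero inside [-lam, lam]."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def detect_outliers(x, y, lam=10.0, n_iter=200):
    """Toy sketch: alternate a least-squares slope `beta` with sparse
    per-sample incidental parameters `gamma`; nonzero gamma_i marks
    sample i as an outlier."""
    n = len(x)
    gamma = [0.0] * n
    beta = 0.0
    for _ in range(n_iter):
        # least-squares slope on gamma-corrected targets
        num = sum(xi * (yi - gi) for xi, yi, gi in zip(x, y, gamma))
        den = sum(xi * xi for xi in x)
        beta = num / den
        # soft-threshold residuals to keep gamma sparse
        gamma = [soft_threshold(yi - beta * xi, lam) for xi, yi in zip(x, y)]
    return beta, [i for i, g in enumerate(gamma) if g != 0.0]
```

Note that the L1 penalty on `gamma` biases the recovered slope slightly away from the clean-data value, the same shrinkage effect that motivates splitting sparse and dense estimators in the authors' earlier work.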
1 code implementation • 5 Jul 2021 • Mingzhou Liu, Xiangyu Zheng, Xinwei Sun, Fang Fang, Yizhou Wang
When this condition fails, we surprisingly find, via an example, that the whole stable set, although it can fully exploit stable information, is not the optimal one to transfer.
no code implementations • CVPR 2021 • Jing Li, Botong Wu, Xinwei Sun, Yizhou Wang
We propose a causal hidden Markov model to achieve robust prediction of irreversible disease at an early stage, which is safety-critical and vital for medical treatment.
no code implementations • CVPR 2021 • Botong Wu, Sijie Ren, Jing Li, Xinwei Sun, Shiming Li, Yizhou Wang
In order to account for the degree of progression of the disease, we propose a temporal generative model to accurately generate the future image and compare it with the current one to get a residual image.
no code implementations • 19 Dec 2020 • Xinwei Sun, Botong Wu, Wei Chen
To learn such an invariance for deepfake detection, our InTeLe introduces an auto-encoder framework with different decoders for pristine and fake images, which are further appended with a shallow classifier in order to separate out the obvious artifact-effect.
no code implementations • 4 Nov 2020 • Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
To avoid spurious correlation, we propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
1 code implementation • NeurIPS 2021 • Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output.
no code implementations • 30 Sep 2020 • Chu-ran Wang, Jing Li, Fandong Zhang, Xinwei Sun, Hao Dong, Yizhou Yu, Yizhou Wang
Mammogram benign or malignant classification with only image-level labels is challenging due to the absence of lesion annotations.
no code implementations • 17 Jul 2020 • Xinwei Sun, Wenjing Han, Lingjing Hu, Yuan YAO, Yizhou Wang
Specifically, with a variable-splitting term, two estimators are introduced and split apart: one for feature selection (the sparse estimator) and the other for prediction (the dense estimator).
no code implementations • ECCV 2020 • Xinwei Sun, Yilun Xu, Peng Cao, Yuqing Kong, Lingjing Hu, Shanghang Zhang, Yizhou Wang
In this paper, we propose a novel information-theoretic approach, namely Total Correlation Gain Maximization (TCGM), for semi-supervised multi-modal learning, which is endowed with promising properties: (i) it can effectively utilize the information across different modalities of unlabeled data points to facilitate training the classifier of each modality; (ii) it has a theoretical guarantee of identifying Bayesian classifiers, i.e., the ground-truth posteriors of all modalities.
1 code implementation • 4 Jul 2020 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO
Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.
1 code implementation • NeurIPS 2019 • Qianqian Xu, Xinwei Sun, Zhiyong Yang, Xiaochun Cao, Qingming Huang, Yuan YAO
In this paper, instead of learning a global ranking that agrees with the consensus, we pursue the tie-aware partial ranking from an individualized perspective.
no code implementations • 25 Sep 2019 • Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan YAO
Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error.
1 code implementation • 23 May 2019 • Yanwei Fu, Chen Liu, Donghao Li, Zuyuan Zhong, Xinwei Sun, Jinshan Zeng, Yuan YAO
To fill in this gap, this paper proposes a new approach based on differential inclusions of inverse scale spaces, which generate a family of models from simple to complex ones along the dynamics via coupling a pair of parameters, such that over-parameterized deep models and their structural sparsity can be explored simultaneously.
no code implementations • ICLR 2019 • Yanwei Fu, Shun Zhang, Donghao Li, Xinwei Sun, xiangyang xue, Yuan YAO
This paper proposes a Pruning in Training (PiT) framework of learning to reduce the parameter size of networks.
no code implementations • 24 Apr 2019 • Yanwei Fu, Donghao Li, Xinwei Sun, Shun Zhang, Yizhou Wang, Yuan YAO
This paper proposes a novel Stochastic Split Linearized Bregman Iteration ($S^{2}$-LBI) algorithm to efficiently train the deep network.
no code implementations • 29 Jul 2018 • Qianqian Xu, Jiechao Xiong, Xinwei Sun, Zhiyong Yang, Xiaochun Cao, Qingming Huang, Yuan YAO
A preference order or ranking aggregated from pairwise comparison data is commonly understood as a strict total order.
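A minimal illustration (a hypothetical net-win aggregator, not the paper's method) of why aggregation is commonly read as a strict total order: scoring pairwise comparisons and sorting must break ties between items arbitrarily, even when the data give no reason to order them — exactly the situation a tie-aware partial ranking avoids.

```python
from collections import defaultdict

def aggregate(pairs):
    """Aggregate pairwise comparisons (winner, loser) into a ranking
    by net wins. Items with equal scores are ordered arbitrarily by
    the sort, forcing a strict total order on possibly tied items."""
    score = defaultdict(int)
    for winner, loser in pairs:
        score[winner] += 1
        score[loser] -= 1
    return sorted(score, key=lambda item: -score[item])
```

With comparisons [("a","b"), ("a","c"), ("b","c"), ("c","b")], item "a" is the clear winner, but "b" and "c" are tied and the returned order between them is an artifact of the sort, not of the data.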
no code implementations • ICML 2018 • Bo Zhao, Xinwei Sun, Yanwei Fu, Yuan YAO, Yizhou Wang
To solve this task, $L_{1}$ regularization is widely used to pursue feature selection and avoid overfitting; yet the sparse estimation of features under $L_{1}$ regularization may cause underfitting of the training data.
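The trade-off can be seen in the orthonormal-design case, where the lasso solution is simply the least-squares estimate passed through soft-thresholding: small coefficients are zeroed (feature selection), but the surviving coefficients are also shrunk toward zero, which is the underfitting risk noted above. A minimal sketch:

```python
def soft_threshold(v, lam):
    """Closed-form lasso coefficient under an orthonormal design:
    the least-squares estimate shrunk toward zero by lam."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

ols = [3.0, 0.1, -2.5]                              # least-squares estimates
lasso = [soft_threshold(b, 0.5) for b in ols]       # [2.5, 0.0, -2.0]
# the noise coefficient 0.1 is zeroed, but 3.0 and -2.5 are shrunk too
```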
no code implementations • 20 Nov 2017 • Bo Zhao, Xinwei Sun, Yuan YAO, Yizhou Wang
With the learned SRG, each unseen class prototype (cluster center) in the image feature space can be synthesized by the linear combination of other class prototypes, so that testing instances can be classified based on the distance to these synthesized prototypes.
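The prototype-synthesis step can be sketched with toy vectors (the combination weights here are illustrative, not the learned SRG weights): an unseen class prototype is a weighted sum of seen prototypes, and a test instance is assigned to the nearest prototype.

```python
def synthesize(protos, weights):
    """Unseen class prototype as a linear combination of seen prototypes."""
    dim = len(next(iter(protos.values())))
    return [sum(weights[c] * protos[c][d] for c in weights) for d in range(dim)]

def classify(x, protos):
    """Nearest-prototype classification by squared Euclidean distance."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(x, p))
    return min(protos, key=lambda c: dist(protos[c]))
```

For example, with seen prototypes for "cat" and "dog", an equal-weight combination yields a synthesized prototype that can absorb test instances lying between the two seen classes.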
no code implementations • 16 Apr 2017 • Chendi Huang, Xinwei Sun, Jiechao Xiong, Yuan YAO
Boosting, viewed as a gradient descent algorithm, is a popular method in machine learning.
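The gradient-descent view can be made concrete for squared loss: each boosting round fits a weak learner (here a decision stump, a minimal illustrative choice) to the current residuals, i.e. the negative functional gradient, and takes a small step in function space. A sketch, not the paper's algorithm:

```python
def fit_stump(x, resid):
    """Least-squares decision stump: a threshold s with left/right means."""
    best = None
    for s in sorted(set(x)):
        left = [r for xi, r in zip(x, resid) if xi <= s]
        right = [r for xi, r in zip(x, resid) if xi > s]
        if not right:
            continue  # every point on one side: not a valid split
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if xi <= s else rm)) ** 2
                  for xi, r in zip(x, resid))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    return best[1:]

def boost(x, y, n_rounds=50, lr=0.5):
    """Gradient boosting for squared loss: residuals are the negative
    functional gradient, and each stump is one small descent step."""
    pred = [0.0] * len(x)
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s, lm, rm = fit_stump(x, resid)
        pred = [pi + lr * (lm if xi <= s else rm)
                for xi, pi in zip(x, pred)]
    return pred
```

With a learning rate of 0.5 on data the stump can fit exactly, the residuals halve each round, so the ensemble converges geometrically to the targets.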
no code implementations • NeurIPS 2016 • Chendi Huang, Xinwei Sun, Jiechao Xiong, Yuan YAO
An iterative regularization path with structural sparsity is proposed in this paper based on variable splitting and the Linearized Bregman Iteration, hence called \emph{Split LBI}.
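A minimal numerical sketch of the splitting idea (illustrative step sizes and parameters, not the paper's exact scheme): a dense estimator `beta` follows gradient descent on the loss plus the splitting term, while a sparse estimator `gamma` is driven by a linearized Bregman update and traces a regularization path from sparse toward dense.

```python
def soft_threshold(v, lam):
    """Shrink v toward zero; exact zero inside [-lam, lam]."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def split_lbi(X, y, alpha=0.01, kappa=10.0, nu=1.0, n_steps=2000):
    """Split LBI sketch: the dense estimator beta and sparse estimator
    gamma are coupled through the splitting term |beta - gamma|^2 / (2 nu)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    gamma = [0.0] * p
    z = [0.0] * p
    for _ in range(n_steps):
        # gradient of (1/2n)||y - X beta||^2 + (1/2nu)||beta - gamma||^2 in beta
        resid = [sum(X[i][j] * beta[j] for j in range(p)) - y[i]
                 for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n
                + (beta[j] - gamma[j]) / nu for j in range(p)]
        beta = [beta[j] - kappa * alpha * grad[j] for j in range(p)]
        # the splitting term drives the Bregman variable z;
        # gamma = kappa * shrink(z) stays sparse along the path
        z = [z[j] + alpha * (beta[j] - gamma[j]) / nu for j in range(p)]
        gamma = [kappa * soft_threshold(z[j], 1.0) for j in range(p)]
    return beta, gamma
```

On noiseless data generated by a sparse coefficient vector, the true feature enters the sparse estimator's support first, while the spurious coordinate's Bregman variable never crosses the threshold.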