Search Results for author: Shikun Li

Found 8 papers, 6 papers with code

DANCE: Dual-View Distribution Alignment for Dataset Condensation

no code implementations · 3 Jun 2024 · Hansong Zhang, Shikun Li, Fanzhao Lin, Weiping Wang, Zhenxing Qian, Shiming Ge

Specifically, from the inner-class view, we construct multiple "middle encoders" to perform pseudo long-term distribution alignment, making the condensed set a good proxy of the real one during the whole training process; while from the inter-class view, we use the expert models to perform distribution calibration, ensuring the synthetic data remains in the real class region during condensing.

Dataset Condensation
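
A loose sketch of the inner-class idea described in the snippet above, where mean-feature matching under a few training-stage encoders stands in for the paper's pseudo long-term alignment; the encoders and the loss form here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def alignment_loss(real_X, syn_X, encoders):
    # Pseudo long-term alignment: match mean features of the condensed
    # set to those of the real set under encoders taken from several
    # training stages ("middle encoders").
    loss = 0.0
    for enc in encoders:
        loss += np.sum((enc(real_X).mean(0) - enc(syn_X).mean(0)) ** 2)
    return loss / len(encoders)

# Toy "middle encoders": random linear maps standing in for checkpoints.
rng = np.random.default_rng(0)
encoders = [lambda X, W=rng.standard_normal((32, 16)): X @ W for _ in range(3)]
print(alignment_loss(rng.standard_normal((100, 32)),
                     rng.standard_normal((10, 32)), encoders))
```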

M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy

3 code implementations · 26 Dec 2023 · Hansong Zhang, Shikun Li, Pengju Wang, Dan Zeng, Shiming Ge

Nowadays, optimization-oriented methods are the primary approach in the field of dataset condensation for achieving SOTA results.

Dataset Condensation
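
The title's core quantity, maximum mean discrepancy, has a standard empirical form; a minimal NumPy sketch with an RBF kernel (the kernel choice and bandwidth are assumptions, not taken from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared MMD between samples X (e.g. real
    # features) and Y (e.g. condensed-set features).
    return (rbf_kernel(X, X, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean())

# Toy usage: features of 100 real vs. 10 synthetic examples in R^32.
real = np.random.randn(100, 32)
syn = np.random.randn(10, 32)
print(mmd2(real, syn))
```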

Coupled Confusion Correction: Learning from Crowds with Sparse Annotations

2 code implementations · 12 Dec 2023 · Hansong Zhang, Shikun Li, Dan Zeng, Chenggang Yan, Shiming Ge

Moreover, we cluster "annotator groups" that share similar expertise so that their confusion matrices can be corrected together.
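
A minimal sketch of the clustering step as the snippet describes it, using k-means on flattened per-annotator confusion matrices and averaging within clusters; the estimator, group count, and toy data are assumptions, not the paper's algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy per-annotator confusion matrices: A annotators, C classes,
# each row of each matrix sums to 1.
A, C = 30, 10
rng = np.random.default_rng(0)
conf = rng.dirichlet(np.ones(C), size=(A, C))

# Cluster annotators whose (flattened) confusion matrices are similar,
# then replace each matrix with its cluster mean so that sparse
# per-annotator statistics are corrected jointly.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    conf.reshape(A, -1))
corrected = np.stack([conf[labels == labels[a]].mean(0) for a in range(A)])
```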

Multi-Label Noise Transition Matrix Estimation with Label Correlations: Theory and Algorithm

1 code implementation · 22 Sep 2023 · Shikun Li, Xiaobo Xia, Hansong Zhang, Shiming Ge, Tongliang Liu

However, estimating multi-label noise transition matrices remains a challenging task, as most existing estimators in noisy multi-class learning rely on anchor points and accurate fitting of noisy class posteriors, which is hard to satisfy in noisy multi-label learning.

Multi-Label Learning
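
For intuition, a small sketch of what a per-label noise transition matrix does in the binary-per-label setting: with a known matrix, noisy posteriors follow from clean ones by forward correction. The paper addresses the harder inverse problem of estimating these matrices; the values below are toy numbers:

```python
import numpy as np

# One 2x2 transition matrix per label: T[i, k] = P(noisy label = k | clean = i).
def noisy_posterior(p_clean, T):
    # p_clean: P(y_j = 1 | x) for each of the L labels, shape (L,)
    # T: shape (L, 2, 2), rows of each 2x2 matrix sum to 1.
    p = np.stack([1 - p_clean, p_clean], axis=1)   # (L, 2) clean posteriors
    return np.einsum('li,lik->lk', p, T)[:, 1]     # P(noisy y_j = 1 | x)

T = np.tile(np.array([[0.9, 0.1], [0.2, 0.8]]), (3, 1, 1))  # 3 labels, same noise
print(noisy_posterior(np.array([0.7, 0.2, 0.5]), T))
```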

Transferring Annotator- and Instance-dependent Transition Matrix for Learning from Crowds

1 code implementation · 5 Jun 2023 · Shikun Li, Xiaobo Xia, Jiankang Deng, Shiming Ge, Tongliang Liu

In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent.

Transfer Learning

Trustable Co-label Learning from Multiple Noisy Annotators

1 code implementation · 8 Mar 2022 · Shikun Li, Tongliang Liu, Jiyong Tan, Dan Zeng, Shiming Ge

This raises the following important question: how can we effectively use a small amount of trusted data to facilitate robust classifier learning from multiple annotators?

Selective-Supervised Contrastive Learning with Noisy Labels

1 code implementation · CVPR 2022 · Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu

In the selection process, by measuring the agreement between learned representations and given labels, we first identify confident examples that are exploited to build confident pairs.

Contrastive Learning · Learning with noisy labels · +1
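
A rough sketch of one way to measure agreement between learned representations and given labels, via k-nearest-neighbour label voting in feature space; the choice of cosine similarity, k, and the agreement threshold are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_confident(features, labels, k=10, agree=0.8):
    # Cosine similarity in representation space.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)                  # exclude self-matches
    nn = np.argsort(-sim, axis=1)[:, :k]            # k nearest neighbours
    # An example is "confident" if most neighbours share its given label.
    votes = (labels[nn] == labels[:, None]).mean(1)
    return np.where(votes >= agree)[0]

feats = np.random.randn(200, 64)
noisy = np.random.randint(0, 5, size=200)
print(select_confident(feats, noisy))
```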

Student Network Learning via Evolutionary Knowledge Distillation

no code implementations · 23 Mar 2021 · Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge

Inspired by that, we propose an evolutionary knowledge distillation approach to improve the transfer effectiveness of teacher knowledge.

Knowledge Distillation · Transfer Learning
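
The snippet builds on standard knowledge distillation; a minimal NumPy sketch of the usual temperature-softened KL distillation loss (the evolutionary teacher-updating scheme itself is not shown here):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Hinton-style distillation: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T**2) * np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1))

s = np.random.randn(8, 10)  # student logits for a batch of 8
t = np.random.randn(8, 10)  # teacher logits for the same batch
print(kd_loss(s, t))
```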
