no code implementations • 21 Apr 2024 • Shitong Shao, Zikai Zhou, Huanran Chen, Zhiqiang Shen
Dataset condensation, a concept within data-centric learning, efficiently transfers critical attributes from an original dataset to a synthetic version, maintaining both diversity and realism.
2 code implementations • 11 Apr 2024 • Muxin Zhou, Zeyuan Yin, Shitong Shao, Zhiqiang Shen
In this work, we address this task through the new lens of model informativeness during the compression stage of pretraining on the original dataset.
1 code implementation • 4 Feb 2024 • Huanran Chen, Yinpeng Dong, Shitong Shao, Zhongkai Hao, Xiao Yang, Hang Su, Jun Zhu
Diffusion models have recently been employed as generative classifiers for robust classification.
no code implementations • 3 Feb 2024 • Shitong Shao, Zhiqiang Shen, Linrui Gong, Huanran Chen, Xu Dai
We name this framework Knowledge Transfer with Flow Matching (FM-KT), which can be integrated with any metric-based distillation method (e.g., vanilla KD, DKD, PKD, and DIST) and a meta-encoder of any available architecture (e.g., CNN, MLP, and Transformer).
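FM-KT builds on flow matching; as background only, here is a minimal sketch of the generic conditional flow-matching construction (linear interpolation between two endpoints and its constant velocity target). This is a standard textbook formulation, not FM-KT itself, and the interpretation of the endpoints as student/teacher features is an assumption for illustration.

```python
import numpy as np

def flow_matching_target(x0, x1, t):
    """Generic conditional flow-matching pair.

    x0, x1 : endpoint arrays (hypothetically, e.g., a noise/student
             feature and a teacher feature -- an illustrative reading,
             not the paper's exact setup).
    t      : scalar time in [0, 1].

    Returns the point x_t on the straight path from x0 to x1 and the
    constant velocity v_t = x1 - x0 that a velocity network would be
    trained to regress at (x_t, t).
    """
    x_t = (1.0 - t) * x0 + t * x1
    v_t = x1 - x0  # regression target for the velocity network
    return x_t, v_t
```

In training, one samples t uniformly, forms `(x_t, v_t)`, and minimizes a squared error between the network's predicted velocity at `x_t` and `v_t`.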
1 code implementation • 22 Jan 2024 • Zikai Zhou, Yunhang Shen, Shitong Shao, Linrui Gong, Shaohui Lin
This paper first provides a theoretical perspective on the effectiveness of CKA, decoupling it into the upper bound of the Maximum Mean Discrepancy (MMD) and a constant term.
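For reference, the quantity being decomposed is centered kernel alignment; a minimal sketch of the plain linear-kernel CKA statistic (not the paper's MMD decomposition) might look like:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices of shape (n_samples, dim).

    This computes only the standard linear CKA similarity; the paper's
    result relating CKA to an MMD upper bound plus a constant is a
    separate analysis not reproduced here.
    """
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style cross term and its normalizers
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)
```

By Cauchy-Schwarz the value lies in [0, 1], reaching 1 when the two representations are identical up to orthogonal transformation and isotropic scaling.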
1 code implementation • 29 Nov 2023 • Shitong Shao, Zeyuan Yin, Muxin Zhou, Xindong Zhang, Zhiqiang Shen
We call this perspective "generalized matching" and propose Generalized Various Backbone and Statistical Matching (G-VBSM) in this work, which aims to create an information-dense synthetic dataset that remains consistent with the complete dataset across various backbones, layers, and statistics.
1 code implementation • 18 May 2023 • Shitong Shao, Xu Dai, Shouyi Yin, Lujun Li, Huanran Chen, Yang Hu
On CIFAR-10, we obtain an FID of 2.80 by sampling in 15 steps under one-session training and a new state-of-the-art FID of 3.37 by sampling in one step with additional training.
no code implementations • 13 May 2023 • Shuai Wang, Daoan Zhang, Zipei Yan, Shitong Shao, Rui Li
In Stage I, we train the target model from scratch with soft pseudo-labels generated by the source model in a knowledge distillation manner.
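Training on soft pseudo-labels in a knowledge-distillation manner typically means minimizing a temperature-scaled KL divergence between the source (teacher) and target (student) output distributions. A minimal sketch of that standard Hinton-style loss, assuming the usual temperature and T^2 scaling (not necessarily this paper's exact settings):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax over the last axis at temperature T."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-label distillation loss: KL(teacher || student) at
    temperature T, averaged over the batch and scaled by T**2.
    """
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    kl = (p_t * (np.log(p_t + 1e-12) - log_p_s)).sum(axis=-1).mean()
    return kl * T * T
```

The loss is zero when the two logit sets induce identical distributions and grows as the student's predictions drift from the teacher's soft targets.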
no code implementations • 7 May 2023 • Zhen Huang, Han Li, Shitong Shao, Heqin Zhu, Huijie Hu, Zhiwei Cheng, Jianji Wang, S. Kevin Zhou
The pelvis, forming the lower part of the trunk, supports and balances the trunk.
1 code implementation • 26 Apr 2023 • Shitong Shao, Xiaohan Yuan, Zhen Huang, Ziming Qiu, Shuai Wang, Kevin Zhou
Based on this insight, we propose an approach called DiffuseExpand for expanding datasets for 2D medical image segmentation using DPM, which first samples a variety of masks from Gaussian noise to ensure diversity, and then synthesizes images to ensure alignment between images and masks.
no code implementations • 19 Feb 2023 • Wei Li, Weiyan Liu, Shitong Shao, Shiyi Huang
The results show that AIIR-MIX can dynamically assign each agent a real-time intrinsic reward in accordance with its actual contribution.
no code implementations • 11 Dec 2022 • Shitong Shao, Huanran Chen, Zhen Huang, Linrui Gong, Shuai Wang, Xinxiao Wu
To be specific, we design a neural network-based data augmentation module with a priori bias, which assists in finding samples that play to the teacher's strengths and the student's weaknesses, by learning magnitudes and probabilities to generate suitable data samples.
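To make the "learning magnitudes and probabilities" idea concrete, here is a hypothetical stand-in for such an augmentation module: a single additive-noise transform whose strength is controlled by a magnitude parameter and which fires per sample with a given probability. The paper's module is a learned neural network; this sketch only illustrates the gating mechanism under those assumed two parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, magnitude, prob):
    """Apply additive Gaussian-noise augmentation with a tunable
    magnitude and per-sample firing probability.

    x         : batch of flattened samples, shape (n, dim).
    magnitude : scale of the injected noise (would be learned).
    prob      : probability that the transform fires for a sample
                (would also be learned).
    """
    # Bernoulli gate deciding which samples are transformed
    mask = (rng.random(len(x)) < prob).astype(x.dtype)
    noise = rng.standard_normal(x.shape) * magnitude
    return x + noise * mask[:, None]
```

In a learned setting, `magnitude` and `prob` would be optimized (e.g., via a differentiable relaxation of the Bernoulli gate) against a teacher-student objective rather than fixed by hand.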
1 code implementation • 18 Sep 2022 • Huanran Chen, Shitong Shao, Ziyi Wang, Zirui Shang, Jin Chen, Xiaofeng Ji, Xinxiao Wu
Domain generalization aims to learn a model that can generalize well on an unseen test dataset, i.e., out-of-distribution data, which has a different distribution from the training dataset.