no code implementations • 22 Feb 2024 • Jingyao Li, Pengguang Chen, Xuan Ju, Hong Xu, Jiaya Jia
Our research aims to bridge the domain gap between natural and artificial scenarios with efficient tuning strategies.
no code implementations • 5 Jan 2024 • Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Jiaya Jia
The crux of effective out-of-distribution (OOD) detection lies in acquiring a robust in-distribution (ID) representation, distinct from OOD samples.
Out-of-Distribution Detection
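The entry above frames OOD detection as learning an ID representation that OOD samples fail to match. A classic scoring baseline in this area (maximum softmax probability, not the method proposed in the paper above) can be sketched in a few lines: an input whose predicted class distribution is flat and unconfident is scored as more likely OOD.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max to avoid overflow
    s = sum(exps)
    return [e / s for e in exps]

def msp_score(logits):
    """Maximum softmax probability: higher = more confident = more ID-like."""
    return max(softmax(logits))

# A peaked (ID-like) prediction scores higher than a flat (OOD-like) one.
id_logits = [8.0, 0.5, 0.2]    # illustrative values only
ood_logits = [1.1, 1.0, 0.9]
assert msp_score(id_logits) > msp_score(ood_logits)
```

In practice one thresholds this score on a validation set; stronger methods (including those in the entries above) replace the raw softmax confidence with scores derived from better-structured ID representations.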
2 code implementations • 28 Dec 2023 • Zhongshen Zeng, Pengguang Chen, Shu Liu, Haiyun Jiang, Jiaya Jia
In this work, we introduce a novel evaluation paradigm for Large Language Models, one that challenges them to engage in meta-reasoning.
1 code implementation • 26 Dec 2023 • Jingyao Li, Pengguang Chen, Jiaya Jia
Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks.
Ranked #1 on Code Generation on CodeContests (Test Set pass@1 metric)
1 code implementation • 26 Dec 2023 • Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Jiaya Jia
Experimental results demonstrate that, when labeling 80% of the samples, the performance of the current SOTA method declines by 0.74%, whereas our proposed BAL achieves performance comparable to using the full dataset.
no code implementations • 26 Oct 2023 • Shuai Yang, Zhifei Chen, Pengguang Chen, Xi Fang, Shu Liu, Yingcong Chen
Defect inspection is paramount in closed-loop manufacturing systems.
1 code implementation • 23 Aug 2023 • Baijiong Lin, Weisen Jiang, Feiyang Ye, Yu Zhang, Pengguang Chen, Ying-Cong Chen, Shu Liu, James T. Kwok
Multi-task learning (MTL), a learning paradigm to learn multiple related tasks simultaneously, has achieved great success in various fields.
no code implementations • 15 Apr 2023 • Jingyao Li, Pengguang Chen, Shengju Qian, Jiaya Jia
However, existing models easily misidentify input pixels from unseen classes, confusing novel classes with semantically similar ones.
1 code implementation • CVPR 2023 • Jingyao Li, Pengguang Chen, Shaozuo Yu, Zexin He, Shu Liu, Jiaya Jia
The core of out-of-distribution (OOD) detection is to learn the in-distribution (ID) representation, which is distinguishable from OOD samples.
Ranked #12 on Out-of-Distribution Detection on ImageNet-1k vs Places (AUROC metric)
no code implementations • 2 Mar 2022 • Yixin Chen, Zhuotao Tian, Pengguang Chen, Shu Liu, Jiaya Jia
We revisit the one- and two-stage detector distillation tasks and present a simple and efficient semantic-aware framework to fill the gap between them.
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
1 code implementation • ICCV 2021 • Yixin Chen, Pengguang Chen, Shu Liu, LiWei Wang, Jiaya Jia
Effectively structuring deep knowledge plays a pivotal role in transfer from teacher to student, especially in semantic vision tasks.
no code implementations • 30 Aug 2021 • Pengguang Chen, Yixin Chen, Shu Liu, MingChang Yang, Jiaya Jia
We analyze the reason behind this phenomenon, and propose a novel irregular patch embedding module and adaptive patch fusion module to improve the performance.
7 code implementations • CVPR 2021 • Pengguang Chen, Shu Liu, Hengshuang Zhao, Jiaya Jia
Knowledge distillation transfers knowledge from a teacher network to a student network, with the goal of greatly improving the student's performance.
Ranked #12 on Knowledge Distillation on CIFAR-100
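The teacher-to-student transfer described above can be sketched generically. The following is the standard soft-target (Hinton-style) distillation loss, given as a minimal illustration rather than the specific formulation of the paper above; all logit values and the temperature are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]  # subtract max for stability
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened class distributions.
    The student is trained to match the teacher's soft targets."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [5.0, 1.0, 0.1]
close_student = [4.5, 1.2, 0.0]   # roughly agrees with the teacher
far_student = [0.1, 5.0, 1.0]     # predicts a different class
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In full training pipelines this soft-target term is typically combined with the ordinary cross-entropy loss on ground-truth labels, weighted by a mixing coefficient.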
1 code implementation • CVPR 2021 • Pengguang Chen, Shu Liu, Jiaya Jia
It is even comparable to contrastive learning methods when only half of the training batches are used.
7 code implementations • 13 Jan 2020 • Pengguang Chen, Shu Liu, Hengshuang Zhao, Xingquan Wang, Jiaya Jia
We then show the limitations of existing information-dropping algorithms and propose our structured method, which is simple yet very effective.