Search Results for author: Helong Zhou

Found 10 papers, 7 papers with code

VAD: Vectorized Scene Representation for Efficient Autonomous Driving

2 code implementations ICCV 2023 Bo Jiang, Shaoyu Chen, Qing Xu, Bencheng Liao, Jiajie Chen, Helong Zhou, Qian Zhang, Wenyu Liu, Chang Huang, Xinggang Wang

In this paper, we propose VAD, an end-to-end vectorized paradigm for autonomous driving, which models the driving scene as a fully vectorized representation.

Autonomous Driving Trajectory Planning
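The vectorized paradigm replaces dense rasterized BEV maps with sets of polylines (lane centerlines, agent trajectories) encoded as directed segment vectors. A minimal illustrative sketch of that representation, assuming a polyline given as 2D points (function name is my own, not from the paper):

```python
def polyline_to_vectors(points):
    """Turn a map polyline (e.g. a lane centerline) into a list of directed
    segment vectors -- the kind of fully vectorized scene element an
    end-to-end planner can attend over, instead of a dense BEV raster."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
```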

MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition

1 code implementation • 11 Aug 2022 Chuanguang Yang, Zhulin An, Helong Zhou, Linhang Cai, Xiang Zhi, Jiwen Wu, Yongjun Xu, Qian Zhang

MixSKD mutually distills feature maps and probability distributions between the random pair of original images and their mixup images in a meaningful way.

Data Augmentation Image Classification +5
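The probability-level part of this mixup self-distillation can be pictured as matching the prediction on a mixed image against the same convex mixture of the predictions on the two source images. A toy sketch in plain Python (an illustration of the idea only; the paper's full method also distills feature maps):

```python
import math

def mixup(x1, x2, lam):
    """Convex combination of two same-length vectors (images or soft labels)."""
    return [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def mixup_distill_loss(p_mix, p1, p2, lam):
    """Pull the prediction on the mixup image toward the lam-weighted
    mixture of the predictions on the two original images."""
    target = mixup(p1, p2, lam)
    return kl_div(target, p_mix)
```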

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition

2 code implementations • 23 Jul 2022 Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang

This enables each network to learn extra contrastive knowledge from others, leading to better feature representations, thus improving the performance of visual recognition tasks.

Contrastive Learning Image Classification +3

HOPE: Hierarchical Spatial-temporal Network for Occupancy Flow Prediction

no code implementations • 21 Jun 2022 Yihan Hu, Wenxin Shao, Bo Jiang, Jiajie Chen, Siqi Chai, Zhening Yang, Jingyu Qian, Helong Zhou, Qiang Liu

In this report, we introduce our solution to the Occupancy and Flow Prediction challenge in the Waymo Open Dataset Challenges at CVPR 2022, which ranks 1st on the leaderboard.

Decoder

Cross-Image Relational Knowledge Distillation for Semantic Segmentation

1 code implementation CVPR 2022 Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yongjun Xu, Qian Zhang

Current Knowledge Distillation (KD) methods for semantic segmentation often guide the student to mimic the teacher's structured information generated from individual data samples.

Knowledge Distillation Segmentation +1
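Cross-image relational distillation transfers similarity structure computed across images rather than per-sample outputs: the student learns to reproduce the teacher's pairwise relations among embeddings drawn from different images. A toy sketch of that idea, simplified to a mean-squared error over dot-product relation matrices (this simplification is mine, not the paper's exact objective):

```python
def relation_matrix(embeds):
    """Pairwise dot-product similarities across a set of pixel/region
    embeddings -- the cross-image relations to be transferred."""
    return [[sum(a * b for a, b in zip(u, v)) for v in embeds]
            for u in embeds]

def relational_kd_loss(student_embeds, teacher_embeds):
    """Mean-squared error between student and teacher relation matrices."""
    s = relation_matrix(student_embeds)
    t = relation_matrix(teacher_embeds)
    n = len(s)
    return sum((s[i][j] - t[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)
```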

Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition

1 code implementation ACL 2022 Xichen Pan, Peiyu Chen, Yichen Gong, Helong Zhou, Xinbing Wang, Zhouhan Lin

In particular, the audio and visual front-ends are trained on large-scale unimodal datasets; we then integrate components of both front-ends into a larger multimodal framework that learns to transcribe parallel audio-visual data into characters through a combination of CTC and seq2seq decoding.

Audio-Visual Speech Recognition Automatic Speech Recognition (ASR) +7
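Combining CTC and seq2seq decoding is conventionally done with the hybrid CTC/attention recipe: a weighted sum of the two losses during training, and of the two hypothesis log-probabilities during beam search. A minimal sketch of that convention (alpha is a hypothetical weight, not a value from the paper):

```python
def hybrid_loss(ctc_loss, att_loss, alpha=0.3):
    """Hybrid CTC/attention training objective: weighted sum of the CTC
    alignment loss and the attention decoder's cross-entropy loss."""
    return alpha * ctc_loss + (1.0 - alpha) * att_loss

def joint_score(ctc_logp, att_logp, alpha=0.3):
    """Joint log-probability used to rank beam-search hypotheses."""
    return alpha * ctc_logp + (1.0 - alpha) * att_logp
```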
