no code implementations • 6 May 2024 • Jang Hyun Cho, Boris Ivanovic, Yulong Cao, Edward Schmerling, Yue Wang, Xinshuo Weng, Boyi Li, Yurong You, Philipp Krähenbühl, Yan Wang, Marco Pavone
Our experiments on outdoor benchmarks demonstrate that Cube-LLM significantly outperforms existing baselines: by 21.3 points of AP-BEV on the Talk2Car dataset for 3D grounded reasoning and by 17.7 points on the DriveLM dataset for complex reasoning about driving scenarios.
1 code implementation • 29 Nov 2023 • Jang Hyun Cho, Philipp Krähenbühl
We use this detector to pseudo-label images that have only image-level labels.
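A minimal sketch of what detector-based pseudo-labeling of weakly labeled images can look like, assuming a generic `detector` callable that follows the torchvision convention of returning per-image dicts with 'boxes', 'scores', and 'labels'; the threshold and names are illustrative, not the paper's actual pipeline.

```python
import torch

def pseudo_label(detector, images, image_level_labels, score_thresh=0.5):
    """Generate pseudo box labels on images that only have image-level labels.

    Illustrative sketch only: `detector` is assumed to return torchvision-style
    prediction dicts; the real pipeline in the paper may differ.
    """
    pseudo_targets = []
    with torch.no_grad():
        for img, cls_set in zip(images, image_level_labels):
            preds = detector([img])[0]
            # Keep confident boxes whose class matches one of the image-level labels.
            keep = (preds["scores"] > score_thresh) & \
                   torch.isin(preds["labels"], torch.as_tensor(list(cls_set)))
            pseudo_targets.append({
                "boxes": preds["boxes"][keep],
                "labels": preds["labels"][keep],
            })
    return pseudo_targets
```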
1 code implementation • 23 Jan 2023 • Jang Hyun Cho, Philipp Krähenbühl
Large-scale object detection and instance segmentation face a severe data imbalance.
1 code implementation • CVPR 2023 • Jang Hyun Cho, Philipp Krähenbühl, Vignesh Ramanathan
PartDistillation transfers the part information of an instance segmentation model into a part segmentation model through self-supervised self-training on a large dataset.
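A rough sketch of the self-training loop that this kind of transfer implies, with hypothetical helpers `generate_part_masks` and `train_on` standing in for the actual PartDistillation pipeline, which is more involved.

```python
def self_train(teacher, student, unlabeled_images, rounds=2):
    """Generic self-training loop, sketched under assumed helper functions."""
    for _ in range(rounds):
        # 1. Teacher produces part pseudo-labels on unlabeled data
        #    (generate_part_masks is a hypothetical placeholder).
        pseudo_labels = [generate_part_masks(teacher, img) for img in unlabeled_images]
        # 2. Student is trained against those pseudo-labels
        #    (train_on is likewise a placeholder).
        train_on(student, unlabeled_images, pseudo_labels)
        # 3. Student becomes the teacher for the next round.
        teacher = student
    return student
```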
1 code implementation • 12 Dec 2022 • Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl
Our detector, which trains Deformable-DETR with traditional IoU-based label assignment (sketched below), achieves 50.2 COCO mAP within 12 epochs (1x schedule) with a ResNet50 backbone, outperforming all existing traditional and transformer-based detectors in this setting.
Ranked #2 on Object Detection on COCO-O (using extra training data)
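For reference, a sketch of traditional IoU-based label assignment as commonly used in anchor-based detectors; the thresholds here are generic defaults and not necessarily the values used in the paper.

```python
import torch
from torchvision.ops import box_iou

def iou_label_assignment(anchors, gt_boxes, pos_thresh=0.5, neg_thresh=0.4):
    """Assign each anchor a ground-truth index (-1 = background, -2 = ignored)."""
    if gt_boxes.numel() == 0:
        return torch.full((anchors.size(0),), -1, dtype=torch.long)
    iou = box_iou(anchors, gt_boxes)          # (num_anchors, num_gt)
    max_iou, matched_gt = iou.max(dim=1)      # best ground truth per anchor
    assignment = matched_gt.clone()
    assignment[max_iou < pos_thresh] = -2     # in-between overlap: ignore
    assignment[max_iou < neg_thresh] = -1     # low overlap: background
    # Guarantee every ground-truth box gets at least one positive anchor.
    best_anchor_per_gt = iou.argmax(dim=0)
    assignment[best_anchor_per_gt] = torch.arange(gt_boxes.size(0))
    return assignment
```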
2 code implementations • CVPR 2021 • Jang Hyun Cho, Utkarsh Mall, Kavita Bala, Bharath Hariharan
With our novel learning objective, our framework can learn high-level semantic concepts.
Ranked #3 on Unsupervised Semantic Segmentation on COCO-Stuff-171
no code implementations • ICCV 2019 • Jang Hyun Cho, Bharath Hariharan
In this paper, we present a thorough evaluation of the efficacy of knowledge distillation and its dependence on student and teacher architectures.
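For context, the standard Hinton-style knowledge distillation objective that such a study evaluates looks like the sketch below; the temperature and weighting shown are common defaults, not necessarily the values used in the paper's experiments.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard knowledge distillation loss (soft teacher targets + hard labels)."""
    # Soft-target term: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradients stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy with ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```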