1 code implementation • ICCV 2023 • Zeyu Wang, Dingwen Li, Chenxu Luo, Cihang Xie, Xiaodong Yang
In this work, we propose to boost the representation learning of a multi-camera BEV-based student detector by training it to imitate the features of a well-trained LiDAR-based teacher detector.
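The core of this cross-modal distillation idea is a feature-imitation loss: the student's BEV feature map is pushed toward the teacher's. A minimal sketch of such a loss (the masking scheme and loss form here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def imitation_loss(student_feat, teacher_feat, mask=None):
    """Mean-squared feature imitation loss between student and teacher BEV maps.

    student_feat, teacher_feat: arrays of shape (C, H, W) on the same BEV grid.
    mask: optional (H, W) weighting, e.g. to focus on foreground cells
          (a common choice in distillation for detection; assumed here).
    """
    diff = (student_feat - teacher_feat) ** 2
    if mask is None:
        return diff.mean()
    # Weight per-cell errors and normalize by the mask mass.
    diff = diff * mask[None, :, :]
    return diff.sum() / max(float(mask.sum()) * student_feat.shape[0], 1.0)
```

In training, this term would be added to the student's usual detection losses; the teacher is frozen and only provides target features.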
1 code implementation • CVPR 2023 • Jinyu Li, Chenxu Luo, Xiaodong Yang
In order to deal with the sparse and unstructured raw point clouds, LiDAR-based 3D object detection research mostly focuses on designing dedicated local point aggregators for fine-grained geometrical modeling.
Ranked #1 on 3D Object Detection on Waymo Vehicle
3 code implementations • ICCV 2021 • Chenxu Luo, Xiaodong Yang, Alan Yuille
3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.
1 code implementation • CVPR 2021 • Chenxu Luo, Xiaodong Yang, Alan Yuille
Autonomous driving can benefit from motion behavior comprehension when interacting with diverse traffic participants in highly dynamic environments.
no code implementations • 6 Jul 2020 • Chenxu Luo, Lin Sun, Dariush Dabiri, Alan Yuille
As for vehicles, their trajectories are significantly influenced by the lane geometry, and how to effectively use lane information is of active interest.
1 code implementation • ICCV 2019 • Chenxu Luo, Alan Yuille
This decomposition is more parameter-efficient and enables us to quantitatively analyze the contributions of spatial and temporal features in different layers.
1 code implementation • 12 Nov 2018 • Chenxu Luo, Xiao Chu, Alan Yuille
We use limb orientations as a new way to represent 3D poses and bind the orientation together with the bounding box of each limb region to better associate images and predictions.
Ranked #76 on 3D Human Pose Estimation on MPI-INF-3DHP (AUC metric)
1 code implementation • 14 Oct 2018 • Chenxu Luo, Zhenheng Yang, Peng Wang, Yang Wang, Wei Xu, Ram Nevatia, Alan Yuille
Performance on the five tasks of depth estimation, optical flow estimation, odometry, moving object segmentation and scene flow estimation shows that our approach outperforms other SoTA methods.
1 code implementation • 8 Oct 2018 • Yang Wang, Zhenheng Yang, Peng Wang, Yi Yang, Chenxu Luo, Wei Xu
Then the whole scene is decomposed into moving foreground and static background by comparing the estimated optical flow and rigid flow derived from the depth and ego-motion.
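The key quantity in this decomposition is the rigid flow: the 2D motion a pixel would exhibit if the scene were entirely static and only the camera moved. A minimal sketch of computing it from depth and ego-motion under a pinhole camera model (variable names and the simple dense formulation are assumptions for illustration):

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Flow induced purely by camera ego-motion on a static scene.

    depth: (H, W) per-pixel depth in the first frame.
    K:     (3, 3) camera intrinsics.
    R, t:  rotation (3, 3) and translation (3,) taking frame-1 camera
           coordinates to frame-2 camera coordinates.
    Returns flow of shape (2, H, W): (dx, dy) per pixel.
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    # Backproject pixels to 3D using depth, apply ego-motion, reproject.
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    cam2 = R @ cam + t.reshape(3, 1)
    pix2 = K @ cam2
    pix2 = pix2[:2] / pix2[2:3]  # perspective divide
    return (pix2 - pix[:2]).reshape(2, H, W)
```

Pixels where the estimated optical flow disagrees with this rigid flow beyond a threshold are then labeled as moving foreground; the rest is static background.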