no code implementations • 6 May 2024 • Jang Hyun Cho, Boris Ivanovic, Yulong Cao, Edward Schmerling, Yue Wang, Xinshuo Weng, Boyi Li, Yurong You, Philipp Krähenbühl, Yan Wang, Marco Pavone
Our experiments on outdoor benchmarks demonstrate that Cube-LLM significantly outperforms existing baselines, by 21.3 points of AP-BEV on the Talk2Car dataset for 3D grounded reasoning and by 17.7 points on the DriveLM dataset for complex reasoning about driving scenarios.
no code implementations • 26 Feb 2024 • Fangzhou Wu, Shutong Wu, Yulong Cao, Chaowei Xiao
To evaluate the effectiveness of the proposed methodology, we conducted extensive experiments using 7 plugin-based ChatGPT Web Agents, 8 Web GPTs, and 3 different open-source Web Agents.
no code implementations • 19 Dec 2023 • Wenhao Ding, Yulong Cao, Ding Zhao, Chaowei Xiao, Marco Pavone
Simulation plays a crucial role in the development of autonomous vehicles (AVs) due to the potential risks associated with real-world testing.
no code implementations • 1 Dec 2023 • Yingzi Ma, Yulong Cao, Jiachen Sun, Marco Pavone, Chaowei Xiao
The quest continues for fully autonomous vehicles (AVs) capable of navigating complex real-world scenarios with human-like understanding and responsiveness.
no code implementations • 23 Oct 2023 • Minkyoung Cho, Yulong Cao, Zixiang Zhou, Z. Morley Mao
Deep neural networks (DNNs) are increasingly integrated into LiDAR (Light Detection and Ranging)-based perception systems for autonomous vehicles (AVs), requiring robust performance under adversarial conditions.
no code implementations • 1 Sep 2023 • Yulong Cao, Boris Ivanovic, Chaowei Xiao, Marco Pavone
This work aims to address this by developing a framework that employs reinforcement learning with human feedback (RLHF) to enhance the realism of existing traffic models.
no code implementations • 10 Jun 2023 • Ziyuan Zhong, Davis Rempe, Yuxiao Chen, Boris Ivanovic, Yulong Cao, Danfei Xu, Marco Pavone, Baishakhi Ray
Realistic and controllable traffic simulation is a core capability that is necessary to accelerate autonomous vehicle (AV) development.
no code implementations • 19 Sep 2022 • Yulong Cao, Chaowei Xiao, Anima Anandkumar, Danfei Xu, Marco Pavone
Trajectory prediction is essential for autonomous vehicles (AVs) to plan correct and safe driving behaviors.
no code implementations • 29 Jul 2022 • Yulong Cao, Danfei Xu, Xinshuo Weng, Zhuoqing Mao, Anima Anandkumar, Chaowei Xiao, Marco Pavone
We demonstrate that our method is able to improve the performance by 46% on adversarial data and at the cost of only 3% performance degradation on clean data, compared to the model trained with clean data.
no code implementations • NeurIPS 2021 • Jiachen Sun, Yulong Cao, Christopher B. Choy, Zhiding Yu, Anima Anandkumar, Zhuoqing Morley Mao, Chaowei Xiao
In this paper, we systematically study the impact of various self-supervised learning proxy tasks on different architectures and threat models for 3D point clouds with adversarial training.
no code implementations • 13 Jun 2021 • R. Spencer Hallyburton, Yupei Liu, Yulong Cao, Z. Morley Mao, Miroslav Pajic
Thus, in this work, we perform an analysis of camera-LiDAR fusion, in the AV context, under LiDAR spoofing attacks.
no code implementations • 24 Nov 2020 • Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Z. Morley Mao
Since adversarial training (AT) is believed to be the most robust defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the 3D model's robustness under AT.
no code implementations • 28 Sep 2020 • Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Zhuoqing Mao
Since adversarial training (AT) is believed to be the most effective defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the model's robustness under AT.
no code implementations • 30 Jun 2020 • Jiachen Sun, Yulong Cao, Qi Alfred Chen, Z. Morley Mao
In this work, we perform the first study to explore the general vulnerability of current LiDAR-based perception architectures and discover that the ignored occlusion patterns in LiDAR point clouds make self-driving cars vulnerable to spoofing attacks.
no code implementations • 16 Jul 2019 • Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao
In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored.
no code implementations • 11 Jul 2019 • Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, Bo Li
Deep neural networks (DNNs) are found to be vulnerable against adversarial examples, which are carefully crafted inputs with a small magnitude of perturbation aiming to induce arbitrarily incorrect predictions.