no code implementations • 19 Feb 2024 • Xiaoyu Tian, Junru Gu, Bailin Li, Yicheng Liu, Chenxu Hu, Yang Wang, Kun Zhan, Peng Jia, Xianpeng Lang, Hang Zhao
We introduce DriveVLM, an autonomous driving system leveraging Vision-Language Models (VLMs) for enhanced scene understanding and planning capabilities.
no code implementations • 9 Aug 2022 • Xin Huang, Xiaoyu Tian, Junru Gu, Qiao Sun, Hang Zhao
Recently, occupancy flow fields were proposed to represent the joint future states of road agents by combining an occupancy grid with a flow field, which supports efficient and consistent joint predictions.
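The occupancy-grid-plus-flow pairing can be sketched as two aligned tensors: a per-cell occupancy probability and a per-cell 2D displacement. The grid size, horizon, and nearest-neighbour warping below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Assumed grid dimensions and prediction horizon (illustrative only).
H, W, T = 128, 128, 8  # grid height, width, number of future timesteps

# Occupancy: probability that any agent occupies each cell at each timestep.
occupancy = np.zeros((T, H, W), dtype=np.float32)

# Flow: per-cell 2D displacement (dx, dy) pointing back to where the
# occupancy in that cell came from at the previous timestep.
flow = np.zeros((T, H, W, 2), dtype=np.float32)

def warp_occupancy(prev_occ, flow_t):
    """Warp the previous occupancy grid along the flow field, yielding a
    flow-traced occupancy (nearest-neighbour gather, for illustration)."""
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip((ys + flow_t[..., 1]).round().astype(int), 0, H - 1)
    src_x = np.clip((xs + flow_t[..., 0]).round().astype(int), 0, W - 1)
    return prev_occ[src_y, src_x]
```

Checking that a warped occupancy stays consistent with the flow is what makes the joint prediction "consistent" in this representation: occupancy at step t should agree with occupancy at step t-1 traced along the flow.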
1 code implementation • CVPR 2023 • Junru Gu, Chenxu Hu, Tianyuan Zhang, Xuanyao Chen, Yilun Wang, Yue Wang, Hang Zhao
In this work, we propose ViP3D, a query-based visual trajectory prediction pipeline that exploits rich information from raw videos to directly predict future trajectories of agents in a scene.
1 code implementation • CVPR 2022 • Qiao Sun, Xin Huang, Junru Gu, Brian C. Williams, Hang Zhao
Predicting the future motions of road participants is an important task for autonomous driving in urban scenes.
2 code implementations • ICCV 2021 • Junru Gu, Chen Sun, Hang Zhao
In this work, we propose an anchor-free and end-to-end trajectory prediction model, named DenseTNT, that directly outputs a set of trajectories from dense goal candidates.
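Dense goal candidates can be pictured as a fine sampling of possible endpoints around the agent. The uniform square grid below is only an illustrative stand-in for DenseTNT's map-conditioned sampling; the `radius` and `spacing` values are assumptions.

```python
import numpy as np

def dense_goal_candidates(center, radius=50.0, spacing=1.0):
    """Sample a dense grid of candidate goal points around an agent.

    Illustrative stand-in for lane-conditioned dense sampling: here we
    simply tile a square of side 2*radius at the given spacing.
    """
    offsets = np.arange(-radius, radius + spacing, spacing)
    xs, ys = np.meshgrid(offsets, offsets)
    goals = np.stack([xs.ravel(), ys.ravel()], axis=-1) + np.asarray(center)
    return goals  # shape (N, 2): densely covers the region around `center`
```

An anchor-free model then predicts a score for every candidate directly, rather than regressing offsets from a small fixed set of anchors.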
1 code implementation • 27 Jun 2021 • Junru Gu, Qiao Sun, Hang Zhao
In autonomous driving, goal-based multi-trajectory prediction methods have recently proven effective: they first score goal candidates, then select a final set of goals, and finally complete trajectories conditioned on the selected goals.
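The score, select, and complete steps can be sketched as a small pipeline. The greedy non-maximum-suppression selection and the `trajectory_head` decoder below are illustrative assumptions, not the methods' actual selection or decoding procedures.

```python
import numpy as np

def predict_trajectories(goals, scores, trajectory_head, k=6, nms_radius=2.0):
    """Illustrative goal-based multi-trajectory pipeline:
    1) rank goal candidates by score,
    2) greedily select a diverse set of k goals (NMS-style suppression),
    3) complete one trajectory per selected goal.

    `trajectory_head` is a hypothetical decoder mapping a goal to a path.
    """
    order = np.argsort(-scores)  # highest-scoring candidates first
    selected = []
    for i in order:
        if len(selected) == k:
            break
        # Suppress candidates too close to an already-selected goal.
        if all(np.linalg.norm(goals[i] - goals[j]) > nms_radius
               for j in selected):
            selected.append(i)
    return [trajectory_head(goals[i]) for i in selected]

# Usage with a toy decoder that linearly interpolates to the goal:
goals = np.array([[0.0, 0.0], [0.5, 0.0], [10.0, 0.0], [20.0, 0.0]])
scores = np.array([0.9, 0.8, 0.7, 0.6])
trajs = predict_trajectories(
    goals, scores, lambda g: np.linspace([0.0, 0.0], g, 5), k=2)
```

The suppression radius controls the diversity-versus-likelihood trade-off: a larger radius forces the final goal set to spread out over distinct modes.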