V2X-Lead: LiDAR-based End-to-End Autonomous Driving with Vehicle-to-Everything Communication Integration

26 Sep 2023 · Zhiyun Deng, Yanjun Shi, Weiming Shen

This paper presents a LiDAR-based end-to-end autonomous driving method with Vehicle-to-Everything (V2X) communication integration, termed V2X-Lead, to address the challenges of navigating unregulated urban scenarios under mixed-autonomy traffic conditions. The proposed method handles imperfect partial observations by fusing onboard LiDAR sensor data with V2X communication data. A model-free, off-policy deep reinforcement learning (DRL) algorithm is employed to train the driving agent, incorporating a carefully designed reward function and a multi-task learning technique to enhance generalization across diverse driving tasks and scenarios. Experimental results demonstrate the effectiveness of the proposed approach in improving safety and efficiency when traversing unsignalized intersections in mixed-autonomy traffic, as well as its generalizability to previously unseen scenarios such as roundabouts. The integration of V2X communication gives autonomous vehicles (AVs) a significant source of information about their surroundings beyond onboard sensors, resulting in more accurate and comprehensive perception of the driving environment and safer, more robust driving behavior.
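The abstract describes two core ingredients: fusing onboard LiDAR features with V2X-shared neighbor states into a single observation, and training the driving agent with a model-free, off-policy DRL algorithm under a hand-designed reward. The sketch below illustrates that general recipe only; the network sizes, observation layout, reward weights, and the DDPG-style update are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): fuse LiDAR + V2X observations and run
# one off-policy actor-critic update. Dimensions, reward weights, and the DDPG-style
# update rule are assumed; target networks are omitted for brevity.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

LIDAR_DIM = 64 * 64          # assumed flattened bird's-eye-view occupancy grid
V2X_SLOTS, V2X_FEAT = 8, 4   # assumed: up to 8 neighbors, each (x, y, vx, vy)
ACT_DIM = 2                  # assumed: [throttle/brake, steering]
OBS_DIM = LIDAR_DIM + V2X_SLOTS * V2X_FEAT


def fuse_observation(lidar_bev, v2x_msgs):
    """Concatenate LiDAR BEV features with zero-padded V2X neighbor states."""
    slots = np.zeros((V2X_SLOTS, V2X_FEAT), dtype=np.float32)
    for i, msg in enumerate(v2x_msgs[:V2X_SLOTS]):
        slots[i] = msg
    return np.concatenate([lidar_bev.ravel(), slots.ravel()]).astype(np.float32)


def reward(collision, speed, target_speed, progress):
    """Illustrative reward: penalize collisions, reward progress and speed tracking."""
    r = -100.0 if collision else 0.0
    r += 1.0 * progress                   # efficiency term
    r -= 0.1 * abs(speed - target_speed)  # speed-tracking term
    return r


class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, ACT_DIM), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)


class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


def off_policy_update(actor, critic, actor_opt, critic_opt, batch, gamma=0.99):
    """One DDPG-style step on a replay batch (obs, act, rew, next_obs, done)."""
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        target_q = rew + gamma * (1.0 - done) * critic(next_obs, actor(next_obs))
    critic_loss = F.mse_loss(critic(obs, act), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(obs, actor(obs)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

In this reading, V2X messages simply extend the observation vector beyond what the onboard LiDAR can see (e.g., occluded vehicles approaching an unsignalized intersection), while the off-policy update lets transitions from multiple tasks and scenarios share one replay buffer, which is consistent with the multi-task generalization the abstract claims.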
