Pixel Level Segmentation Based Drivable Road Region Detection and Steering Angle Estimation Method for Autonomous Driving on Unstructured Roads

With the recent emergence of deep learning, computer vision-based applications have demonstrated strong applicability to driving tasks such as drivable road region detection, lane keeping, and steering control in self-driving cars. To date, numerous lane-marking-detection-based steering control and lane keeping methods have been proposed for autonomous driving on well-structured urban roads. However, these methods are not feasible on roads where lane markings are unavailable or have faded over time, which makes drivable road region detection a crucial task. Moreover, estimating the steering angle on deteriorated roads with existing road detection and steering angle estimation methods is highly arduous. To the best of our knowledge, there is no standard benchmark available for drivable road region detection and steering angle estimation on unstructured roads. To this end, we present a large-scale dataset for drivable road region detection, comprising 15,000 pixel-level, high-quality fine annotations. Alongside the dataset, we also present an end-to-end drivable road region detection and steering angle estimation method to enable autonomous driving under generalized urban, rural, and unstructured road conditions. The proposed method performs pixel-level segmentation to extract the drivable road region and quantifies lane interception to estimate the steering angle of self-driving cars. A comprehensive qualitative and quantitative analysis demonstrates the effectiveness of our dataset and of the proposed road detection and steering angle estimation methods. Our benchmark is available at https://carl-dataset.github.io/index/ .
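The abstract describes the steering module only at a high level: segment the drivable region, then quantify lane interception to derive a steering angle. As a rough illustration of that idea only, below is a minimal Python sketch that maps a predicted binary drivable-region mask to a steering angle via the lateral offset of the region. The function name, the bottom-weighted centroid heuristic, and the linear angle mapping are all assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def estimate_steering_angle(mask: np.ndarray, max_angle_deg: float = 30.0) -> float:
    """Map a binary drivable-region mask of shape (H, W) to a steering angle.

    Hypothetical heuristic, not the paper's method: the lateral offset of
    the drivable region from the image centre is scaled linearly into
    [-max_angle_deg, +max_angle_deg] (negative = steer left).
    """
    _, w = mask.shape
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0  # no drivable pixels detected: hold the current heading

    # Weight pixels near the bottom of the frame more heavily, since the
    # road area closest to the vehicle dominates the immediate decision.
    weights = ys.astype(float) + 1.0
    centroid_x = np.average(xs, weights=weights)

    # Normalised lateral offset in [-1, 1], then a linear map to degrees.
    offset = (centroid_x - w / 2.0) / (w / 2.0)
    return float(np.clip(offset, -1.0, 1.0) * max_angle_deg)
```

In an end-to-end pipeline like the one the paper describes, the mask would come from the segmentation network's output; for this sketch, any (H, W) array of 0/1 values works.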
