
Learn to Navigate Maplessly with Varied LiDAR Configurations: A Support Point-Based Approach

Deep reinforcement learning (DRL) demonstrates great potential in the mapless navigation domain. However, such a navigation model is normally restricted to a fixed range-sensor configuration because its input format is fixed. In this paper, we propose a DRL model that can process range data obtained from different range sensors mounted at different installation positions. Our model first extracts goal-directed features from each obstacle point. It then selects global obstacle features from all point-feature candidates and uses them for the final decision. Because only a few points support the final decision, we refer to them as support points and to our approach as support point-based navigation (SPN). Our model can handle data from different LiDAR setups and demonstrates good performance in both simulation and real-world experiments. Moreover, it shows great potential in crowded scenarios with small obstacles when a high-resolution LiDAR is used.
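
The sketch below illustrates, under assumptions, what a support point-style policy network could look like: a shared per-point MLP produces goal-directed features, and a channel-wise max over points selects the few features (and hence points) that drive the action. It is not the authors' implementation; PyTorch is assumed, and the class name SupportPointPolicy, the feature sizes, and the input format (Cartesian obstacle points plus a relative goal) are all hypothetical choices for illustration.

```python
# Hypothetical sketch of a support point-style policy (not the paper's code).
# Assumes obstacle points are given in the robot frame and the goal as a
# relative (x, y) position.
import torch
import torch.nn as nn


class SupportPointPolicy(nn.Module):
    """Per-point feature extraction followed by max-pooling over points.

    The points whose features survive the max operation act as the
    "support points" that determine the final action.
    """

    def __init__(self, point_dim=2, goal_dim=2, feat_dim=64, action_dim=2):
        super().__init__()
        # Shared MLP applied to every obstacle point concatenated with the goal,
        # so each point feature is already goal-directed.
        self.point_mlp = nn.Sequential(
            nn.Linear(point_dim + goal_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Decision head operating on the pooled (global) obstacle feature.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + goal_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, action_dim),
        )

    def forward(self, points, goal):
        # points: (B, N, 2) obstacle points; N may vary across LiDAR configurations
        # goal:   (B, 2) relative goal position
        B, N, _ = points.shape
        goal_rep = goal.unsqueeze(1).expand(B, N, -1)
        per_point = self.point_mlp(torch.cat([points, goal_rep], dim=-1))  # (B, N, F)
        # Channel-wise max over points: only a few "support points" contribute.
        global_feat, support_idx = per_point.max(dim=1)                    # (B, F)
        action = self.head(torch.cat([global_feat, goal], dim=-1))
        return action, support_idx  # support_idx identifies the supporting points


# The same network accepts scans of different resolutions, e.g. a sparse
# 180-beam scan or a dense 720-beam scan, without changing its parameters.
net = SupportPointPolicy()
for num_points in (180, 720):
    pts = torch.randn(1, num_points, 2)
    goal = torch.tensor([[3.0, 1.0]])
    action, idx = net(pts, goal)
    print(action.shape, idx.shape)
```

Because the max operation is permutation-invariant and independent of the number of points, a model of this shape can consume scans from LiDARs with different resolutions and mounting positions, which is the property the abstract emphasizes.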
