Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment

When working in dynamic environments, traditional SLAM frameworks perform poorly due to interference from dynamic objects. By taking advantage of deep learning for object detection, a semantic simultaneous localization and mapping framework named Dynamic-SLAM is proposed to solve the problem of SLAM in dynamic environments. First, based on a convolutional neural network, an SSD object detector that incorporates prior knowledge is constructed to detect dynamic objects at the semantic level in a newly added detection thread. Then, in view of the low recall rate of the existing SSD object detection network, a missed-detection compensation algorithm based on the speed invariance of objects across adjacent frames is proposed, which greatly improves the recall rate of detection. Finally, a feature-based visual SLAM system is constructed whose tracking thread processes the feature points of dynamic objects with a selective tracking algorithm, significantly reducing the pose estimation error caused by incorrect matching. Compared with the original SSD network, the recall rate of the system is increased from 82.3% to 99.8%. Several experiments show that the localization accuracy of Dynamic-SLAM is higher than that of state-of-the-art systems, and the system successfully localizes and constructs an accurate environmental map in a real-world dynamic environment using a mobile robot. In sum, our experiments verify that Dynamic-SLAM achieves improved accuracy and robustness in robot localization and mapping in dynamic environments compared to state-of-the-art SLAM systems.
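The two mechanisms central to the abstract can be illustrated with a minimal Python sketch. This is a hedged illustration under stated assumptions, not the paper's implementation: the function names, the dynamic-class list, and the (x1, y1, x2, y2) bounding-box representation are all hypothetical.

    # Illustrative sketch, not the authors' code. It shows (1) missed-detection
    # compensation assuming a dynamic object's bounding box moves with
    # near-constant velocity between adjacent frames, and (2) selective
    # tracking that discards feature points inside dynamic-object boxes.

    DYNAMIC_CLASSES = {"person", "car", "dog"}  # prior knowledge: movable classes (assumed list)

    def compensate_missed_detections(prev_boxes, prev_velocities, detections):
        """If the detector misses objects tracked in the previous frame,
        propagate each last-seen box by its last inter-frame displacement."""
        if detections:                  # detector succeeded; use its output
            return detections
        return [tuple(c + d for c, d in zip(box, vel))  # predicted box
                for box, vel in zip(prev_boxes, prev_velocities)]

    def select_static_features(keypoints, dynamic_boxes):
        """Keep only feature points lying outside every dynamic bounding box,
        so pose estimation is not corrupted by moving objects."""
        def inside(pt, box):
            x, y = pt
            x1, y1, x2, y2 = box
            return x1 <= x <= x2 and y1 <= y <= y2
        return [pt for pt in keypoints
                if not any(inside(pt, box) for box in dynamic_boxes)]

In the full system, detection would run asynchronously in its own thread while the tracking thread applies the feature filtering before pose estimation, as the abstract describes.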
