Target Detection for Construction Machinery Based on Deep Learning and Multisource Data Fusion

IEEE Sensors Journal 2023  ·  Yong Wang

Target detection in the operating environments of construction machinery is a challenging task, influenced by complex and dynamic landscapes, changing illumination, and vibration. This article therefore presents research on road target detection based on deep learning that combines vision image data with point cloud data from light detection and ranging (LiDAR). First, to cope with the sparse and disordered nature of the point cloud, its depth map was densified, with the ground removed, to create an image dataset suitable for training. Next, balancing the computational capacity of the on-board processor against accuracy, the MY3Net network is designed, integrating MobileNet v2, a lightweight network, as the feature extractor and you only look once (YOLO) v3, a high-precision network, as the multiscale target detector, to detect targets in red green blue (RGB) images and the densified depth maps. Finally, a decision-level fusion model is proposed to integrate the detection results of RGB images and depth maps with dynamic weights. Experimental results show that the proposed approach offers high detection accuracy even under complex illumination conditions.
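The decision-level fusion step described above can be sketched as follows. The paper's exact dynamic-weight schedule and matching rule are not reproduced here, so `fuse_detections`, the IoU-based cross-modality matching, and the weight parameter `w_rgb` are illustrative assumptions rather than the authors' implementation:

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def fuse_detections(rgb_dets, depth_dets, w_rgb, iou_thresh=0.5):
    """Decision-level fusion sketch (hypothetical, not the paper's code).

    rgb_dets, depth_dets: lists of (box, score) per modality.
    w_rgb: dynamic weight in [0, 1] given to the RGB stream; the depth
    stream receives (1 - w_rgb). In the paper this weight would vary
    with scene conditions (e.g., illumination); here it is an input.
    """
    fused, used = [], set()
    for box_r, score_r in rgb_dets:
        # Greedily match each RGB detection to the best unused depth detection.
        best_j, best_iou = -1, iou_thresh
        for j, (box_d, _) in enumerate(depth_dets):
            if j in used:
                continue
            o = iou(box_r, box_d)
            if o >= best_iou:
                best_j, best_iou = j, o
        if best_j >= 0:
            box_d, score_d = depth_dets[best_j]
            used.add(best_j)
            # Weighted combination of scores and box coordinates.
            score = w_rgb * score_r + (1 - w_rgb) * score_d
            box = [w_rgb * r + (1 - w_rgb) * d for r, d in zip(box_r, box_d)]
            fused.append((box, score))
        else:
            # Unmatched detections survive with a down-weighted score.
            fused.append((list(box_r), w_rgb * score_r))
    for j, (box_d, score_d) in enumerate(depth_dets):
        if j not in used:
            fused.append((list(box_d), (1 - w_rgb) * score_d))
    return fused
```

With `w_rgb` lowered in poor lighting, the depth stream dominates the fused score, which is the intuition behind weighting the two modalities dynamically rather than averaging them uniformly.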
