Model Uncertainty Quantification for Reliable Deep Vision Structural Health Monitoring

10 Apr 2020  ·  Seyed Omid Sajedi, Xiao Liang ·

Computer vision leveraging deep learning has achieved significant success in the last decade. Despite the promising performance of existing deep models in the recent literature, the extent of their reliability remains unknown. Structural health monitoring (SHM) is a crucial task for the safety and sustainability of structures, and thus prediction mistakes can have fatal outcomes. This paper proposes Bayesian inference for deep vision SHM models, where uncertainty is quantified using Monte Carlo dropout sampling. Three independent case studies, on crack detection, local damage identification, and bridge component detection, are investigated using Bayesian inference. Aside from better prediction results, two uncertainty metrics, mean class softmax variance and entropy, are shown to correlate well with misclassifications. While these uncertainty metrics can be used to trigger human intervention and potentially improve prediction results, interpreting uncertainty masks can be challenging. Therefore, surrogate models are introduced that take the uncertainty as input, so that performance can be further boosted. The proposed methodology can be applied to future deep vision SHM frameworks to incorporate model uncertainty into inspection processes.
