GEDepth: Ground Embedding for Monocular Depth Estimation

ICCV 2023 · Xiaodong Yang, Zhuang Ma, Zhiyu Ji, Zhe Ren

Monocular depth estimation is an ill-posed problem, as the same 2D image can be projected from infinitely many 3D scenes. Although the leading algorithms in this field have reported significant improvements, they are essentially geared to the particular combination of pictorial observations and camera parameters (i.e., intrinsics and extrinsics), which strongly limits their generalizability in real-world scenarios. To cope with this challenge, this paper proposes a novel ground embedding module that decouples camera parameters from pictorial cues, thus promoting generalization capability. Given the camera parameters, the proposed module generates the ground depth, which is stacked with the input image and referenced in the final depth prediction. A ground attention is designed in the module to optimally combine the ground depth with the residual depth. Our ground embedding is highly flexible and lightweight, yielding a plug-in module that can be readily integrated into various depth estimation networks. Experiments show that our approach achieves state-of-the-art results on popular benchmarks and, more importantly, delivers significant generalization improvements on a wide range of cross-domain tests.
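
As a concrete illustration of the idea (a minimal sketch, not the paper's implementation): under a flat-ground, zero-pitch assumption, a pinhole camera at height h above the ground sees the ground plane at depth z = f_y * h / (v - c_y) for any pixel row v below the horizon. The PyTorch sketch below computes this analytic ground-depth prior and combines it with a learned residual via a sigmoid ground-attention map. The names (ground_depth, GroundEmbedding) and the exact combination rule are our assumptions loosely following the abstract; per the abstract, the prior would additionally be stacked (concatenated) with the input image before the encoder.

```python
import torch
import torch.nn as nn

def ground_depth(h, w, fy, cy, cam_height, eps=1e-6):
    """Per-pixel depth of a flat ground plane seen by a pinhole camera.

    Assumes zero camera pitch. A pixel row v below the horizon (v > cy)
    intersects the ground at depth z = fy * cam_height / (v - cy); pixels
    at or above the horizon never hit the ground (set to 0 here).
    """
    v = torch.arange(h, dtype=torch.float32).view(h, 1).expand(h, w)
    denom = (v - cy).clamp(min=eps)      # avoid division by zero at the horizon
    depth = fy * cam_height / denom
    depth[v <= cy] = 0.0                 # above horizon: no valid ground depth
    return depth                         # shape (h, w)

class GroundEmbedding(nn.Module):
    """Hypothetical plug-in head: gates the analytic ground-depth prior with a
    per-pixel ground-attention map and adds a learned residual correction."""

    def __init__(self, feat_ch):
        super().__init__()
        self.residual_head = nn.Conv2d(feat_ch, 1, 1)  # residual depth
        self.attn_head = nn.Conv2d(feat_ch, 1, 1)      # ground-attention logits

    def forward(self, feats, ground):
        # feats: (B, C, H, W) decoder features; ground: (B, 1, H, W) prior
        residual = self.residual_head(feats)
        attn = torch.sigmoid(self.attn_head(feats))    # ~1 where flat ground holds
        # One plausible combination consistent with the abstract: attention-gated
        # ground prior plus residual depth (the paper's exact rule may differ).
        return attn * ground + residual
```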

Datasets

KITTI, DDAD

Results from the Paper


Task: Monocular Depth Estimation   Model: GEDepth

Dataset             Metric                     Value    Global Rank
DDAD                absolute relative error    0.145    #2
DDAD                Sq Rel                     2.119    #2
DDAD                RMSE                       10.596   #2
DDAD                RMSE log                   0.237    #3
KITTI Eigen split   absolute relative error    0.048    #9
KITTI Eigen split   Sq Rel                     0.142    #13
KITTI Eigen split   RMSE                       2.044    #16
KITTI Eigen split   RMSE log                   0.076    #15
KITTI Eigen split   Delta < 1.25               0.9763   #17
KITTI Eigen split   Delta < 1.25^2             0.9972   #15
KITTI Eigen split   Delta < 1.25^3             0.9993   #9
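
For reference, the measures above are the standard monocular-depth evaluation metrics (absolute/squared relative error, RMSE, log RMSE, and the delta accuracy thresholds). A minimal NumPy sketch of their textbook definitions; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid ground-truth pixels.

    Assumes pred > 0 wherever gt > 0 (required by the log-based metrics).
    """
    mask = gt > 0                        # evaluate only where ground truth exists
    pred, gt = pred[mask], gt[mask]

    abs_rel = np.mean(np.abs(pred - gt) / gt)             # absolute relative error
    sq_rel = np.mean((pred - gt) ** 2 / gt)               # squared relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))             # RMSE
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))  # RMSE log

    ratio = np.maximum(pred / gt, gt / pred)              # per-pixel max ratio
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]  # Delta < 1.25^k

    return abs_rel, sq_rel, rmse, rmse_log, deltas
```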

Methods

Ground Embedding (analytic ground-depth prior generated from camera parameters), Ground Attention (learned map combining the ground depth with the residual depth)