MVTN: Multi-View Transformation Network for 3D Shape Recognition

ICCV 2021 · Abdullah Hamdi, Silvio Giancola, Bernard Ghanem

Multi-view projection methods have demonstrated their ability to reach state-of-the-art performance on 3D shape recognition. Those methods learn different ways to aggregate information from multiple views. However, the camera view-points for those views tend to be heuristically set and fixed for all shapes. To circumvent the lack of dynamism of current multi-view methods, we propose to learn those view-points. In particular, we introduce the Multi-View Transformation Network (MVTN) that regresses optimal view-points for 3D shape recognition, building upon advances in differentiable rendering. As a result, MVTN can be trained end-to-end along with any multi-view network for 3D shape classification. We integrate MVTN in a novel adaptive multi-view pipeline that can render either 3D meshes or point clouds. MVTN exhibits clear performance gains in the tasks of 3D shape classification and 3D shape retrieval without the need for extra training supervision. In these tasks, MVTN achieves state-of-the-art performance on ModelNet40, ShapeNet Core55, and the most recent and realistic ScanObjectNN dataset (up to 6% improvement). Interestingly, we also show that MVTN can provide network robustness against rotation and occlusion in the 3D domain. The code is available at https://github.com/ajhamdi/MVTN.
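For intuition, the sketch below illustrates the high-level recipe described in the abstract: a small regressor predicts per-shape camera view-points, a differentiable renderer produces the corresponding views, and a standard multi-view classifier consumes them, so the classification loss trains the view-point regressor end-to-end. This is a minimal sketch, not the authors' released code: `differentiable_render` is a hypothetical stand-in for a real differentiable renderer (such as PyTorch3D), and all module names, feature sizes, and angle parameterizations here are assumptions for illustration.

```python
# Minimal PyTorch sketch of the MVTN idea (not the official implementation).
import torch
import torch.nn as nn


class ViewPointRegressor(nn.Module):
    """Predicts bounded offsets to M (azimuth, elevation) camera angles."""

    def __init__(self, feat_dim: int = 256, num_views: int = 8):
        super().__init__()
        self.num_views = num_views
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_views * 2), nn.Tanh(),  # keep offsets bounded
        )

    def forward(self, shape_feat: torch.Tensor) -> torch.Tensor:
        # shape_feat: (B, feat_dim) global descriptor of the 3D shape
        offsets = self.mlp(shape_feat).view(-1, self.num_views, 2)
        return 90.0 * offsets  # offsets in degrees, within [-90, 90]


def differentiable_render(points: torch.Tensor, angles: torch.Tensor,
                          img_size: int = 64) -> torch.Tensor:
    # Hypothetical placeholder: a real pipeline would call a differentiable
    # point-cloud or mesh renderer here. This stub only preserves the tensor
    # shapes and the gradient path: (B, M, 2) angles -> (B, M, 1, H, W) views.
    b, m, _ = angles.shape
    images = angles.mean(dim=-1, keepdim=True)[..., None, None]
    return images.expand(b, m, 1, img_size, img_size).contiguous()


class MultiViewClassifier(nn.Module):
    """Per-view CNN features, max-pooled over views, then classified."""

    def __init__(self, num_classes: int = 40):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        b, m, c, h, w = views.shape
        feats = self.backbone(views.view(b * m, c, h, w)).view(b, m, -1)
        return self.head(feats.max(dim=1).values)  # pool over views


# End-to-end: the classification loss back-propagates into the view regressor.
regressor, classifier = ViewPointRegressor(), MultiViewClassifier()
shape_feat = torch.randn(4, 256)             # assumed global shape features
points = torch.randn(4, 1024, 3)             # dummy point clouds
angles = regressor(shape_feat)               # (4, 8, 2) learned view-points
views = differentiable_render(points, angles)
logits = classifier(views)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 40, (4,)))
loss.backward()                              # gradients reach the regressor
```

The key design point this sketch tries to convey is that the view-points are no longer fixed hyperparameters: because the renderer is differentiable, they become learnable per-shape quantities optimized by the same downstream classification (or retrieval) objective.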


Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
3D Object Retrieval | ModelNet40 | MVTN | Mean AP | 92.9 | #1
3D Point Cloud Classification | ModelNet40 | MVTN | Overall Accuracy | 93.8 | #32
3D Point Cloud Classification | ModelNet40 | MVTN | Mean Accuracy | 92.2 | #3
3D Object Retrieval | ShapeNetCore 55 | MVTN | Mean AP | 82.9 | #1

Results from Other Papers


Task | Dataset | Model | Metric | Value | Rank
3D Point Cloud Classification | ScanObjectNN | MVTN | Overall Accuracy | 82.8 | #51
3D Point Cloud Classification | ScanObjectNN | MVTN | OBJ-BG (OA) | 92.6 | #12
3D Point Cloud Classification | ScanObjectNN | MVTN | OBJ-ONLY (OA) | 92.3 | #8

Methods


No methods listed for this paper.