Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks

1 May 2023  ·  Shota Hattori, Tatsuya Yatagawa, Yutaka Ohtake, Hiromasa Suzuki

In this paper, we present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input, without any training dataset. Additionally, our method maintains the polygonal mesh format throughout the inpainting process, without converting the shape to an intermediate representation, such as a voxel grid, a point cloud, or an implicit function, which are typically considered easier for deep neural networks to process. To achieve this goal, we introduce two graph convolutional networks (GCNs): a single-resolution GCN (SGCN) and a multi-resolution GCN (MGCN), both trained in a self-supervised manner. Our approach refines a watertight mesh obtained from an initial hole filling to generate a complete output mesh. Specifically, we train the GCNs to deform an oversmoothed version of the input mesh into the expected complete shape. The deformation is described by vertex displacements, and the GCNs are supervised to predict accurate displacements at the vertices inside the real holes. To this end, we designate several connected regions of the mesh as fake holes, thereby generating meshes with various sets of fake holes. The correct vertex displacements are known in these fake holes, which enables us to train the GCNs with loss functions that assess the accuracy of the predicted displacements. We demonstrate that our method outperforms traditional dataset-independent approaches and is more robust than other deep-learning-based methods on shapes that rarely appear in shape datasets. Our code and test data are available at https://github.com/astaka-pe/SeMIGCN.
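The core idea, supervising per-vertex displacements only where self-generated fake holes make the ground truth known, can be illustrated with a minimal sketch. This is not the authors' implementation; the model, the dense normalized adjacency, and names such as DisplacementGCN and fake_hole_mask are simplifying assumptions for illustration only.

    # Minimal PyTorch sketch of self-supervised displacement learning with fake holes.
    # Assumed inputs: smoothed_pos (V, 3) vertex positions of the oversmoothed mesh,
    # target_pos (V, 3) original positions (known outside real holes), adj_norm (V, V)
    # a row-normalized adjacency matrix, fake_hole_mask (V,) boolean mask of fake-hole vertices.
    import torch
    import torch.nn as nn

    class SimpleGCNLayer(nn.Module):
        """One graph-convolution layer: average neighbor features via the
        normalized adjacency, then apply a learned linear map."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj_norm):
            return torch.relu(self.linear(adj_norm @ x))

    class DisplacementGCN(nn.Module):
        """Stack of graph convolutions mapping smoothed vertex positions
        to per-vertex 3D displacements (illustrative stand-in for SGCN)."""
        def __init__(self, hidden=64):
            super().__init__()
            self.layers = nn.ModuleList([
                SimpleGCNLayer(3, hidden),
                SimpleGCNLayer(hidden, hidden),
            ])
            self.head = nn.Linear(hidden, 3)

        def forward(self, pos, adj_norm):
            x = pos
            for layer in self.layers:
                x = layer(x, adj_norm)
            return self.head(x)

    def train_step(model, optimizer, smoothed_pos, target_pos, adj_norm, fake_hole_mask):
        """One self-supervised step: the displacement loss is evaluated only at
        fake-hole vertices, where the true displacement is known."""
        optimizer.zero_grad()
        pred_disp = model(smoothed_pos, adj_norm)
        true_disp = target_pos - smoothed_pos
        loss = ((pred_disp - true_disp)[fake_hole_mask] ** 2).mean()
        loss.backward()
        optimizer.step()
        return loss.item()

In practice, following the abstract, many different fake-hole sets would be generated so that the network learns a prior that transfers to the real holes, where the displacements are unknown.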
