1 code implementation • 1 May 2023 • Shota Hattori, Tatsuya Yatagawa, Yutaka Ohtake, Hiromasa Suzuki
In this paper, we present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input, without the need for any training datasets.
1 code implementation • 5 Mar 2023 • Yuta Tsuji, Tatsuya Yatagawa, Hiroyuki Kubo, Shigeo Morishima
This paper presents an algorithm to obtain an event-based video from noisy frames given by physics-based Monte Carlo path tracing over a synthetic 3D scene.
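The core of any frame-to-event conversion is the event camera's triggering rule: a pixel emits an event when its log-intensity changes by more than a contrast threshold since the last event. The sketch below implements only this generic rule (as used in standard event-camera simulators); it is not the paper's noise-aware method for Monte Carlo renders, and the function name and threshold value are illustrative.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-6):
    """Naive per-pixel event generation from a frame sequence.
    Emits an event whenever the log-intensity changes by more than
    `threshold` since the last event at that pixel.
    Returns a list of (frame_index, y, x, polarity) tuples."""
    log_ref = np.log(frames[0] + eps)  # per-pixel reference log-intensity
    events = []
    for k, frame in enumerate(frames[1:], start=1):
        log_cur = np.log(frame + eps)
        diff = log_cur - log_ref
        fired = np.abs(diff) >= threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((k, int(y), int(x), int(np.sign(diff[y, x]))))
        # Reset the reference only where events fired, as a real sensor would.
        log_ref[fired] = log_cur[fired]
    return events

# Toy sequence: one pixel brightens enough to trigger a positive event.
frames = np.full((3, 2, 2), 0.5)
frames[2, 0, 0] = 1.0
print(frames_to_events(frames, threshold=0.2))  # [(2, 0, 0, 1)]
```

The per-pixel reference reset is what makes the output sparse: unchanged pixels produce no events at all, which is the property that makes event streams attractive for synthetic data generation.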
no code implementations • 14 Jul 2022 • Tatsuya Yatagawa, Yutaka Ohtake, Hiromasa Suzuki
To solve this problem, we consider the estimated rigid transformation as a function of input point clouds and derive its analytic gradients using the implicit function theorem.
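The implicit function theorem lets one differentiate the *solution* of a solver without backpropagating through its iterations: if g(x, θ) = 0 defines x as a function of θ, then dx/dθ = −(∂g/∂x)⁻¹ ∂g/∂θ evaluated at the solution. The scalar relation below is a hypothetical stand-in for the point-cloud registration problem, meant only to illustrate the mechanism.

```python
import numpy as np

def g(x, theta):
    # Implicit relation g(x, theta) = 0 defining x as a function of theta.
    return x - theta * np.cos(x)

def solve_x(theta, x0=0.0, iters=50):
    # Newton's method: the inner solver whose output we differentiate.
    x = x0
    for _ in range(iters):
        x -= g(x, theta) / (1.0 + theta * np.sin(x))  # dg/dx = 1 + theta*sin(x)
    return x

def dx_dtheta(theta):
    # Implicit function theorem: dx/dtheta = -(dg/dx)^(-1) * dg/dtheta,
    # evaluated at the solution; no differentiation through solver steps.
    x = solve_x(theta)
    dg_dx = 1.0 + theta * np.sin(x)
    dg_dtheta = -np.cos(x)
    return -dg_dtheta / dg_dx

theta = 0.8
analytic = dx_dtheta(theta)
numeric = (solve_x(theta + 1e-6) - solve_x(theta - 1e-6)) / 2e-6
print(analytic, numeric)  # the two gradients agree closely
```

The same pattern generalizes to the vector case, where ∂g/∂x becomes a Jacobian and the division becomes a linear solve; the memory cost is independent of the number of solver iterations.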
1 code implementation • ECCV 2022 • Shota Hattori, Tatsuya Yatagawa, Yutaka Ohtake, Hiromasa Suzuki

In contrast to the original DIP, which transforms a fixed random code into a noise-free image with a neural network, we reproduce vertex displacements from a fixed random code and reproduce facet normals from feature vectors that summarize local triangle arrangements.
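The training loop behind this self-prior idea can be sketched in a heavily simplified 1-D form: a small network with randomly initialized weights is fit to a single noisy observation from a *fixed* random code, so only the weights are optimized. This is an illustrative analogy, not the paper's mesh architecture; the target signal, network shape, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A noisy 1-D "observation" standing in for damaged mesh geometry.
t = np.linspace(0, 2 * np.pi, 64)
noisy = np.sin(t) + 0.3 * rng.standard_normal(t.size)

# Fixed random code z: the input is never optimized, only the weights.
z = rng.standard_normal((1, 16))
W1 = 0.1 * rng.standard_normal((16, 64))
W2 = 0.1 * rng.standard_normal((64, t.size))

def forward(W1, W2):
    h = np.tanh(z @ W1)          # hidden features from the fixed code
    return h, (h @ W2).ravel()   # reconstructed signal

def mse(out):
    return float(np.mean((out - noisy) ** 2))

_, out0 = forward(W1, W2)
loss0 = mse(out0)

lr = 0.005
for _ in range(500):
    h, out = forward(W1, W2)
    err = (out - noisy)[None, :]         # dL/dout for an L2 fit
    gW2 = h.T @ err
    gh = err @ W2.T
    gW1 = z.T @ (gh * (1 - h ** 2))      # backprop through tanh
    W1 -= lr * gW1
    W2 -= lr * gW2

_, recon = forward(W1, W2)
print(loss0, mse(recon))  # training drives the fit loss down
```

In the actual method the "denoising" behavior comes from the network's structural bias, which favors reproducing coherent geometry before noise; this toy loop only demonstrates that the weights, driven by a fixed code, can absorb the observed signal.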
1 code implementation • 2 Jul 2021 • Shota Hattori, Tatsuya Yatagawa, Yutaka Ohtake, Hiromasa Suzuki
This paper addresses mesh restoration problems, i.e., denoising and completion, by learning self-similarity in an unsupervised manner.
no code implementations • 30 Nov 2018 • Ryota Natsume, Tatsuya Yatagawa, Shigeo Morishima
We herein represent the face region with a latent variable inferred by the proposed deep neural network (DNN), rather than with facial textures.
no code implementations • 10 Apr 2018 • Ryota Natsume, Tatsuya Yatagawa, Shigeo Morishima
The proposed network handles face and hair appearances independently in the latent spaces; face swapping is then achieved by replacing the latent-space representations of the faces and reconstructing the entire face image from them.