no code implementations • 25 Mar 2024 • Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka
A generative adversarial network (GAN)-based vocoder trained with an adversarial discriminator is commonly used for speech synthesis because it is fast and lightweight while producing high-quality speech.
no code implementations • 21 Mar 2024 • Shogo Sato, Takuhiro Kaneko, Kazuhiko Murasaki, Taiga Yoshida, Ryuichi Tanida, Akisato Kimura
To address this challenge, we propose a novel approach that utilizes only an image during inference while utilizing an image and LiDAR intensity during training.
no code implementations • ICCV 2023 • Takuhiro Kaneko
We propose a multi-input multi-output NeRF (MIMO-NeRF) that reduces the number of MLP runs by replacing the SISO MLP with a MIMO MLP and conducting mappings in a group-wise manner.
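The group-wise idea can be sketched numerically. This is a hypothetical toy, not the paper's architecture: a plain one-hidden-layer MLP stands in for the NeRF MLP, and the dimensions, grouping factor, and weight shapes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # One hidden layer with ReLU; x has shape (batch, d_in).
    return np.maximum(x @ w1, 0) @ w2

# SISO: each row carries one 3-D point and yields one 4-D output (e.g., RGB + density).
d_point, d_out, hidden, n_points, group = 3, 4, 64, 1024, 8
w1_siso = rng.normal(size=(d_point, hidden))
w2_siso = rng.normal(size=(hidden, d_out))
points = rng.normal(size=(n_points, d_point))
out_siso = mlp(points, w1_siso, w2_siso)          # one mapped row per point

# MIMO: concatenate `group` points into one input row and emit `group` outputs,
# so the MLP processes n_points / group rows instead of n_points rows.
w1_mimo = rng.normal(size=(d_point * group, hidden))
w2_mimo = rng.normal(size=(hidden, d_out * group))
grouped = points.reshape(n_points // group, d_point * group)
out_mimo = mlp(grouped, w1_mimo, w2_mimo).reshape(n_points, d_out)

print(out_siso.shape, out_mimo.shape)  # both (1024, 4)
```

The trade-off the paper addresses is that a grouped mapping is no longer a per-point function, so the output can depend on which points share a group.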
no code implementations • 14 Aug 2023 • Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Shogo Seki
Owing to the difficulty a 1D CNN has in modeling high-dimensional spectrograms, the frequency dimension is reduced via temporal upsampling.
no code implementations • 24 Mar 2023 • Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Shogo Seki
This architecture provides a generator with sufficiently rich information for the synthesized speech to be closely matched to the real speech.
1 code implementation • CVPR 2023 • Shogo Sato, Yasuhiro Yao, Taiga Yoshida, Takuhiro Kaneko, Shingo Ando, Jun Shimamura
Intrinsic image decomposition (IID) is the task of decomposing a natural image into albedo and shade.
no code implementations • CVPR 2022 • Takuhiro Kaneko
As an alternative to an AR-GAN, we propose an aperture rendering NeRF (AR-NeRF), which can utilize viewpoint and defocus cues in a unified manner by representing both factors in a common ray-tracing framework.
1 code implementation • 4 Mar 2022 • Takuhiro Kaneko, Kou Tanaka, Hirokazu Kameoka, Shogo Seki
In recent text-to-speech synthesis and voice conversion systems, a mel-spectrogram is commonly applied as an intermediate representation, and the necessity for a mel-spectrogram vocoder is increasing.
no code implementations • CVPR 2021 • Takuhiro Kaneko
Understanding the 3D world from 2D projected natural images is a fundamental challenge in computer vision and graphics.
3 code implementations • 25 Feb 2021 • Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Nobukatsu Hojo
With FIF, we apply a temporal mask to the input mel-spectrogram and encourage the converter to fill in missing frames based on surrounding frames.
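A minimal sketch of the masking step, with illustrative shapes and an identity function standing in for the learned converter (both are assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

n_mels, n_frames = 80, 128
mel = rng.normal(size=(n_mels, n_frames))   # stand-in mel-spectrogram

# Mask a random contiguous span of frames: these are the frames
# the converter is encouraged to fill in from their surroundings.
span = 16
start = int(rng.integers(0, n_frames - span))
mask = np.ones(n_frames, dtype=bool)
mask[start:start + span] = False
masked_mel = mel * mask                     # masked frames zeroed out

def converter(x):
    # Hypothetical stand-in for the learned converter (identity here),
    # so the loss below just measures what the mask removed.
    return x

filled = converter(masked_mel)
# Reconstruction loss restricted to the masked frames.
fill_in_loss = np.abs(filled[:, ~mask] - mel[:, ~mask]).mean()
print(round(float(fill_in_loss), 3))
```

In training, minimizing such a loss pushes the converter to infer missing frames from temporal context rather than copying its input.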
2 code implementations • 22 Oct 2020 • Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Nobukatsu Hojo
To address this, we examined the applicability of CycleGAN-VC/VC2 to mel-spectrogram conversion.
1 code implementation • 27 Aug 2020 • Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo
We previously proposed a method that allows for nonparallel voice conversion (VC) by using a variant of generative adversarial networks (GANs) called StarGAN.
no code implementations • 18 May 2020 • Hirokazu Kameoka, Wen-Chin Huang, Kou Tanaka, Takuhiro Kaneko, Nobukatsu Hojo, Tomoki Toda
The main idea we propose is an extension of the original VTN that can simultaneously learn mappings among multiple speakers.
no code implementations • CVPR 2021 • Takuhiro Kaneko, Tatsuya Harada
However, in contrast to NR-GAN, to handle irreversible degradation, we introduce masking architectures that adjust the degradation strength in a data-driven manner, using bypasses before and after the degradation.
2 code implementations • CVPR 2020 • Takuhiro Kaneko, Tatsuya Harada
Therefore, we propose distribution and transformation constraints that encourage the noise generator to capture only the noise-specific components.
3 code implementations • 29 Jul 2019 • Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Nobukatsu Hojo
To bridge this gap, we rethink conditional methods of StarGAN-VC, which are key components for achieving non-parallel multi-domain VC in a single model, and propose an improved variant called StarGAN-VC2.
1 code implementation • 6 May 2019 • Takuhiro Kaneko, Tatsuya Harada
This problem is challenging in terms of scalability because it requires the learning of numerous mappings, the number of which increases in proportion to the number of domains.
no code implementations • 9 Apr 2019 • Hirokazu Kameoka, Kou Tanaka, Aaron Valero Puche, Yasunori Ohishi, Takuhiro Kaneko
We use the latent code of an input face image, encoded by the face encoder, as an auxiliary input to the speech converter, and train the converter so that the voice encoder can recover the original latent code from the generated speech.
6 code implementations • 9 Apr 2019 • Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Nobukatsu Hojo
Non-parallel voice conversion (VC) is a technique for learning the mapping from source to target speech without relying on parallel data.
no code implementations • 5 Apr 2019 • Kou Tanaka, Hirokazu Kameoka, Takuhiro Kaneko, Nobukatsu Hojo
WaveCycleGAN was recently proposed to bridge the gap between natural and synthesized speech waveforms in statistical parametric speech synthesis; it provides fast inference with a moving-average model rather than an autoregressive one, and high-quality speech synthesis through adversarial training.
2 code implementations • 27 Nov 2018 • Takuhiro Kaneko, Yoshitaka Ushiku, Tatsuya Harada
To overcome this limitation, we address a novel problem called class-distinct and class-mutual image generation, in which the goal is to construct a generator that can capture between-class relationships and generate an image selectively conditioned on the class specificity.
3 code implementations • CVPR 2019 • Takuhiro Kaneko, Yoshitaka Ushiku, Tatsuya Harada
To remedy this, we propose a novel family of GANs called label-noise robust GANs (rGANs), which, by incorporating a noise transition model, can learn a clean label conditional generative distribution even when training labels are noisy.
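The noise transition idea can be illustrated in isolation. The matrix values and the clean prediction below are made-up stand-ins; the point is only the mechanics: a transition matrix T, with T[i, j] = P(noisy label j | clean label i), maps a clean class distribution to the noisy-label distribution that is actually matched against observed labels.

```python
import numpy as np

n_classes = 3
# Hypothetical symmetric noise: 80% of labels stay clean,
# 20% flip uniformly to one of the other classes.
eps = 0.2
T = np.full((n_classes, n_classes), eps / (n_classes - 1))
np.fill_diagonal(T, 1.0 - eps)

# Stand-in clean conditional distribution predicted for one sample.
p_clean = np.array([0.7, 0.2, 0.1])

# Training matches predictions against noisy labels, so the clean
# prediction is pushed through T before the comparison.
p_noisy = p_clean @ T
print(np.round(p_noisy, 3))  # [0.59 0.24 0.17]
```

Because T is applied only during training, the underlying clean conditional distribution is what the generator ends up modeling.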
no code implementations • 9 Nov 2018 • Kou Tanaka, Hirokazu Kameoka, Takuhiro Kaneko, Nobukatsu Hojo
This paper describes a method for voice conversion (VC) tasks based on sequence-to-sequence (Seq2Seq) learning with attention and a context preservation mechanism.
no code implementations • 5 Nov 2018 • Hirokazu Kameoka, Kou Tanaka, Damian Kwasny, Takuhiro Kaneko, Nobukatsu Hojo
Second, it achieves many-to-many conversion by simultaneously learning mappings among multiple speakers using only a single model instead of separately learning mappings between each speaker pair using a different model.
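One common way such single-model many-to-many conversion is realized is by conditioning a shared converter on a target-speaker code; the sketch below uses a one-hot code and a toy MLP, all of which are illustrative assumptions rather than the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

n_speakers, d_feat, hidden = 4, 36, 64
# One shared parameter set; the target speaker enters as a one-hot code
# concatenated to the acoustic features, so every speaker pair reuses it.
w1 = rng.normal(size=(d_feat + n_speakers, hidden))
w2 = rng.normal(size=(hidden, d_feat))

def convert(features, target_speaker):
    # features: (frames, d_feat); target_speaker: integer index.
    code = np.zeros(n_speakers)
    code[target_speaker] = 1.0
    x = np.concatenate([features, np.tile(code, (len(features), 1))], axis=1)
    return np.maximum(x @ w1, 0) @ w2

src = rng.normal(size=(100, d_feat))
# The same model converts toward any of the n_speakers targets.
outs = [convert(src, t) for t in range(n_speakers)]
print(len(outs), outs[0].shape)  # 4 (100, 36)
```

With separate pairwise models, n_speakers speakers would instead require a different parameter set for each ordered speaker pair.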
no code implementations • 25 Sep 2018 • Kou Tanaka, Takuhiro Kaneko, Nobukatsu Hojo, Hirokazu Kameoka
The experimental results demonstrate that our proposed method can 1) alleviate the over-smoothing effect of the acoustic features, despite modifying the waveform directly, and 2) greatly improve the naturalness of the generated speech sounds.
2 code implementations • 13 Aug 2018 • Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo
Such situations can be avoided by introducing an auxiliary classifier and training the encoder and decoder so that the attribute classes of the decoder outputs are correctly predicted by the classifier.
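The auxiliary-classifier term reduces to a cross-entropy between the classifier's prediction on decoder outputs and the intended attribute class. The sketch below uses a linear softmax classifier and random stand-in decoder outputs, all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes, d_feat, batch = 3, 36, 8

def classifier(x, w):
    # Softmax attribute classifier applied to decoder outputs.
    logits = x @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

w_cls = rng.normal(size=(d_feat, n_classes))
decoded = rng.normal(size=(batch, d_feat))            # stand-in decoder outputs
target = rng.integers(0, n_classes, size=batch)       # intended attribute classes

probs = classifier(decoded, w_cls)
# Auxiliary loss: cross-entropy between predicted and intended classes;
# minimizing it w.r.t. the encoder/decoder steers outputs toward the target class.
aux_loss = -np.log(probs[np.arange(batch), target] + 1e-12).mean()
print(aux_loss > 0)  # True
```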
13 code implementations • 6 Jun 2018 • Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo
This paper proposes a method that allows non-parallel many-to-many voice conversion (VC) by using a variant of a generative adversarial network (GAN) called StarGAN.
no code implementations • CVPR 2018 • Takuhiro Kaneko, Kaoru Hiramatsu, Kunio Kashino
This paper proposes the decision tree latent controller generative adversarial network (DTLC-GAN), an extension of a GAN that can learn hierarchically interpretable representations without relying on detailed supervision.
no code implementations • 6 Apr 2018 • Keisuke Oyamada, Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo, Hiroyasu Ando
In this paper, we address the problem of reconstructing a time-domain signal (or a phase spectrogram) solely from a magnitude spectrogram.
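For context, the classical baseline for this problem is the Griffin-Lim algorithm, which alternates between imposing the target magnitude and re-estimating a consistent phase via ISTFT/STFT round trips. The sketch below is that baseline, not the paper's proposed method; the signal, FFT sizes, and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.signal import stft, istft

fs, nperseg, n = 8000, 256, 8192       # n is a multiple of the hop (128)
t = np.arange(n) / fs
target = np.sin(2 * np.pi * 440 * t)   # reference signal
_, _, Z = stft(target, fs, nperseg=nperseg)
magnitude = np.abs(Z)                  # only the magnitude is kept

# Griffin-Lim: start from random phase, then iterate round trips.
rng = np.random.default_rng(0)
spec = magnitude * np.exp(2j * np.pi * rng.random(magnitude.shape))
for _ in range(50):
    _, x = istft(spec, fs, nperseg=nperseg)
    x = x[:n]                          # keep a fixed length across iterations
    _, _, Z_est = stft(x, fs, nperseg=nperseg)
    spec = magnitude * np.exp(1j * np.angle(Z_est))

_, x_rec = istft(spec, fs, nperseg=nperseg)
# Relative spectral inconsistency of the final estimate.
err = np.linalg.norm(np.abs(Z_est) - magnitude) / np.linalg.norm(magnitude)
print(round(float(err), 3))
```

Learning-based methods like the one in this entry aim to beat such iterative baselines in quality and speed.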
9 code implementations • 30 Nov 2017 • Takuhiro Kaneko, Hirokazu Kameoka
A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture model-based method under advantageous conditions, i.e., with access to parallel data and twice the amount of training data.
no code implementations • CVPR 2017 • Takuhiro Kaneko, Kaoru Hiramatsu, Kunio Kashino
This controller is based on a novel generative model called the conditional filtered generative adversarial network (CFGAN), which is an extension of the conventional conditional GAN (CGAN) that incorporates a filtering architecture into the generator input.