Search Results for author: Won Jang

Found 5 papers, 2 with code

Intelli-Z: Toward Intelligible Zero-Shot TTS

no code implementations · 25 Jan 2024 · Sunghee Jung, Won Jang, Jaesam Yoon, BongWan Kim

Zero-shot TTS demands extra effort to ensure clear pronunciation and speech quality, since it inherently replaces a core parameter (the speaker embedding or acoustic prompt) with an unseen one at inference time.

FastFit: Towards Real-Time Iterative Neural Vocoder by Replacing U-Net Encoder With Multiple STFTs

no code implementations · 18 May 2023 · Won Jang, Dan Lim, Heayoung Park

This paper presents FastFit, a novel neural vocoder architecture that replaces the U-Net encoder with multiple short-time Fourier transforms (STFTs) to achieve faster generation rates without sacrificing sample quality.
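The core idea above is computing STFTs at several temporal scales in place of a learned downsampling encoder. A minimal sketch of such a multi-scale STFT feature stack, using illustrative FFT and hop sizes that are assumptions for this example rather than FastFit's actual configuration:

```python
import numpy as np

def stft_mag(x, n_fft, hop):
    """Magnitude STFT via Hann-windowed framing + real FFT."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))  # (n_frames, n_fft//2 + 1)

# Halving the hop at each scale mimics the progressively downsampled
# feature maps a U-Net encoder would produce (hop values are illustrative).
x = np.random.randn(8192)
features = [stft_mag(x, n_fft=1024, hop=h) for h in (256, 128, 64)]
```

Each entry in `features` covers the same waveform at a different temporal resolution, so the decoder can consume them directly instead of waiting on a learned encoder pass.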

UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation

6 code implementations · 15 Jun 2021 · Won Jang, Dan Lim, Jaesam Yoon, BongWan Kim, Juntae Kim

Using full-band mel-spectrograms as input, the model aims to generate high-resolution signals by adding a discriminator that takes spectrograms of multiple resolutions as its input.
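A multi-resolution spectrogram discriminator sees the same waveform analyzed under several (FFT size, hop) pairs, each trading time precision against frequency precision. A sketch of preparing those inputs, with resolution values chosen for illustration rather than taken from the UnivNet paper:

```python
import numpy as np

def spectrogram(x, n_fft, hop):
    """Magnitude spectrogram via Hann-windowed framing + real FFT."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

# Hypothetical (n_fft, hop) pairs; each sub-discriminator would receive one.
resolutions = [(512, 128), (1024, 256), (2048, 512)]
wave = np.random.randn(16384)
disc_inputs = [spectrogram(wave, n, h) for n, h in resolutions]
```

Because the short-window spectrogram resolves transients while the long-window one resolves harmonics, a set of discriminators over these complementary views can penalize spectral blur that any single resolution would miss.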


Universal MelGAN: A Robust Neural Vocoder for High-Fidelity Waveform Generation in Multiple Domains

2 code implementations · 19 Nov 2020 · Won Jang, Dan Lim, Jaesam Yoon

To preserve sound quality when the MelGAN-based structure is trained with a dataset of hundreds of speakers, we added multi-resolution spectrogram discriminators to sharpen the spectral resolution of the generated waveforms.

JDI-T: Jointly trained Duration Informed Transformer for Text-To-Speech without Explicit Alignment

no code implementations · 15 May 2020 · Dan Lim, Won Jang, Gyeonghwan O, Heayoung Park, Bong-Wan Kim, Jaesam Yoon

We propose the Jointly trained Duration Informed Transformer (JDI-T), a feed-forward Transformer with a duration predictor, trained jointly without explicit alignments, that generates an acoustic feature sequence from input text.
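The duration predictor in such feed-forward TTS models drives a length-regulation step: each phoneme-level feature is repeated by its predicted duration in frames to form the frame-level sequence. A minimal sketch of that expansion (feature dimensions and durations here are made up for illustration):

```python
import numpy as np

def length_regulate(phoneme_feats, durations):
    """Expand phoneme-level features to frame level by repeating each
    phoneme's feature vector `durations[i]` times along the time axis."""
    return np.repeat(phoneme_feats, durations, axis=0)

feats = np.arange(6).reshape(3, 2)       # 3 phonemes, 2-dim features
durs = np.array([2, 1, 3])               # hypothetical predicted durations
expanded = length_regulate(feats, durs)  # 2 + 1 + 3 = 6 frames
```

Because the expansion is deterministic given the durations, the acoustic decoder needs no attention alignment at inference, which is what makes the explicit-alignment-free training above attractive.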
