no code implementations • 16 May 2020 • Tao Tu, Yuan-Jui Chen, Alexander H. Liu, Hung-Yi Lee
The experimental results demonstrate that with only an hour of paired speech data, regardless of whether the paired data comes from multiple speakers or a single speaker, the proposed model can generate intelligible speech in different voices.
no code implementations • 26 Oct 2019 • Jui-Yang Hsu, Yuan-Jui Chen, Hung-Yi Lee
In this paper, we propose applying a meta-learning approach to low-resource automatic speech recognition (ASR).
Automatic Speech Recognition (ASR) +2
no code implementations • 13 Apr 2019 • Tao Tu, Yuan-Jui Chen, Cheng-chieh Yeh, Hung-Yi Lee
In this paper, we aim to build TTS systems for low-resource (target) languages where only very limited paired data are available.