JVS corpus: free Japanese multi-speaker voice corpus

17 Aug 2019  ·  Shinnosuke Takamichi, Kentaro Mitsui, Yuki Saito, Tomoki Koriyama, Naoko Tanji, Hiroshi Saruwatari

Thanks to improvements in machine learning techniques, including deep learning, speech synthesis is becoming a machine learning task. To accelerate speech synthesis research, we are developing Japanese voice corpora that are reasonably accessible not only to academic institutions but also to commercial companies. In 2017, we released the JSUT corpus, which contains 10 hours of reading-style speech uttered by a single speaker, for end-to-end text-to-speech synthesis. In this paper, for more general use in speech synthesis research (e.g., voice conversion and multi-speaker modeling), we construct the JVS corpus, which contains voice data from 100 speakers in three styles (normal, whisper, and falsetto). The corpus contains 30 hours of voice data, including 22 hours of parallel normal voices. This paper describes how we designed the corpus and summarizes its specifications. The corpus is available at our project page.
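Below is a minimal sketch of how one might enumerate the released corpus for multi-speaker experiments. It assumes the commonly documented layout of the public release (a root directory such as jvs_ver1 with per-speaker folders jvs001–jvs100, each holding style subsets like parallel100, nonpara30, whisper10, and falset10 with wav24kHz16bit audio); these directory names are assumptions from the public distribution, not details stated in the abstract.

```python
from pathlib import Path

# Hypothetical local path to the downloaded corpus root (assumption).
CORPUS_ROOT = Path("jvs_ver1")

# Style subsets assumed from the public release layout (assumption).
STYLES = ["parallel100", "nonpara30", "whisper10", "falset10"]


def list_utterances(root: Path = CORPUS_ROOT):
    """Yield (speaker_id, style, wav_path) for every utterance found."""
    for speaker_dir in sorted(root.glob("jvs*")):
        for style in STYLES:
            wav_dir = speaker_dir / style / "wav24kHz16bit"
            if not wav_dir.is_dir():
                continue  # skip styles a speaker does not have
            for wav_path in sorted(wav_dir.glob("*.wav")):
                yield speaker_dir.name, style, wav_path


if __name__ == "__main__":
    utterances = list(list_utterances())
    print(f"found {len(utterances)} wav files")
```

A loader like this makes it easy to filter by style, e.g., keeping only the parallel normal-voice subset when training multi-speaker text-to-speech or voice conversion models.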


Categories

Sound
Audio and Speech Processing

Datasets


Introduced in the Paper:

JVS

Used in the Paper:

JSUT Corpus