1 code implementation • 7 Mar 2024 • Zhiying Zhu, Yiming Yang, Zhiqing Sun
Hallucinations pose a significant challenge to the reliability of large language models (LLMs) in critical domains.
1 code implementation • 30 Jun 2022 • Zhiying Zhu, Weixin Liang, James Zou
Motivated by this, we propose a novel task: dataset explanation.
3 code implementations • ACL 2022 • Jinglin Liu, Chengxi Li, Yi Ren, Zhiying Zhu, Zhou Zhao
Furthermore, we propose a latent-mapping algorithm that converts the amateur vocal tone to the professional one in the latent space.
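The snippet above does not specify the form of the latent mapping. As a minimal, purely illustrative sketch (not the paper's actual method), the idea of converting one style to another in a latent space can be shown with a learned affine map between two hypothetical sets of latent vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent embeddings: "amateur" latents and their "professional"
# counterparts. Here the professional latents are an affine transform of the
# amateur ones, purely so the toy mapping is exactly recoverable.
amateur = rng.normal(size=(100, 8))
professional = amateur * 1.5 + 0.3

# Fit a least-squares affine map (weights + bias) from amateur latents
# to professional latents.
X = np.hstack([amateur, np.ones((100, 1))])  # append bias column
W, _, _, _ = np.linalg.lstsq(X, professional, rcond=None)

def map_latent(z):
    """Map an amateur latent vector into the professional latent space."""
    return np.hstack([z, 1.0]) @ W

converted = map_latent(amateur[0])
```

In a real singing-voice system the mapping would be a learned neural module between encoder and decoder; the affine fit here only illustrates the "convert in latent space, then decode" idea.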
1 code implementation • 14 Jul 2021 • Jinglin Liu, Zhiying Zhu, Yi Ren, Wencan Huang, Baoxing Huai, Nicholas Yuan, Zhou Zhao
However, autoregressive (AR) decoding generates the current lip frame conditioned on previously generated frames, which inherently limits inference speed and also degrades the quality of the generated lip frames due to error propagation.
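The error-propagation problem with AR decoding can be illustrated with a toy 1-D "frame trajectory" (a hypothetical stand-in for lip frames, not the paper's model): when each frame is predicted from the previously generated frame, even a small systematic per-step error compounds over the sequence, whereas parallel (non-autoregressive) prediction leaves each frame's error independent.

```python
import numpy as np

T = 50                               # number of frames (hypothetical)
target = np.linspace(0.0, 1.0, T)    # idealized ground-truth trajectory
bias = 0.005                         # small systematic per-step prediction error

# Autoregressive: each frame builds on the *generated* previous frame,
# so the per-step bias accumulates across the sequence.
ar = np.empty(T)
ar[0] = target[0]
for t in range(1, T):
    ar[t] = ar[t - 1] + (target[t] - target[t - 1]) + bias

# Non-autoregressive: every frame is predicted in parallel from the
# conditioning input, so each frame carries only its own error.
nar = target + bias

ar_err = abs(ar[-1] - target[-1])    # grows roughly as (T - 1) * bias
nar_err = abs(nar[-1] - target[-1])  # stays at bias
```

The sequential loop also mirrors the speed issue: AR decoding must run T dependent steps, while the parallel variant produces all frames at once.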