1 code implementation • 14 Apr 2024 • Jiang Li, Xiangdong Su, Yeyun Gong, Guanglai Gao
Recent studies have highlighted the effectiveness of tensor decomposition methods in the Temporal Knowledge Graph Embedding (TKGE) task.
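As a hedged illustration of the idea (not this paper's specific model), a CP-style fourth-order tensor decomposition scores a temporal fact (head, relation, tail, timestamp) by the sum of element-wise products of the four embeddings; all names and the dimension below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

# Randomly initialised embeddings for a head entity, relation,
# tail entity, and timestamp (one vector each).
h, r, t, tau = (rng.standard_normal(d) for _ in range(4))

def cp_score(h, r, t, tau):
    """CP-style 4th-order score: sum of element-wise products."""
    return float(np.sum(h * r * t * tau))

score = cp_score(h, r, t, tau)
```

In training, scores of true quadruples are pushed above those of corrupted ones; the specific factorisation and regularisation vary by model.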
1 code implementation • 18 Mar 2024 • Yi Luo, Zhenghao Lin, Yuhao Zhang, Jiashuo Sun, Chen Lin, Chengjin Xu, Xiangdong Su, Yelong Shen, Jian Guo, Yeyun Gong
Subsequently, the retrieval model correlates new inputs with relevant guidelines, which guide LLMs in response generation to ensure safe and high-quality outputs, thereby aligning with human values.
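A minimal sketch of the retrieval step described above, under the assumption that inputs and guidelines are already embedded as vectors (the function name and toy data are hypothetical):

```python
import numpy as np

def cosine_retrieve(query_vec, guideline_vecs, top_k=1):
    """Return indices of the top-k guidelines most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    g = guideline_vecs / np.linalg.norm(guideline_vecs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per guideline
    return np.argsort(-sims)[:top_k]  # highest similarity first

# Toy example: three guideline embeddings, one query.
guidelines = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.9, 0.1])
idx = cosine_retrieve(query, guidelines, top_k=1)
```

The retrieved guideline(s) would then be prepended to the LLM prompt to steer generation.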
no code implementations • 15 Aug 2023 • Daobin Zhu, Xiangdong Su, Hongbin Zhang
Connectionist temporal classification (CTC) and attention-based encoder decoder (AED) joint training has been widely applied in automatic speech recognition (ASR).
Automatic Speech Recognition (ASR) +1
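The standard hybrid CTC/attention objective (a common formulation, not necessarily this paper's exact variant) interpolates the two branch losses with a weight λ:

```python
def joint_loss(ctc_loss, attention_loss, lam=0.3):
    """Interpolated CTC/attention objective used in hybrid ASR training.

    lam weights the CTC branch; values around 0.2-0.3 are common.
    Both inputs are per-batch scalar losses from the respective branches.
    """
    return lam * ctc_loss + (1.0 - lam) * attention_loss

loss = joint_loss(ctc_loss=2.0, attention_loss=1.0, lam=0.3)
```

The CTC branch encourages monotonic alignment while the attention branch models label dependencies; the interpolation lets each regularise the other.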
2 code implementations • 26 Jun 2023 • Jiang Li, Xiangdong Su, Fujun Zhang, Guanglai Gao
This paper presents a translation-based knowledge graph embedding method via efficient relation rotation (TransERR), a straightforward yet effective alternative to traditional translation-based knowledge graph embedding models.
Ranked #16 on Link Property Prediction on ogbl-wikikg2
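To illustrate the "translation after rotation" idea, here is a sketch using complex-valued (2D) rotations as a stand-in; TransERR itself rotates head and tail with unit quaternions, so this is a lower-dimensional analogue, and all names are illustrative:

```python
import numpy as np

def rotate(vec, theta):
    """Rotate a complex-valued embedding by unit phase e^{i*theta} (norm-preserving)."""
    return vec * np.exp(1j * theta)

def rotation_translation_score(h, r, t, theta_h, theta_t):
    """Translation distance after rotating head and tail separately;
    a perfect fit gives a score of 0 (higher, i.e. less negative, is better)."""
    return -np.linalg.norm(rotate(h, theta_h) + r - rotate(t, theta_t))

# Toy check: with no rotation and r = t - h, the triple fits exactly.
h = np.array([1 + 0j, 0 + 0j])
t = np.array([0 + 0j, 1 + 0j])
r = t - h
score = rotation_translation_score(h, r, t, 0.0, 0.0)
```

Rotating head and tail independently gives the model more freedom than a single translation vector while keeping entity norms unchanged.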
2 code implementations • 14 Dec 2022 • Jiashuo Sun, Hang Zhang, Chen Lin, Xiangdong Su, Yeyun Gong, Jian Guo
For the retriever, we adopt a number-aware negative sampling strategy to enable the retriever to be more discriminative on key numerical facts.
Ranked #1 on Conversational Question Answering on ConvFinQA
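A hedged sketch of what "number-aware negative sampling" could look like: prefer negative passages that contain numbers differing from the gold passage's, so the retriever must attend to numerical facts rather than surface wording (function names and data are hypothetical, not the paper's implementation):

```python
import re
import random

def numbers_in(text):
    """Extract the set of numeric tokens (integers/decimals) in a passage."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def number_aware_negatives(gold, candidates, k=2, seed=0):
    """Keep candidates that do contain numbers but whose numbers differ
    from the gold passage's, yielding 'hard' numerical negatives."""
    gold_nums = numbers_in(gold)
    hard = [c for c in candidates
            if numbers_in(c) and numbers_in(c) != gold_nums]
    rng = random.Random(seed)
    rng.shuffle(hard)
    return hard[:k]

gold = "revenue was 120 million"
candidates = ["revenue was 130 million",
              "the weather was nice",
              "profit was 120 million"]
negatives = number_aware_negatives(gold, candidates, k=2)
```

Passages with no numbers, or with the same numbers as the gold passage, are filtered out so the contrast is specifically numerical.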
no code implementations • 30 Mar 2022 • Zhenhao Jin, Xiang Hao, Xiangdong Su
This paper formulates the speech separation with the unknown number of speakers as a multi-pass source extraction problem and proposes a coarse-to-fine recursive speech separation method.
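The multi-pass idea can be sketched as a loop that peels off one estimated source per pass until the residual is near-silent, so the speaker count never needs to be known in advance (`extract_fn` stands in for the separation model; all names are illustrative):

```python
import numpy as np

def recursive_separate(mixture, extract_fn, energy_thresh=1e-3, max_passes=8):
    """Multi-pass source extraction: extract one source per pass from the
    residual, stopping when the residual energy drops below a threshold."""
    residual, sources = mixture.astype(float), []
    for _ in range(max_passes):
        if np.mean(residual ** 2) < energy_thresh:
            break                       # residual is (near) silent: done
        est = extract_fn(residual)      # one extracted source (model stand-in)
        sources.append(est)
        residual = residual - est       # remove it and recurse on the rest
    return sources, residual

# Toy stand-in model that always "extracts" half the residual.
halver = lambda res: 0.5 * res
sources, residual = recursive_separate(np.ones(4), halver)
```

A real system would also refine each coarse estimate (the "fine" stage) before subtracting it, which this sketch omits.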
no code implementations • COLING 2020 • Na Liu, Xiangdong Su, Haoran Zhang, Guanglai Gao, Feilong Bao
The inner-word encoder uses the self-attention mechanisms to capture the inner-word features of the target word.
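Self-attention over the characters of a word can be sketched with the standard scaled dot-product formulation (this is the generic mechanism, not the paper's exact encoder; matrix names are illustrative):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: each position attends to all others.

    X: (n, d) input features (e.g. one row per character of a word).
    Wq, Wk, Wv: projection matrices for queries, keys, and values.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))                      # 4 characters, 3 features
Wq, Wk, Wv = (rng.standard_normal((3, 2)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

Each output row is a weighted mixture of all positions' values, which is what lets the encoder capture inner-word dependencies.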
6 code implementations • 29 Oct 2020 • Xiang Hao, Xiangdong Su, Radu Horaud, Xiaofei Li
In our proposed FullSubNet, we connect a pure full-band model and a pure sub-band model sequentially and use practical joint training to integrate these two types of models' advantages.
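The sequential full-band/sub-band composition can be sketched as follows, with `full_band_fn` and `sub_band_fn` standing in for the two learned models (a structural sketch only, not FullSubNet's actual architecture):

```python
import numpy as np

def fullsubnet_like(spectrogram, full_band_fn, sub_band_fn, half_ctx=2):
    """Sequential full-band -> sub-band processing.

    The full-band model sees the whole spectrum per frame; each sub-band
    unit then sees one frequency's local neighbourhood plus the full-band
    output for that frequency, and predicts that frequency's mask.
    """
    F, T = spectrogram.shape
    fb_out = full_band_fn(spectrogram)             # (F, T): global spectral pattern
    padded = np.pad(spectrogram, ((half_ctx, half_ctx), (0, 0)), mode="reflect")
    mask = np.empty((F, T))
    for f in range(F):
        local = padded[f:f + 2 * half_ctx + 1]     # (2*half_ctx+1, T) neighbourhood
        mask[f] = sub_band_fn(local, fb_out[f])
    return mask

# Toy stand-ins: identity models, so the "mask" just echoes the input.
spec = np.random.default_rng(0).random((5, 3))
mask = fullsubnet_like(spec, lambda s: s, lambda local, fb: fb)
```

Feeding the full-band output into every sub-band unit is what lets local spectral models exploit global context.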
no code implementations • 29 Oct 2020 • Xiang Hao, Xiangdong Su, Zhiyu Wang, Hui Zhang, Batushiren
This approach consists of a generator network and a discriminator network, which operate directly in the time domain.
no code implementations • 11 Jun 2020 • Huali Xu, Xiangdong Su, Meng Wang, Xiang Hao, Guanglai Gao
The mask shrinking strategy is employed in the image completion model to track the areas to be repaired.
no code implementations • 29 May 2020 • Xiang Hao, Xiangdong Su, Zhiyu Wang, Qiang Zhang, Huali Xu, Guanglai Gao
Specifically, this method consists of multiple teacher models and a student model.
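One simple way to combine multiple teachers into a training target for the student is a weighted average of their outputs, trained against with a regression loss; this is a generic distillation sketch under that assumption, not the paper's exact scheme:

```python
import numpy as np

def distill_target(teacher_outputs, weights=None):
    """Fuse several teachers' outputs into one soft target for the student
    (here a simple weighted average; uniform weights by default)."""
    teacher_outputs = np.stack(teacher_outputs)
    if weights is None:
        weights = np.full(len(teacher_outputs), 1.0 / len(teacher_outputs))
    return np.tensordot(weights, teacher_outputs, axes=1)

def student_loss(student_out, target):
    """Mean-squared error between the student output and the fused target."""
    return float(np.mean((student_out - target) ** 2))

# Toy example: two teachers, uniform fusion.
target = distill_target([np.array([0.0, 0.0]), np.array([2.0, 2.0])])
loss = student_loss(np.array([1.0, 1.0]), target)
```

Non-uniform weights would let more reliable teachers dominate the target.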
no code implementations • 29 May 2020 • Xiang Hao, Shixue Wen, Xiangdong Su, Yun Liu, Guanglai Gao, Xiaofei Li
In single-channel speech enhancement, methods based on full-band spectral features have been widely studied.
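"Full-band spectral features" are typically framed FFT magnitudes of the waveform; a minimal sketch (frame and hop sizes are illustrative defaults, not the paper's settings):

```python
import numpy as np

def magnitude_spectrogram(signal, frame_len=256, hop=128):
    """Full-band spectral features: windowed, framed FFT magnitudes.

    Returns an array of shape (num_frames, frame_len // 2 + 1).
    """
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=-1))

spec = magnitude_spectrogram(np.random.default_rng(0).standard_normal(1024))
```

An enhancement model operating on such features sees the entire frequency axis at once, in contrast to sub-band methods that process each frequency's neighbourhood separately.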