no code implementations • 8 Dec 2023 • Zhixin Guo, Jianping Zhou, Jiexing Qi, Mingxuan Yan, Ziwei He, Guanjie Zheng, Zhouhan Lin, Xinbing Wang, Chenghu Zhou
The sheer volume of scientific experimental results and complex technical statements, often presented in tabular form, poses a formidable barrier to individuals seeking the information they need.
1 code implementation • 13 Nov 2023 • Ziwei He, Jian Yuan, Le Zhou, Jingwen Leng, Bo Jiang
The quadratic complexity of self-attention in Transformers has hindered the processing of long text.
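To illustrate the complexity claim above, here is a minimal pure-Python sketch (not the paper's method) of vanilla self-attention: the score matrix has one entry per query-key pair, so time and memory grow as n² in the sequence length n.

```python
import math

def self_attention_scores(queries, keys):
    """Naive self-attention over toy lists of vectors.

    Builds an n x n score matrix (one row per query, one column per key),
    which is exactly where the quadratic cost in sequence length comes from.
    """
    d = len(queries[0])
    scores = []
    for q in queries:
        # Scaled dot-product score of this query against every key: O(n) per row.
        row = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        # Numerically stable softmax over the row.
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        scores.append([e / total for e in exps])
    return scores

# Toy input: n = 8 tokens, each a d = 4 dimensional vector.
n, d = 8, 4
x = [[float(i + j) for j in range(d)] for i in range(n)]
attn = self_attention_scores(x, x)
print(len(attn), len(attn[0]))  # 8 8 -> the full n x n attention matrix
```

Doubling n quadruples the number of entries in `attn`, which is why long-text processing motivates the sparse and low-rank attention variants studied in this line of work.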
1 code implementation • 24 May 2023 • Ziwei He, Meng Yang, Minwei Feng, Jingcheng Yin, Xinbing Wang, Jingwen Leng, Zhouhan Lin
Many researchers have focused on designing new forms of self-attention or introducing new parameters to overcome this limitation; however, many of these approaches prevent the model from inheriting weights from large pretrained models.
Ranked #1 on Open-Domain Question Answering on ELI5
1 code implementation • 24 Feb 2023 • Zhixin Guo, Mingxuan Yan, Jiexing Qi, Jianping Zhou, Ziwei He, Guanjie Zheng, Xinbing Wang
Pretrained language models (PLMs) have made remarkable progress in table-to-text generation tasks.
no code implementations • 9 Feb 2023 • Zhixin Guo, Mingxuan Yan, Jiexing Qi, Jianping Zhou, Ziwei He, Zhouhan Lin, Guanjie Zheng, Xinbing Wang
The design of our framework consists of two aspects: a prompt planner and a knowledge adapter.
1 code implementation • 14 May 2022 • Jiexing Qi, Jingyao Tang, Ziwei He, Xiangpeng Wan, Yu Cheng, Chenghu Zhou, Xinbing Wang, Quanshi Zhang, Zhouhan Lin
Our model can incorporate almost all types of existing relations in the literature, and in addition, we propose introducing co-reference relations for the multi-turn scenario.
Ranked #1 on Dialogue State Tracking on CoSQL