no code implementations • 7 Mar 2024 • Xiyan Fu, Anette Frank
In this paper, we introduce the Continual Compositional Generalization in Inference (C2Gen NLI) challenge, where a model continuously acquires knowledge of constituent primitive inference tasks as a basis for compositional inferences.
no code implementations • 14 Sep 2023 • Xiyan Fu, Anette Frank
Hence, we propose a dynamic modularized reasoning model, MORSE, to improve the compositional generalization of neural models.
1 code implementation • 10 Jun 2023 • Wei Liu, Xiyan Fu, Michael Strube
Coherence is an important aspect of text quality, and various approaches have been applied to coherence modeling.
no code implementations • 24 May 2023 • Xiyan Fu, Anette Frank
We propose SETI (Systematicity Evaluation of Textual Inference), a novel and comprehensive benchmark designed for evaluating pre-trained language models (PLMs) for their systematicity capabilities in the domain of textual inference.
no code implementations • ACL 2021 • Xiyan Fu, Yating Zhang, Tianyi Wang, Xiaozhong Liu, Changlong Sun, Zhenglu Yang
In the field of dialogue summarization, the scarcity of training data often makes it difficult for supervised summary generation methods to learn vital information from the dialogue context.
no code implementations • NAACL 2021 • Xiyan Fu, Jun Wang, Zhenglu Yang
Multimodal summarization becomes increasingly significant as it is the basis for question answering, Web search, and many other downstream tasks.
1 code implementation • ACL 2021 • Wei Liu, Xiyan Fu, Yue Zhang, Wenming Xiao
Lexicon information and pre-trained models, such as BERT, have been combined for Chinese sequence labelling tasks, leveraging their respective strengths.
1 code implementation • 17 Sep 2020 • Xiyan Fu, Jun Wang, Zhenglu Yang
Summarization of multimedia data becomes increasingly significant as it is the basis for many real-world applications, such as question answering and Web search.