1 code implementation • IEEE/CAA Journal of Automatica Sinica 2024 • Zhiming Zhang, Shangce Gao, Mengchu Zhou, Mengtao Yan, Shuyang Cao
In our experiments, MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder.
no code implementations • 24 May 2023 • Shuyang Cao, Lu Wang
Long document summarization systems are critical for domains with lengthy and jargon-laden text, yet they present significant challenges to researchers and developers with limited computing resources.
1 code implementation • 20 Dec 2022 • Liang Ma, Shuyang Cao, Robert L. Logan IV, Di Lu, Shihao Ran, Ke Zhang, Joel Tetreault, Alejandro Jaimes
The proliferation of automatic faithfulness metrics for summarization has produced a need for benchmarks to evaluate them.
no code implementations • 3 Nov 2022 • Shuyang Cao, Lu Wang
Despite exhibiting a smaller performance drop when tested on data drawn from a later time, linear prompts focus more on non-temporal information and are less sensitive to the given timestamps, according to human evaluations and sensitivity analyses.
no code implementations • ACL 2022 • Shuyang Cao, Lu Wang
In this work, we present HIBRIDS, which injects Hierarchical Biases foR Incorporating Document Structure into the calculation of attention scores.
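The core idea — adding a structure-dependent bias to raw attention scores before the softmax — can be sketched in a few lines. The function and variable names below (`bias_table`, `hops`) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_with_hierarchical_bias(q, k, bias_table, hops):
    """Toy self-attention where a bias, indexed by the hop distance
    between tokens' sections in the document structure tree, is added
    to the raw attention scores (illustrative sketch only)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)       # (n, n) raw dot-product scores
    scores = scores + bias_table[hops]  # structural bias per token pair
    return softmax(scores)

# 3 tokens; hops[i, j] is the structural distance between their sections
rng = np.random.default_rng(0)
q = rng.standard_normal((3, 4))
k = rng.standard_normal((3, 4))
bias_table = np.array([0.0, -1.0, -2.0])  # farther sections -> lower score
hops = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]])
attn = attention_with_hierarchical_bias(q, k, bias_table, hops)
print(attn.shape)  # (3, 3); each row is a valid attention distribution
```

In the paper the bias would be learned jointly with the model; here it is a fixed lookup table purely to show where the bias enters the score computation.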
3 code implementations • EMNLP 2021 • Shuyang Cao, Lu Wang
We study generating abstractive summaries that are faithful and factually consistent with the given articles.
1 code implementation • ACL 2021 • Shuyang Cao, Lu Wang
We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words.
no code implementations • NAACL 2021 • Shuyang Cao, Lu Wang
Using attention head masking, we are able to reveal the relation between encoder-decoder attentions and content selection behaviors of summarization models.
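The probing technique amounts to zeroing out selected heads' outputs and observing how the model's content selection changes. A minimal sketch, with a toy sum-based head combination rather than the model's learned output projection:

```python
import numpy as np

def mask_heads(head_outputs, masked):
    """Zero out selected attention heads' outputs before combining them,
    to probe each head's contribution (illustrative sketch, not the
    paper's exact procedure)."""
    out = head_outputs.copy()
    out[list(masked)] = 0.0
    return out.sum(axis=0)  # combine heads by summation (toy choice)

heads = np.ones((4, 5))            # 4 heads, each emitting a 5-dim output
full = mask_heads(heads, set())    # no ablation
probe = mask_heads(heads, {0, 2})  # ablate heads 0 and 2
print(full[0], probe[0])  # 4.0 2.0
```

Comparing summaries produced with and without a head masked reveals which heads drive content selection.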
1 code implementation • NAACL 2021 • Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, Lu Wang
The quadratic computational and memory complexities of large Transformers have limited their scalability for long document summarization.
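The quadratic bottleneck comes from full self-attention materializing a score for every token pair. A trivial illustration of how the score matrix grows with input length:

```python
def attention_matrix_cells(seq_len: int) -> int:
    """Number of pairwise score entries full self-attention stores
    for a sequence of seq_len tokens: seq_len squared."""
    return seq_len * seq_len

print(attention_matrix_cells(512))   # 262144
print(attention_matrix_cells(4096))  # 16777216 -- 8x longer input, 64x the cells
```

This is why long-document models replace full attention with sparse or hierarchical variants.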
no code implementations • NAACL 2021 • Shuyang Cao, Lu Wang
How can we generate summaries of different styles without requiring corpora in the target styles or training separate models?
no code implementations • 24 Sep 2018 • Shuyang Cao, Xipeng Qiu, Xuanjing Huang
Neural architectures for named entity recognition have achieved great success in the field of natural language processing.