no code implementations • NAACL (ACL) 2022 • Rui Zhang, Yangfeng Ji, Yue Zhang, Rebecca J. Passonneau
We then survey the benefits and best practices of contrastive learning for various downstream NLP applications, including Text Classification, Question Answering, Summarization, Text Generation, Interpretability and Explainability, Commonsense Knowledge and Reasoning, and Vision-and-Language. This tutorial aims to help researchers in the NLP and computational linguistics community understand this emerging topic and to promote future research directions in applying contrastive learning to NLP applications.
no code implementations • EMNLP 2021 • Sixuan Wu, Jian Li, Peng Zhang, Yue Zhang
Recent research has investigated quantum NLP, designing algorithms that process natural language in quantum computers, and also quantum-inspired algorithms that improve NLP performance on classical computers.
no code implementations • CCL 2020 • Shuailong Liang, Derek F. Wong, Yue Zhang
Based on 500,000 tweets posted on Twitter between January 22, 2020 and April 30, 2020 from different countries and regions, we study topics and public opinions related to COVID-19. We find both commonalities and differences in the concerns and views of Twitter users across countries, as well as differing sentiment toward different topics. We find that most tweets carry strong emotion, with expressions of love and support being especially common. Overall, public sentiment has gradually become more positive over time.
no code implementations • CCL 2020 • Meishan Zhang, Yue Zhang
Recent advances of multilingual word representations weaken the input divergences across languages, making cross-lingual transfer similar to the monolingual cross-domain and semi-supervised settings.
no code implementations • COLING (CogALex) 2020 • Lu Cao, Yulong Chen, Dandan Huang, Yue Zhang
Functional Magnetic Resonance Imaging (fMRI) provides a means to investigate human conceptual representation in cognitive and neuroscience studies, where researchers predict the fMRI activations with elicited stimuli inputs.
2 code implementations • Findings (ACL) 2022 • Sen yang, Leyang Cui, Ruoxi Ning, Di wu, Yue Zhang
Neural constituency parsers have reached practical performance on news-domain benchmarks.
1 code implementation • ACL 2022 • Yue Zhang, Parisa Kordjamshidi
In this paper, we investigate the problem of vision and language navigation.
no code implementations • EMNLP (sustainlp) 2021 • Yue Zhang, ChengCheng Hu, Yuqi Liu, Hui Fang, Jimmy Lin
It is well known that rerankers built on pretrained transformer models such as BERT have dramatically improved retrieval effectiveness in many tasks.
no code implementations • INLG (ACL) 2021 • Yulong Chen, Yang Liu, Yue Zhang
We propose a shared task on summarizing real-life scenario dialogues, DialogSum Challenge, to encourage researchers to address challenges in dialogue summarization, which has been less studied by the summarization community.
1 code implementation • ACL 2022 • Chenhua Chen, Zhiyang Teng, Zhongqing Wang, Yue Zhang
Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification.
no code implementations • EMNLP 2020 • Chen Jia, Yuefeng Shi, Qinrong Yang, Yue Zhang
We then integrate the entity information into BERT using Char-Entity-Transformer, which augments the self-attention using a combination of character and entity representations.
no code implementations • Findings (NAACL) 2022 • Yue Zhang, Hongliang Fei, Dingcheng Li, Ping Li
Recently, prompt learning has received significant attention, where the downstream tasks are reformulated to the mask-filling task with the help of a textual prompt.
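The mask-filling reformulation described above can be sketched in a few lines; this is an illustrative toy, and the template and verbalizer below are hypothetical rather than taken from the paper:

```python
# Sketch of prompt-based mask-filling for sentiment classification.
# A verbalizer maps the tokens a masked LM might predict back to task labels.
LABEL_WORDS = {"great": "positive", "terrible": "negative"}  # hypothetical verbalizer

def build_prompt(text: str, mask_token: str = "[MASK]") -> str:
    """Reformulate classification as a cloze task via a textual prompt."""
    return f"{text} It was {mask_token}."

def decode_label(predicted_token: str) -> str:
    """Map the model's filled-in token back to a task label."""
    return LABEL_WORDS.get(predicted_token, "unknown")

prompt = build_prompt("The movie had stunning visuals.")
print(prompt)                 # The movie had stunning visuals. It was [MASK].
# A masked LM would score candidate tokens at [MASK]; suppose it predicts "great".
print(decode_label("great"))  # positive
```

In practice the scoring step is done by a pretrained masked language model; only the template and verbalizer design are shown here.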
no code implementations • EMNLP 2020 • Chenhua Chen, Zhiyang Teng, Yue Zhang
Aspect-level sentiment analysis aims to recognize the sentiment polarity of an aspect or a target in a comment.
1 code implementation • COLING 2022 • Kaixin Wu, Yue Zhang, Bojie Hu, Tong Zhang
Extensive experiments on ten WMT machine translation tasks show that the proposed model is on average 1.35x faster (with almost no decrease in BLEU) than the state-of-the-art inference implementation.
1 code implementation • Findings (ACL) 2022 • Yafu Li, Yongjing Yin, Jing Li, Yue Zhang
Neural machine translation (NMT) has obtained significant performance improvement over the recent years.
no code implementations • 28 Apr 2024 • Hanmeng Liu, Zhiyang Teng, Chaoli Zhang, Yue Zhang
Chain-of-Thought (CoT) prompting has emerged as a pivotal technique for augmenting the inferential capabilities of language models during reasoning tasks.
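The prompt-construction side of CoT can be sketched as follows; this is a minimal illustration, and the exemplar question and rationale are made up for the example:

```python
# Sketch of few-shot Chain-of-Thought prompt construction: each exemplar
# includes an explicit reasoning rationale before its answer, and the final
# query is left open for the model to continue step by step.
def build_cot_prompt(exemplars, question):
    """Prepend worked examples with reasoning steps, then append the query."""
    parts = []
    for q, rationale, answer in exemplars:
        parts.append(f"Q: {q}\nA: {rationale} The answer is {answer}.")
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

exemplars = [(
    "Roger has 5 balls and buys 2 cans of 3 balls each. How many balls now?",
    "He starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
    "11",
)]
print(build_cot_prompt(exemplars, "A baker made 24 rolls and sold 9. How many remain?"))
```

The resulting string is what gets sent to the language model; eliciting the intermediate steps is what distinguishes CoT from direct-answer prompting.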
no code implementations • 25 Apr 2024 • Runzhe Zhan, Xinyi Yang, Derek F. Wong, Lidia S. Chao, Yue Zhang
While supervised fine-tuning (SFT) has been a straightforward approach for tailoring the output of foundation large language models (LLMs) to specific preferences, concerns have been raised about the depth of this alignment, with some critiques suggesting it is merely "superficial".
no code implementations • 22 Apr 2024 • Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter, Amit Garg, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Xia Song, Masahiro Tanaka, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, ZiYi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, Xiren Zhou
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone.
1 code implementation • 18 Apr 2024 • Fang Guo, Wenyu Li, Honglei Zhuang, Yun Luo, Yafu Li, Le Yan, Yue Zhang
The most recent pointwise Large Language Model (LLM) rankers have achieved remarkable ranking results.
no code implementations • 15 Apr 2024 • Yifei Yu, Shaocong Wang, Woyu Zhang, Xinyuan Zhang, Xiuzhe Wu, Yangu He, Jichang Yang, Yue Zhang, Ning Lin, Bo wang, Xi Chen, Songqi Wang, Xumeng Zhang, Xiaojuan Qi, Zhongrui Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
The GE harnesses the intrinsic stochasticity of resistive memory for efficient input encoding, while the PE achieves precise weight mapping through a Hardware-Aware Quantization (HAQ) circuit.
no code implementations • 10 Apr 2024 • Yongqiang Ma, Lizhi Qing, Jiawei Liu, Yangyang Kang, Yue Zhang, Wei Lu, Xiaozhong Liu, Qikai Cheng
Therefore, our study shifts the focus from model-centered to human-centered evaluation in the context of AI-powered writing assistance applications.
2 code implementations • 9 Apr 2024 • Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang
The rapid development of large language model (LLM) evaluation methodologies and datasets has led to a profound challenge: integrating state-of-the-art evaluation techniques cost-effectively while ensuring reliability, reproducibility, and efficiency.
no code implementations • 4 Apr 2024 • Jiawei Li, Yue Zhang
We conclude that the BERT base model can be improved by incorporating these features.
1 code implementation • 2 Apr 2024 • Bowen Ding, Qingkai Min, Shengkun Ma, Yingjie Li, Linyi Yang, Yue Zhang
Based on Pre-trained Language Models (PLMs), event coreference resolution (ECR) systems have demonstrated outstanding performance in clustering coreferential events across documents.
no code implementations • 31 Mar 2024 • Yue Zhang, Yuntian He, Saket Gurukar, Srinivasan Parthasarathy
To address this issue, we propose a Multi-Level Embedding framework of nodes on a heterogeneous graph (HeteroMILE) - a generic methodology that allows contemporary graph embedding methods to scale to large graphs.
1 code implementation • 18 Mar 2024 • Cunxiang Wang, Ruoxi Ning, Boqi Pan, Tonghui Wu, Qipeng Guo, Cheng Deng, Guangsheng Bao, Qian Wang, Yue Zhang
The rapid advancement of Large Language Models (LLMs) has introduced a new frontier in natural language processing, particularly in understanding and processing long-context information.
no code implementations • 13 Mar 2024 • Rongwu Xu, Zehan Qi, Cunxiang Wang, Hongru Wang, Yue Zhang, Wei Xu
This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge.
no code implementations • 8 Mar 2024 • Ziqi Gao, Yue Zhang, Xinwen Liu, Kaiyan Li, S. Kevin Zhou
Multi-contrast (MC) Magnetic Resonance Imaging (MRI) reconstruction aims to incorporate a reference image of auxiliary modality to guide the reconstruction process of the target modality.
no code implementations • 3 Mar 2024 • Mieradilijiang Maimaiti, Yuanhang Zheng, Ji Zhang, Fei Huang, Yue Zhang, Wenpei Luo, Kaiyu Huang
Semantic Retrieval (SR) has become an indispensable part of the FAQ system in the task-oriented question-answering (QA) dialogue scenario.
1 code implementation • 25 Feb 2024 • Guangsheng Bao, Hongbo Zhang, Linyi Yang, Cunxiang Wang, Yue Zhang
We further examine the factors influencing the causal structure of the implied SCM, revealing that in-context learning, supervised fine-tuning, and reinforcement learning on human feedback significantly impact the causal relations.
2 code implementations • 23 Feb 2024 • Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Wei Ye, Jindong Wang, Xing Xie, Yue Zhang, Shikun Zhang
Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness.
no code implementations • 22 Feb 2024 • Zhiyuan Wang, Jinhao Duan, Chenxi Yuan, Qingyu Chen, Tianlong Chen, Huaxiu Yao, Yue Zhang, Ren Wang, Kaidi Xu, Xiaoshuang Shi
Uncertainty estimation plays a pivotal role in ensuring the reliability of safety-critical human-AI interaction systems, particularly in the medical domain.
1 code implementation • 21 Feb 2024 • Jianhao Yan, Yun Luo, Yue Zhang
The application scope of large language models (LLMs) is increasingly expanding.
no code implementations • 21 Feb 2024 • Jianhao Yan, Futing Wang, Yafu Li, Yue Zhang
Large language models (LLMs) trained on vast corpora suffer from inevitable stereotype biases.
no code implementations • 20 Feb 2024 • Hanchen Xia, Feng Jiang, Naihao Deng, Cunxiang Wang, Guojiang Zhao, Rada Mihalcea, Yue Zhang
Modern LLMs have become increasingly powerful, but they are still facing challenges in specialized tasks such as Text-to-SQL.
no code implementations • 19 Feb 2024 • Jian Wu, Linyi Yang, Manabu Okumura, Yue Zhang
Although Large Language Models (LLMs) have shown strong performance on Multi-hop Question Answering (MHQA) tasks, their true reasoning ability remains underexplored.
no code implementations • 19 Feb 2024 • Naihao Deng, Zhenjie Sun, Ruiqi He, Aman Sikka, Yulong Chen, Lin Ma, Yue Zhang, Rada Mihalcea
In this paper, we investigate the effectiveness of various LLMs in interpreting tabular data through different prompting strategies and data formats.
no code implementations • 18 Feb 2024 • Liqiang Jing, Jingxuan Zuo, Yue Zhang
To evaluate the factuality of multimodal summarization models, we propose two fine-grained and explainable evaluation frameworks (FALLACIOUS) for different application scenarios, i.e., a reference-based factuality evaluation framework and a reference-free factuality evaluation framework.
1 code implementation • 4 Feb 2024 • Yue Zhang, Quan Guo, Parisa Kordjamshidi
The hint generator assists the navigation agent in developing a global understanding of the visual environment.
no code implementations • 31 Jan 2024 • Yue Zhang, Ben Colman, Ali Shahriyari, Gaurav Bharaj
State-of-the-art approaches rely on image-based features extracted via neural networks for the deepfake detection binary classification.
1 code implementation • 22 Jan 2024 • Li Lin, Neeraj Gupta, Yue Zhang, Hainan Ren, Chun-Hao Liu, Feng Ding, Xin Wang, Xin Li, Luisa Verdoliva, Shu Hu
The rapid advancement of Large AI Models (LAIMs), particularly diffusion models and large language models, has marked a new era where AI-generated multimedia is increasingly integrated into various aspects of daily life.
no code implementations • 6 Jan 2024 • Kaiyan Li, Jingyuan Yang, Wenxuan Liang, Xingde Li, Chenxi Zhang, Lulu Chen, Chan Wu, Xiao Zhang, Zhiyan Xu, Yuelin Wang, Lihui Meng, Yue Zhang, Youxin Chen, S. Kevin Zhou
Optical coherence tomography (OCT) is a noninvasive technology that enables real-time imaging of tissue microanatomies.
no code implementations • 3 Jan 2024 • Enbo He, Yitong Hao, Yue Zhang, Guisheng Yin, Lina Yao
Moreover, the node representations of normal entities can easily be perturbed by noisy relationships introduced by anomalous nodes.
3 code implementations • 27 Dec 2023 • Zijie Yang, Yongjing Yin, Chaojun Kong, Tiange Chi, Wufan Tao, Yue Zhang, Tian Xu
Natural Medicinal Materials (NMMs) have a long history of global clinical applications, accompanied by extensive informational records.
1 code implementation • 26 Dec 2023 • Linyi Yang, Shuibai Zhang, Zhuohao Yu, Guangsheng Bao, Yidong Wang, Jindong Wang, Ruochen Xu, Wei Ye, Xing Xie, Weizhu Chen, Yue Zhang
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
2 code implementations • 25 Dec 2023 • Yue Zhang, Leyang Cui, Wei Bi, Shuming Shi
Experimental results on both discrimination-based and generation-based hallucination evaluation benchmarks, such as TruthfulQA and FActScore, demonstrate that our proposed ICD methods can effectively enhance the factuality of LLMs across various model sizes and families.
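The contrast-then-decode idea behind such methods can be sketched numerically; this is an illustrative simplification, not the authors' exact ICD formulation, and the token probabilities below are made up:

```python
import math

# Sketch of contrastive decoding for factuality: down-weight tokens favored
# by a hallucination-prone (induced) model relative to the base model.
def contrastive_scores(base_logprobs, weak_logprobs, alpha=1.0):
    """score(t) = log p_base(t) - alpha * log p_weak(t), per candidate token t."""
    return {t: base_logprobs[t] - alpha * weak_logprobs[t] for t in base_logprobs}

# Hypothetical next-token distributions for "The capital of France is ..."
base = {"Paris": math.log(0.6), "Rome": math.log(0.4)}
weak = {"Paris": math.log(0.2), "Rome": math.log(0.8)}  # hallucination-induced model
scores = contrastive_scores(base, weak)
print(max(scores, key=scores.get))  # Paris
```

The contrast amplifies the base model's advantage on tokens the weak model does not favor, steering decoding away from the weak model's (hallucinated) preferences.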
no code implementations • 14 Dec 2023 • Shaocong Wang, Yizhao Gao, Yi Li, Woyu Zhang, Yifei Yu, Bo wang, Ning Lin, Hegan Chen, Yue Zhang, Yang Jiang, Dingchen Wang, Jia Chen, Peng Dai, Hao Jiang, Peng Lin, Xumeng Zhang, Xiaojuan Qi, Xiaoxin Xu, Hayden So, Zhongrui Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
Our random resistive memory-based deep extreme point learning machine may pave the way for energy-efficient and training-friendly edge AI across various data modalities and tasks.
no code implementations • 12 Dec 2023 • Yue Zhang, Ming Zhang, Haipeng Yuan, Shichun Liu, Yongyao Shi, Tao Gui, Qi Zhang, Xuanjing Huang
The three crucial questions for LLM evaluation are ``what, where, and how to evaluate''.
no code implementations • 4 Dec 2023 • Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, Yue Zhang
In the meantime, LLMs have also gained traction in the security community, revealing security vulnerabilities and showcasing their potential in security-related tasks.
1 code implementation • 22 Nov 2023 • Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng Zhang, Chenghu Zhou, Xinbing Wang, Luoyi Fu
Large Language Models (LLMs) have gained significant popularity for their impressive performance across diverse fields.
no code implementations • 15 Nov 2023 • Libo Qin, Wenbo Pan, Qiguang Chen, Lizi Liao, Zhou Yu, Yue Zhang, Wanxiang Che, Min Li
End-to-end task-oriented dialogue (EToD) can directly generate responses in an end-to-end fashion without modular training, and has attracted growing interest.
no code implementations • 11 Nov 2023 • Yue Zhang
Developmental bias plays a major role in phenotypic evolution.
no code implementations • 9 Nov 2023 • Xuhui Ding, Yue Zhang, Gaoyang Li, Neng Ye, Yuting Guo, Takuya Mabuchi, Hitomi Anzai, Kai Yang
The precise classification of jamming signals holds paramount significance in the effective implementation of anti-jamming strategies within communication systems subject to intricate environmental variables.
1 code implementation • 5 Nov 2023 • Jianling Li, Meishan Zhang, Peiming Guo, Min Zhang, Yue Zhang
Our experimental results demonstrate that self-training for constituency parsing, equipped with an LLM, outperforms traditional methods regardless of the LLM's performance.
1 code implementation • 30 Oct 2023 • Chiyu Song, Zhanchao Zhou, Jianhao Yan, Yuejiao Fei, Zhenzhong Lan, Yue Zhang
Instruction tuning is a burgeoning method to elicit the general intelligence of Large Language Models (LLMs).
no code implementations • 30 Oct 2023 • Xuefeng Bai, Jialong Wu, Yulong Chen, Zhongqing Wang, Yue Zhang
Constituency parsing is a fundamental yet unsolved natural language processing task.
1 code implementation • 24 Oct 2023 • Haofei Yu, Cunxiang Wang, Yue Zhang, Wei Bi
The Transformer architecture is crucial for numerous AI models, but it still faces challenges in long-range language modeling.
1 code implementation • 23 Oct 2023 • Tengxiao Liu, Qipeng Guo, Yuqing Yang, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang
As large language models (LLMs) have shown effectiveness with different prompting methods, such as Chain of Thought and Program of Thought, we find that these methods are highly complementary to each other on math reasoning tasks.
1 code implementation • 19 Oct 2023 • Cheng Jiayang, Lin Qiu, Tsz Ho Chan, Tianqing Fang, Weiqi Wang, Chunkit Chan, Dongyu Ru, Qipeng Guo, Hongming Zhang, Yangqiu Song, Yue Zhang, Zheng Zhang
Analogy-making between narratives is crucial for human reasoning.
1 code implementation • 13 Oct 2023 • Hanmeng Liu, Zhiyang Teng, Ruoxi Ning, Jian Liu, Qiji Zhou, Yue Zhang
Recently, large language models (LLMs), including notable models such as GPT-4 and burgeoning community models, have showcased significant general language understanding abilities.
1 code implementation • 11 Oct 2023 • Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, Yidong Wang, Linyi Yang, Jindong Wang, Xing Xie, Zheng Zhang, Yue Zhang
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
1 code implementation • 11 Oct 2023 • Yue Zhang, Leyang Cui, Enbo Zhao, Wei Bi, Shuming Shi
In this paper, we introduce RobustGEC, a benchmark designed to evaluate the context robustness of GEC systems.
1 code implementation • 11 Oct 2023 • Yu Zhang, Yue Zhang, Leyang Cui, Guohong Fu
In this work, we propose a novel non-autoregressive text editing method to circumvent the above issues, by modeling the edit process with latent CTC alignments.
1 code implementation • 9 Oct 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Fang Guo, Qinglin Qi, Jie zhou, Yue Zhang
Active learning (AL), which aims to construct an effective training set by iteratively curating the most informative unlabeled data for annotation, has been widely used in low-resource tasks.
1 code implementation • 8 Oct 2023 • Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, Yue Zhang
Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks.
1 code implementation • 8 Oct 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Jie zhou, Yue Zhang
However, we observe that merely concatenating sentences in a contextual window does not fully utilize contextual information and can sometimes lead to excessive attention on less informative sentences.
1 code implementation • 30 Sep 2023 • Jianhao Yan, Jin Xu, Chiyu Song, Chenming Wu, Yafu Li, Yue Zhang
This paper explores the elusive mechanism underpinning in-context learning in Large Language Models (LLMs).
1 code implementation • 3 Sep 2023 • Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
no code implementations • 25 Aug 2023 • Yanjie Song, Yutong Wu, Yangyang Guo, Ran Yan, P. N. Suganthan, Yue Zhang, Witold Pedrycz, Swagatam Das, Rammohan Mallipeddi, Oladayo Solomon Ajani, Qiang Feng
Reinforcement learning (RL), integrated as a component in the evolutionary algorithm (EA) framework, has demonstrated superior performance in recent years.
1 code implementation • 17 Aug 2023 • Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie zhou, Yue Zhang
Catastrophic forgetting (CF) is a phenomenon that occurs in machine learning when a model forgets previously learned information while acquiring new knowledge.
no code implementations • 31 Jul 2023 • Yue Zhang, Hehe Fan, Yi Yang, Mohan Kankanhalli
The proposed method, named Mixture of Depth and Point cloud video experts (DPMix), achieved first place in the 4D Action Segmentation Track of the HOI4D Challenge 2023.
1 code implementation • 22 Jul 2023 • Fu Lin, Haonan Gong, Mingkang Li, Zitong Wang, Yue Zhang, Xuexiong Luo
Previous works have observed that abnormal graphs mainly exhibit node-level and graph-level anomalies, yet these methods treat the two anomaly forms equally when evaluating abnormal graphs, ignoring the fact that different types of abnormal graph data exhibit node-level and graph-level anomalies to different degrees.
1 code implementation • 18 Jul 2023 • Dayu Yang, Yue Zhang, Hui Fang
Nevertheless, existing zero-shot methods face three primary limitations: they are not universally applicable to all retrievers, their effectiveness lacks sufficient explainability, and they struggle to resolve common conversational ambiguities caused by omission.
no code implementations • 17 Jul 2023 • Dayu Yang, Yue Zhang, Hui Fang
In this work, we aim to reproduce multi-stage retrieval pipelines and explore one of the potential benefits of involving mixed-initiative interaction in conversational passage retrieval scenarios: reformulating raw queries.
no code implementations • 14 Jul 2023 • Fan Ni, Xu Zhang, Jianhui Wu, Guan-Nan Dong, Aichun Zhu, Hui Liu, Yue Zhang
To the best of our knowledge, TVPRN is the first successful attempt to use video for the text-based person retrieval task, and it achieves state-of-the-art performance on the TVPReid dataset.
no code implementations • 14 Jul 2023 • Marc Demoustier, Yue Zhang, Venkatesh Narasimha Murthy, Florin C. Ghesu, Dorin Comaniciu
Tracking the catheter tip poses distinct challenges: the tip can be occluded by contrast agent during angiography or by interventional devices, and it is in continuous motion due to cardiac and respiratory movements.
1 code implementation • 8 Jul 2023 • Yulong Chen, Huajian Zhang, Yijie Zhou, Xuefeng Bai, Yueguan Wang, Ming Zhong, Jianhao Yan, Yafu Li, Judy Li, Michael Zhu, Yue Zhang
Additionally, based on the same intuition, we propose a 2-Step method, which takes both the conversation and the summary as input to simulate the human annotation process.
1 code implementation • 6 Jul 2023 • Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.
no code implementations • 1 Jul 2023 • Jiong Cai, Yong Jiang, Yue Zhang, Chengyue Jiang, Ke Yu, Jianhui Ji, Rong Xiao, Haihong Tang, Tao Wang, Zhongqiang Huang, Pengjun Xie, Fei Huang, Kewei Tu
We also show that pretraining the QE module with auto-generated QE data from user logs can further improve the overall performance.
1 code implementation • 20 Jun 2023 • Yafu Li, Leyang Cui, Jianhao Yan, Yongjing Yin, Wei Bi, Shuming Shi, Yue Zhang
Most existing text generation models follow the sequence-to-sequence paradigm.
no code implementations • 19 Jun 2023 • Dongyu Ru, Lin Qiu, Xipeng Qiu, Yue Zhang, Zheng Zhang
Discourse analysis is an important task because it models intrinsic semantic structures between sentences in a document.
1 code implementation • 15 Jun 2023 • Xiaoyi Bao, Xiaotong Jiang, Zhongqing Wang, Yue Zhang, Guodong Zhou
To address these challenges, we propose an opinion tree parsing model, aiming to parse all the sentiment elements from an opinion tree, which is much faster, and can explicitly reveal a more comprehensive and complete aspect-level sentiment structure.
2 code implementations • 8 Jun 2023 • Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences.
1 code implementation • 7 Jun 2023 • Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Yue Zhang, Neil Zhenqiang Gong, Xing Xie
The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts.
1 code implementation • 30 May 2023 • Yuqing Yang, Qipeng Guo, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang
Motivated by the fact that all event structures can be inferred from AMR, this work reformulates EAE as a link prediction problem on AMR graphs.
1 code implementation • 26 May 2023 • Cunxiang Wang, Haofei Yu, Yue Zhang
Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of generating answers by simultaneously referring to multiple passages.
1 code implementation • 26 May 2023 • Cunxiang Wang, Zhikun Xu, Qipeng Guo, Xiangkun Hu, Xuefeng Bai, Zheng Zhang, Yue Zhang
The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database.
1 code implementation • 25 May 2023 • Yue Zhang, Bo Zhang, Haochen Jiang, Zhenghua Li, Chen Li, Fei Huang, Min Zhang
We introduce NaSGEC, a new dataset to facilitate research on Chinese grammatical error correction (CGEC) for native speaker texts from multiple domains.
no code implementations • 23 May 2023 • Naihao Deng, YiKai Liu, Mingye Chen, Winston Wu, Siyang Liu, Yulong Chen, Yue Zhang, Rada Mihalcea
Our results show that our system can meet the diverse needs of NLP researchers and significantly accelerate the annotation process.
no code implementations • 23 May 2023 • Linyi Yang, Yaoxiao Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution.
1 code implementation • 22 May 2023 • Guangsheng Bao, Zhiyang Teng, Hao Zhou, Jianhao Yan, Yue Zhang
However, current NAT models still have a significant performance gap compared to their AT counterparts.
no code implementations • 22 May 2023 • Yue Zhang, Leyang Cui, Deng Cai, Xinting Huang, Tao Fang, Wei Bi
Proprietary Large Language Models (LLMs), such as ChatGPT, have garnered significant attention due to their exceptional capabilities in handling a diverse range of tasks.
1 code implementation • 22 May 2023 • Yafu Li, Qintong Li, Leyang Cui, Wei Bi, Longyue Wang, Linyi Yang, Shuming Shi, Yue Zhang
In practical scenarios, the detector faces texts from various domains or LLMs without knowing their sources.
no code implementations • 21 May 2023 • Yihua Cheng, Ziyi Zhang, Hanchen Li, Anton Arapin, Yue Zhang, Qizheng Zhang, YuHan Liu, Xu Zhang, Francis Y. Yan, Amrita Mazumdar, Nick Feamster, Junchen Jiang
In real-time video communication, retransmitting lost packets over high-latency networks is not viable due to strict latency requirements.
1 code implementation • NeurIPS 2023 • Cunxiang Wang, Sirui Cheng, Qipeng Guo, Yuanhao Yue, Bowen Ding, Zhikun Xu, Yidong Wang, Xiangkun Hu, Zheng Zhang, Yue Zhang
This study focuses on the evaluation of the Open Question Answering (Open-QA) task, which can directly estimate the factuality of large language models (LLMs).
1 code implementation • 20 May 2023 • Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang
LogiCoT serves as an instruction set for teaching models of logical reasoning and elicits general reasoning skills.
no code implementations • 20 May 2023 • Yun Luo, Xiaotian Lin, Zhen Yang, Fandong Meng, Jie zhou, Yue Zhang
Adapting the decision boundary to new representations is seldom considered. In this paper, we propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning (SCCL). In our method, a contrastive loss is used to directly learn representations for different tasks, and a limited number of data samples are saved to serve as the classification criterion.
no code implementations • 19 May 2023 • Ya-Lin Zhang, Jun Zhou, Yankun Ren, Yue Zhang, Xinxing Yang, Meng Li, Qitao Shi, Longfei Li
In this paper, we consider the problem of long-tail scenario modeling under budget limitations, i.e., insufficient human resources for the model training stage and limited time and computing resources for the model inference stage.
1 code implementation • 17 May 2023 • Hanxu Hu, Hongyuan Lu, Huajian Zhang, Yun-Ze Song, Wai Lam, Yue Zhang
To this end, we propose a novel method called CoS (Chain-of-Symbol Prompting) that represents the complex environments with condensed symbolic spatial representations during the chained intermediate thinking steps.
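The core idea of condensing verbose spatial descriptions into symbols can be sketched as follows; the symbol mapping below is hypothetical, chosen only to illustrate the spirit of Chain-of-Symbol prompting:

```python
# Sketch of replacing natural-language spatial relations with compact symbols
# before placing them in a prompt's intermediate reasoning steps.
RELATION_SYMBOLS = {"left of": "<", "right of": ">", "on top of": "^"}  # hypothetical

def symbolize(subject: str, relation: str, obj: str) -> str:
    """Encode one spatial fact as a compact symbolic triple."""
    return f"{subject} {RELATION_SYMBOLS[relation]} {obj}"

facts = [("lamp", "left of", "sofa"), ("box", "on top of", "table")]
chain = " / ".join(symbolize(*f) for f in facts)
print(chain)  # lamp < sofa / box ^ table
```

The condensed chain replaces lengthy prose descriptions of the environment, shortening the prompt while keeping the spatial structure explicit.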
1 code implementation • 15 May 2023 • Linyi Yang, Yingpeng Ma, Yue Zhang
Using FinTrust, we show that the consistency of state-of-the-art NLP models for financial forecasting is poor.
1 code implementation • 14 May 2023 • Yingjie Niu, Linyi Yang, Ruihai Dong, Yue Zhang
Our method has been theoretically and empirically shown to be effective in enhancing the generalization ability of both generative and discriminative models.
1 code implementation • 13 May 2023 • Yu Zhang, Siqi Chen, Mingdao Wang, Xianlin Zhang, Chuang Zhu, Yue Zhang, Xueming Li
Extensive experiments demonstrate that our method outperforms other methods in maintaining temporal consistency both qualitatively and quantitatively.
1 code implementation • 12 May 2023 • Hongliang He, Junlei Zhang, Zhenzhong Lan, Yue Zhang
Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performances in learning unsupervised sentence embeddings.
no code implementations • 10 May 2023 • Yun Luo, Zhen Yang, Xuefeng Bai, Fandong Meng, Jie zhou, Yue Zhang
Intuitively, the representation forgetting can influence the general knowledge stored in pre-trained language models (LMs), but the concrete effect is still unclear.
no code implementations • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Sequence-to-sequence (seq2seq) models have been widely used for natural language processing, computer vision, and other deep learning tasks.
1 code implementation • 8 May 2023 • Guangsheng Bao, Zhiyang Teng, Yue Zhang
Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns.
no code implementations • 3 May 2023 • Zebin Ou, Yue Zhang
Robust loss functions are designed to combat the adverse impacts of label noise, whose robustness is typically supported by theoretical bounds agnostic to the training dynamics.
1 code implementation • 26 Apr 2023 • Haiqin Xie, Cheng Wang, Shicheng Li, Yue Zhang, Shanshan Wang
In the realm of urban transportation, metro systems serve as crucial and sustainable means of public transit.
no code implementations • 14 Apr 2023 • Jiahua Dong, Guohua Cheng, Yue Zhang, Chengtao Peng, Yu Song, Ruofeng Tong, Lanfen Lin, Yen-Wei Chen
Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis.
1 code implementation • 13 Apr 2023 • Siqi Chen, Xueming Li, Xianlin Zhang, Mingdao Wang, Yu Zhang, Yue Zhang
Previous methods search for correspondence across the entire reference image, and this type of global matching is prone to mismatches.
no code implementations • 11 Apr 2023 • Yue Zhang, Chengtao Peng, Qiuli Wang, Dan Song, Kaiyan Li, S. Kevin Zhou
Besides, we propose a Dynamic Feature Unification Module to integrate information from a varying number of available modalities, which enables the network to be robust to random missing modalities.
1 code implementation • 7 Apr 2023 • Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang
With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as "advanced" at reasoning tasks, we are eager to examine GPT-4's performance on various logical reasoning tasks.
1 code implementation • 7 Apr 2023 • Guangsheng Bao, Zebin Ou, Yue Zhang
To address this issue, we propose an adaptive model, GEMINI, that integrates a rewriter and a generator to mimic the sentence rewriting and abstracting techniques, respectively.
no code implementations • 4 Apr 2023 • Tao Fang, Shu Yang, Kaixin Lan, Derek F. Wong, Jinpeng Hu, Lidia S. Chao, Yue Zhang
To showcase its capabilities in GEC, we design zero-shot chain-of-thought (CoT) and few-shot CoT settings using in-context learning for ChatGPT.
no code implementations • 27 Mar 2023 • Siqi Chen, Xueming Li, Xianlin Zhang, Mingdao Wang, Yu Zhang, Jiatong Han, Yue Zhang
Exemplar-based video colorization is an essential technique for applications like old movie restoration.
1 code implementation • 26 Mar 2023 • Yue Zhang, Suchen Wang, Shichao Kan, Zhenyu Weng, Yigang Cen, Yap-Peng Tan
Our key idea is to formulate the POAR problem as an image-text search problem.
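At inference time, the image-text search formulation reduces to ranking candidate attribute descriptions against an image embedding. A minimal numpy sketch of that retrieval step, assuming precomputed embeddings (illustrative of the formulation only, not the paper's model):

```python
import numpy as np

def search(image_emb, text_embs, top_k=3):
    # Score each attribute description against the image by cosine
    # similarity and return the indices and scores of the best matches.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = txt @ img
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]
```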
1 code implementation • 22 Mar 2023 • Fengji Zhang, Bei Chen, Yue Zhang, Jacky Keung, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, Weizhu Chen
The task of repository-level code completion is to continue writing the unfinished code based on a broader context of the repository.
no code implementations • 15 Mar 2023 • Han Yang, Qiuli Wang, Yue Zhang, Zhulin An, Chen Liu, Xiaohong Zhang, S. Kevin Zhou
Radiologists possess diverse training and clinical experiences, leading to variations in the segmentation annotations of lung nodules and resulting in segmentation uncertainty. Conventional methods typically select a single annotation as the learning target or attempt to learn a latent space comprising multiple annotations.
1 code implementation • Conference 2023 • Zhiyang Teng, Chenhua Chen, Yan Zhang, Yue Zhang
Experiments on various text generation benchmarks show the effectiveness of our proposed method.
1 code implementation • 22 Feb 2023 • Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie
In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.
1 code implementation • 18 Feb 2023 • Yue Zhang, Parisa Kordjamshidi
The mentioned landmarks are not recognizable by the navigation agent due to the different vision abilities of the instructor and the modeled agent.
1 code implementation • 16 Feb 2023 • Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi
Recent research has shown that integrating domain knowledge into deep learning architectures is effective -- it helps reduce the amount of required data, improves the accuracy of the models' decisions, and improves the interpretability of models.
1 code implementation • 8 Feb 2023 • Yun Luo, Zihan Liu, Stan Z. Li, Yue Zhang
(Dis)agreement detection aims to identify the authors' attitudes or positions (agree, disagree, neutral) towards a specific text.
no code implementations • 3 Feb 2023 • Hongmin Cai, Fei Qi, Junyu Li, Yu Hu, Yue Zhang, Yiu-ming Cheung, Bin Hu
Conventional clustering methods based on pairwise affinity usually suffer from the concentration effect when processing high-dimensional features with small sample sizes, resulting in inaccurate encoding of sample proximity and suboptimal clustering performance.
no code implementations • 27 Jan 2023 • Yaoxian Song, Penglei Sun, Yi Ren, Yu Zheng, Yue Zhang
To evaluate the effectiveness, we perform multi-level difficulty part language grounding grasping experiments and deploy our proposed model on a real robot.
no code implementations • 17 Dec 2022 • Chenyang Lyu, Linyi Yang, Yue Zhang, Yvette Graham, Jennifer Foster
User and product information associated with a review is useful for sentiment polarity prediction.
no code implementations • 8 Dec 2022 • Jianhao Yan, Jin Xu, Fandong Meng, Jie Zhou, Yue Zhang
In this work, we show that the issue arises from the inconsistency of label smoothing between the token-level and sequence-level distributions.
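As background, token-level label smoothing mixes the one-hot target with a uniform distribution over the vocabulary. A generic numpy sketch of the standard token-level recipe (not the paper's proposed fix):

```python
import numpy as np

def smoothed_nll(log_probs, target, epsilon=0.1):
    # log_probs: (num_tokens, vocab) log-probabilities; target: gold ids.
    # The smoothed target is (1 - eps) * one-hot + eps * uniform, so the
    # loss decomposes into a weighted sum of NLL and a uniform term.
    nll = -log_probs[np.arange(len(target)), target]
    uniform = -log_probs.mean(axis=-1)
    return ((1 - epsilon) * nll + epsilon * uniform).mean()
```

With `epsilon=0` this reduces exactly to the plain negative log-likelihood.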
1 code implementation • The 33rd British Machine Vision Conference 2022 • Ji Huang, Chao Liang, Yue Zhang, Zhongyuan Wang, Chunjie Zhang
Existing RA work can be generally divided into unsupervised methods and fully-supervised methods.
1 code implementation • 17 Nov 2022 • Yulong Chen, Yang Liu, Ruochen Xu, ZiYi Yang, Chenguang Zhu, Michael Zeng, Yue Zhang
The high annotation costs and diverse demands of various summarization tasks motivate the development of few-shot summarization.
1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.
no code implementations • 15 Nov 2022 • Yue Zhang, Zhenghua Li
Recently, Zhang et al. (2022) propose a syntax-aware grammatical error correction (GEC) approach, named SynGEC, showing that incorporating tailored dependency-based syntax of the input sentence is quite beneficial to GEC.
1 code implementation • 31 Oct 2022 • Tengxiao Liu, Qipeng Guo, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang
RLET iteratively performs single-step reasoning with sentence selection and deduction generation modules, from which the training signal is accumulated across the tree with an elaborately designed aligned reward function that is consistent with the evaluation.
1 code implementation • ACL 2022 • Jiangbin Zheng, Yile Wang, Ge Wang, Jun Xia, Yufei Huang, Guojiang Zhao, Yue Zhang, Stan Z. Li
Although contextualized embeddings generated from large-scale pre-trained models perform well in many tasks, traditional static embeddings (e.g., Skip-gram, Word2Vec) still play an important role in low-resource and lightweight settings due to their low computational cost, ease of deployment, and stability.
1 code implementation • 22 Oct 2022 • Yue Zhang, Bo Zhang, Zhenghua Li, Zuyi Bao, Chen Li, Min Zhang
Then, we obtain parse trees of the source incorrect sentences by projecting trees of the target correct sentences.
1 code implementation • 22 Oct 2022 • Xuefeng Bai, Sen Yang, Leyang Cui, Linfeng Song, Yue Zhang
Based on our observation, we investigate two approaches to reduce the domain distribution divergence of text and AMR features, respectively.
1 code implementation • 20 Oct 2022 • Yafu Li, Leyang Cui, Yongjing Yin, Yue Zhang
Despite low latency, non-autoregressive machine translation (NAT) suffers severe performance deterioration due to the naive independence assumption.
1 code implementation • 20 Oct 2022 • Marcos V. Conde, Radu Timofte, Yibin Huang, Jingyang Peng, Chang Chen, Cheng Li, Eduardo Pérez-Pellitero, Fenglong Song, Furui Bai, Shuai Liu, Chaoyu Feng, Xiaotao Wang, Lei Lei, Yu Zhu, Chenghua Li, Yingying Jiang, Yong A, Peisong Wang, Cong Leng, Jian Cheng, Xiaoyu Liu, Zhicun Yin, Zhilu Zhang, Junyi Li, Ming Liu, WangMeng Zuo, Jun Jiang, Jinha Kim, Yue Zhang, Beiji Zou, Zhikai Zong, Xiaoxiao Liu, Juan Marín Vega, Michael Sloth, Peter Schneider-Kamp, Richard Röttger, Furkan Kınlı, Barış Özcan, Furkan Kıraç, Li Leyi, SM Nadim Uddin, Dipon Kumar Ghosh, Yong Ju Jung
Cameras capture sensor RAW images and transform them into pleasant RGB images, suitable for the human eyes, using their integrated Image Signal Processor (ISP).
no code implementations • 19 Oct 2022 • Yue Zhang, Hongliang Fei, Dingcheng Li, Tan Yu, Ping Li
In particular, we focus on few-shot image recognition tasks on pretrained vision-language models (PVLMs) and develop a method of prompting through prototype (PTP), where we define K image prototypes and K prompt prototypes.
no code implementations • 18 Oct 2022 • Yue Zhang, Hongliang Fei, Ping Li
Specifically, we build a noise model to estimate the unknown labeling noise distribution over input contexts and noisy type labels.
no code implementations • COLING 2022 • Yongjing Yin, Yafu Li, Fandong Meng, Jie Zhou, Yue Zhang
Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks.
1 code implementation • COLING 2022 • Yue Zhang, Parisa Kordjamshidi
Understanding spatial and visual information is essential for a navigation agent who follows natural language instructions.
1 code implementation • COLING 2022 • Xuefeng Bai, Linfeng Song, Yue Zhang
However, these models are typically trained on surface dialogue text, thus are proven to be weak in understanding the main semantic meaning of a dialogue context.
no code implementations • 15 Sep 2022 • Ziqi Zhang, Yile Wang, Yue Zhang, Donglin Wang
Experimental results show that our RL pre-trained models can give close performance compared with the models using the LM training objective, showing that there exist common useful features across these two modalities.
1 code implementation • 8 Sep 2022 • Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou, Yue Zhang
Transformer-based pre-trained models have gained much advance in recent years, becoming one of the most important backbones in natural language processing.
1 code implementation • COLING 2022 • Linyi Yang, Lifan Yuan, Leyang Cui, Wenyang Gao, Yue Zhang
Few-shot Named Entity Recognition (NER) is imperative for entity tagging in limited-resource domains and has thus received increasing attention in recent years.
1 code implementation • COLING 2022 • Naihao Deng, Yulong Chen, Yue Zhang
Text-to-SQL has attracted attention from both the natural language processing and database communities because of its ability to convert the semantics in natural language into SQL queries and its practical application in building natural language interfaces to database systems.
no code implementations • 20 Aug 2022 • Yile Wang, Yue Zhang
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
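One common way to quantify such variation is self-similarity: the mean pairwise cosine similarity of one word's contextualized embeddings across contexts, where lower values indicate more contextual variation. A generic numpy sketch (illustrative only; the paper's exact metric may differ):

```python
import numpy as np

def self_similarity(vectors):
    # vectors: (num_contexts, dim) embeddings of one word sense gathered
    # from different contexts. Returns the mean pairwise cosine
    # similarity, excluding each vector's similarity with itself.
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = v @ v.T
    n = len(v)
    return (sim.sum() - n) / (n * (n - 1))
```

A perfectly context-insensitive word scores 1.0; embeddings scattered across contexts score near 0.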
1 code implementation • COLING 2022 • Yun Luo, Fang Guo, Zihan Liu, Yue Zhang
Cross-domain sentiment analysis aims to predict the sentiment of texts in the target domain using the model trained on the source domain to cope with the scarcity of labeled data.
1 code implementation • COLING 2022 • Yun Luo, Zihan Liu, Yuefeng Shi, Stan Z Li, Yue Zhang
Meanwhile, ablation studies prove the significance of each module in our model.
no code implementations • 18 Aug 2022 • Pai Liu, Wenyang Gao, Wenjie Dong, Songfang Huang, Yue Zhang
Open information extraction is an important NLP task that targets extracting structured information from unstructured text without limitations on the relation type or the domain of the text.
1 code implementation • COLING 2022 • Yidong Wang, Hao Wu, Ao Liu, Wenxin Hou, Zhen Wu, Jindong Wang, Takahiro Shinozaki, Manabu Okumura, Yue Zhang
Limited labeled data increase the risk of distribution shift between test data and training data.
4 code implementations • 12 Aug 2022 • Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou, RenJie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, Yue Zhang
We further provide the pre-trained versions of the state-of-the-art neural models for CV tasks to make the cost affordable for further tuning.
no code implementations • 10 Aug 2022 • Yingzi Fan, Longfei Han, Yue Zhang, Lechao Cheng, Chen Xia, Di Hu
The domain discrepancy induces performance degradation on target testing data for CNN models.
1 code implementation • 8 Aug 2022 • Yulong Chen, Naihao Deng, Yang Liu, Yue Zhang
We report the results of DialogSum Challenge, the shared task on summarizing real-life scenario dialogues at INLG 2022.
no code implementations • 26 Jul 2022 • Yue Zhang, Yajie Zou, Yuanchang Xie, Lei Chen
A quantitative understanding of dynamic lane-changing (LC) interaction patterns is indispensable for improving the decision-making of autonomous vehicles, especially in mixed traffic with human-driven vehicles.
1 code implementation • 13 Jul 2022 • Guangsheng Bao, Yue Zhang
The rewriting method for text summarization combines extractive and abstractive approaches, improving the conciseness and readability of extractive summaries using an abstractive model.
2 code implementations • 23 Jun 2022 • Yue Zhang, Haochen Jiang, Zuyi Bao, Bo Zhang, Chen Li, Zhenghua Li
We have accumulated 1,119 error templates for Chinese GEC based on this method.
no code implementations • Findings (ACL) 2022 • Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin
To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.
2 code implementations • 1 May 2022 • Yulong Chen, Ming Zhong, Xuefeng Bai, Naihao Deng, Jing Li, Xianchao Zhu, Yue Zhang
We propose the shared task of cross-lingual conversation summarization, the ConvSumX Challenge, opening new avenues for researchers to investigate solutions that integrate conversation summarization and machine translation.
2 code implementations • NAACL 2022 • Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei Huang, Min Zhang
This paper presents MuCGEC, a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three Chinese-as-a-Second-Language (CSL) learner sources.
1 code implementation • 23 Apr 2022 • Yizhe Xu, Tom H. Greene, Adam P. Bress, Brandon K. Bellows, Yue Zhang, Zugui Zhang, Paul Kolm, William S. Weintraub, Andrew S. Moran, Jincheng Shen
Evidence from observational studies has become increasingly important for supporting healthcare policy making via cost-effectiveness (CE) analyses.
no code implementations • 17 Apr 2022 • Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang
Pre-trained language models (PLMs) like BERT have made significant progress in various downstream NLP tasks.
1 code implementation • 15 Apr 2022 • Linyi Yang, Zhen Wang, Yuxiang Wu, Jie Yang, Yue Zhang
Understanding causality is key to the success of NLP applications, especially in high-stakes domains.
1 code implementation • COLING 2022 • Zebin Ou, Meishan Zhang, Yue Zhang
Word ordering is a constrained language generation task taking unordered words as input.
no code implementations • 14 Apr 2022 • Yun Luo, Hongjie Cai, Linyi Yang, Yanxia Qin, Rui Xia, Yue Zhang
Since previous studies on open-domain targeted sentiment analysis are limited in dataset domain variety and sentence level, we propose a novel dataset consisting of 6,013 human-labeled data to extend the data domains in topics of interest and document level.
1 code implementation • ACL 2022 • Jinghui Lu, Linyi Yang, Brian Mac Namee, Yue Zhang
We present a novel rationale-centric framework with human-in-the-loop -- Rationales-centric Double-robustness Learning (RDL) -- to boost model out-of-distribution performance in few-shot learning scenarios.
1 code implementation • 16 Mar 2022 • ZiFan Chen, Jie Zhao, Hao Yu, Yue Zhang, Li Zhang
Accurate and efficient lumbar spine disease identification is crucial for clinical diagnosis.
2 code implementations • ACL 2022 • Xuefeng Bai, Yulong Chen, Yue Zhang
To our knowledge, we are the first to consider pre-training on semantic graphs.
Ranked #1 on AMR-to-Text Generation on Bio (BLEU metric, using extra training data)
no code implementations • 7 Mar 2022 • Leyang Cui, Fandong Meng, Yijin Liu, Jie Zhou, Yue Zhang
Although pre-trained sequence-to-sequence models have achieved great success in dialogue response generation, chatbots still suffer from generating inconsistent responses in real-world practice, especially in multi-turn settings.
no code implementations • 2 Mar 2022 • Sen Yang, Yunchen Zhang, Leyang Cui, Yue Zhang
Thanks to rapid advances in large pre-trained language models, prompt-based fine-tuning has been shown to be effective on a variety of downstream tasks.
no code implementations • 10 Feb 2022 • Yulong Chen, Yang Liu, Li Dong, Shuohang Wang, Chenguang Zhu, Michael Zeng, Yue Zhang
However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining.
no code implementations • 9 Feb 2022 • Jian Zhao, Yue Zhang, Xunhan Hu, Weixun Wang, Wengang Zhou, Jianye Hao, Jiangcheng Zhu, Houqiang Li
In cooperative multi-agent systems, agents jointly take actions and receive a team reward instead of individual rewards.
no code implementations • 5 Jan 2022 • Linyi Yang, Jiazheng Li, Ruihai Dong, Yue Zhang, Barry Smyth
Financial forecasting has been an important and active area of machine learning research because of the challenges it presents and the potential rewards that even minor improvements in prediction accuracy or forecasting may entail.
2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.
no code implementations • IEEE 2021 • Hao Fei, Yafeng Ren, Yue Zhang, Donghong Ji
Aspect-based sentiment triplet extraction (ASTE) aims at recognizing the joint triplets from texts, i.e., aspect terms, opinion expressions, and correlated sentiment polarities.
Ranked #3 on Aspect Sentiment Triplet Extraction on ASTE-Data-V2
no code implementations • 30 Oct 2021 • Yanrui Niu, Jingyao Yang, Ankang Lu, Baojin Huang, Yue Zhang, Ji Huang, Shishi Wen, Dongshu Xu, Chao Liang, Zhongyuan Wang, Jun Chen
In this paper, we briefly introduce the experimental methods and results of the WHU-NERCMS team in TRECVID 2021.
1 code implementation • 23 Oct 2021 • Yue Zhang, Chao Liang, Longxiang Jiang
To address this issue, we propose a confidence-aware active feedback method (CAAF) that is specifically designed for online RF in interactive INS tasks.
no code implementations • EMNLP 2021 • Yue Zhang, Bo Zhang, Rui Wang, Junjie Cao, Chen Li, Zuyi Bao
Previous works on key information extraction from visually rich documents (VRDs) mainly focus on labeling the text within each bounding box (i.e., semantic entity), while the relations in-between are largely unexplored.
Ranked #4 on Entity Linking on FUNSD
1 code implementation • 15 Oct 2021 • Mohsen Yavartanoo, Shih-Hsuan Hung, Reyhaneh Neshatavar, Yue Zhang, Kyoung Mu Lee
3D shape representation and its processing have substantial effects on 3D shape recognition.
Ranked #1 on 3D Object Classification on ModelNet10
1 code implementation • EMNLP 2021 • Jian Liu, Zhiyang Teng, Leyang Cui, Hanmeng Liu, Yue Zhang
Aspect category sentiment analysis has attracted increasing research attention.
no code implementations • 29 Sep 2021 • Xinbo Zhang, Changzhi Sun, Yue Zhang, Lei LI, Hao Zhou
Logical reasoning over natural text is an important capability towards human level intelligence.
1 code implementation • ACL 2022 • Leyang Cui, Sen Yang, Yue Zhang
Besides, our method achieves state-of-the-art BERT-based performance on PTB (95.92 F1) and strong performance on CTB (92.31 F1).
Ranked #6 on Constituency Parsing on Penn Treebank
no code implementations • 14 Sep 2021 • Pinyuan Zhong, Yue Zhang, Xiaoying Tang
The hippocampal surface was then generated from the mean shape and the shape variation parameters.
1 code implementation • EMNLP 2021 • Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang
To deal with this problem, instead of introducing knowledge base as the input, we force the model to learn a better semantic representation by predicting the information in the knowledge base, only based on the input context.
1 code implementation • EMNLP 2021 • Leonardo F. R. Ribeiro, Jonas Pfeiffer, Yue Zhang, Iryna Gurevych
Recent work on multilingual AMR-to-text generation has exclusively focused on data augmentation strategies that utilize silver AMR.
no code implementations • 15 Aug 2021 • Cunxiang Wang, Boyuan Zheng, Yuchen Niu, Yue Zhang
To quantitatively and intuitively explore the generalization ability of pre-trained language models (PLMs), we have designed several tasks of arithmetic and logical reasoning.
no code implementations • 2 Aug 2021 • Yue Zhang, Chengtao Peng, Liying Peng, Huimin Huang, Ruofeng Tong, Lanfen Lin, Jingsong Li, Yen-Wei Chen, Qingqing Chen, Hongjie Hu, Zhiyi Peng
In this work, we propose a novel LiTS method to adequately aggregate multi-phase information and refine uncertain region segmentation.
1 code implementation • ACL 2021 • Qiankun Fu, Linfeng Song, Wenyu Du, Yue Zhang
Although parsing to Abstract Meaning Representation (AMR) has become very popular and AMR has been shown effective on many sentence-level downstream tasks, little work has studied how to generate AMRs that can represent multi-sentence information.
no code implementations • 31 Jul 2021 • Yue Zhang, Yajie Zou, Lingtao Wu, Wanbing Han
This study develops a primitive-based framework to identify the driving patterns during merging processes and reveal the evolutionary mechanism at freeway on-ramps in congested traffic flow.
1 code implementation • 3 Jul 2021 • Yue Jin, Yue Zhang, Tao Qin, Xudong Zhang, Jian Yuan, Houqiang Li, Tie-Yan Liu
Inspired by the two observations, in this work, we study a new problem, supervised off-policy ranking (SOPR), which aims to rank a set of target policies based on supervised learning by leveraging off-policy data and policies with known performance.
no code implementations • 29 Jun 2021 • Hongxiu Zhao, Xun Zhang, Faouzi Bader, Yue Zhang
Many visible light positioning (VLP) localization algorithms do not consider the shapes of the transmitters, which makes them impractical and lowers localization accuracy.
1 code implementation • ACL 2021 • Linyi Yang, Jiazheng Li, Pádraig Cunningham, Yue Zhang, Barry Smyth, Ruihai Dong
While state-of-the-art NLP models have achieved excellent performance on a wide range of tasks in recent years, important questions are being raised about their robustness and their underlying sensitivity to systematic biases that may exist in their training and test data.
1 code implementation • ACL 2021 • Cunxiang Wang, Pai Liu, Yue Zhang
Recent work has investigated the interesting question of using pre-trained language models (PLMs) as knowledge bases for answering open questions.
1 code implementation • Findings (ACL) 2021 • Leyang Cui, Yu Wu, Jian Liu, Sen Yang, Yue Zhang
To address the issue, we propose a template-based method for NER, treating NER as a language model ranking problem in a sequence-to-sequence framework, where original sentences and statement templates filled by candidate named entity span are regarded as the source sequence and the target sequence, respectively.
1 code implementation • NAACL 2021 • Qingrong Xia, Bo Zhang, Rui Wang, Zhenghua Li, Yue Zhang, Fei Huang, Luo Si, Min Zhang
Fine-grained opinion mining (OM) has attracted increasing attention in the natural language processing (NLP) community, which aims to find the opinion structures of “Who expressed what opinions towards what” in one sentence.
1 code implementation • ACL 2021 • Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang
Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks such as WMT.
1 code implementation • ACL 2021 • Guangsheng Bao, Yue Zhang, Zhiyang Teng, Boxing Chen, Weihua Luo
However, studies show that when we further enlarge the translation unit to a whole document, supervised training of Transformer can fail.