no code implementations • EMNLP 2021 • Wei Zhu, Xiaoling Wang, Yuan Ni, Guotong Xie
From this observation, we use mutual learning to improve BERT's early exiting performance; that is, each exit of a multi-exit BERT distills knowledge from the other exits.
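The abstract above only names the idea; one common way to realize mutual learning among exits is an average pairwise KL term between the exits' predictive distributions. The sketch below is a hypothetical minimal version (the function names and the symmetric pairwise-KL formulation are assumptions, not the paper's exact loss):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    # KL(p || q) for two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_learning_loss(exit_logits):
    # Average pairwise KL between the predictive distributions of all
    # exits, so that every exit distills from every other exit.
    dists = [softmax(l) for l in exit_logits]
    n = len(dists)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += kl_divergence(dists[i], dists[j])
                count += 1
    return total / count
```

When all exits agree, the loss is zero; disagreement between exits yields a positive penalty that each exit can reduce by matching the others.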
no code implementations • EMNLP 2021 • Tao Ji, Yong Jiang, Tao Wang, Zhongqiang Huang, Fei Huang, Yuanbin Wu, Xiaoling Wang
Transition systems usually contain various dynamic structures (e.g., stacks, buffers).
no code implementations • EMNLP 2021 • Tao Ji, Yong Jiang, Tao Wang, Zhongqiang Huang, Fei Huang, Yuanbin Wu, Xiaoling Wang
Adapting word order from one language to another is a key problem in cross-lingual structured prediction.
1 code implementation • Findings (NAACL) 2022 • Senhui Zhang, Tao Ji, Wendi Ji, Xiaoling Wang
Event detection is a classic natural language processing task.
no code implementations • NAACL (BioNLP) 2021 • Wei Zhu, Yilong He, Ling Chai, Yunxiao Fan, Yuan Ni, Guotong Xie, Xiaoling Wang
First, a RoBERTa model is applied to give a local ranking of the candidate sentences.
no code implementations • 15 May 2024 • Xin Yi, Shunfan Zheng, LinLin Wang, Xiaoling Wang, Liang He
We validate that our safety realignment framework satisfies the safety requirements of a single fine-tuned model as well as multiple models during their fusion.
1 code implementation • 8 May 2024 • Yihan Mei, Xinyu Wang, Dell Zhang, Xiaoling Wang
Our findings indicate that the application of spectral normalization to joint energy scores notably amplifies the model's capability for OOD detection.
Out-of-Distribution (OOD) Detection
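The entry above mentions energy scores for OOD detection; a widely used form is the free-energy score derived from the model's logits. The snippet below is a generic sketch of that scoring rule only (the paper's joint energy and spectral-normalization setup is not reproduced here, and the threshold is a hypothetical validation-tuned value):

```python
import math

def energy_score(logits, temperature=1.0):
    # Free-energy OOD score: E(x) = -T * logsumexp(logits / T).
    # Computed with the max-shift trick for numerical stability.
    m = max(x / temperature for x in logits)
    lse = m + math.log(sum(math.exp(x / temperature - m) for x in logits))
    return -temperature * lse

def is_ood(logits, threshold):
    # Flag a sample as out-of-distribution when its energy exceeds a
    # threshold tuned on held-out in-distribution data (assumed here).
    return energy_score(logits) > threshold
```

Confident (peaked) logits produce lower energy than diffuse logits, which is what makes the score usable as an OOD detector.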
1 code implementation • 18 Apr 2024 • Jie Wang, Tao Ji, Yuanbin Wu, Hang Yan, Tao Gui, Qi Zhang, Xuanjing Huang, Xiaoling Wang
Generalizing to longer sentences is important for recent Transformer-based language models.
no code implementations • 23 Feb 2024 • Xin Yi, LinLin Wang, Xiaoling Wang, Liang He
In this paper, we propose fine-grained detoxification via instance-level prefixes (FGDILP) to mitigate toxic text without additional cost.
no code implementations • 5 Jan 2024 • Chuyun Shen, Wenhao Li, Haoqing Chen, Xiaoling Wang, Fengping Zhu, Yuxin Li, Xiangfeng Wang, Bo Jin
CIML adopts the idea of addition and removes inter-modal redundant information through inductive bias-driven task decomposition and message passing-based redundancy filtering.
1 code implementation • 4 Jan 2024 • Wei Zhu, Wenfeng Li, Xing Tian, Pengfei Wang, Xiaoling Wang, Jin Chen, Yuanbin Wu, Yuan Ni, Guotong Xie
In this work, we propose a novel task, Text2MDT, to explore the automatic extraction of MDTs from medical texts such as medical guidelines and textbooks.
1 code implementation • 29 Dec 2023 • Wei Zhu, Xiaoling Wang, Mosha Chen, Buzhou Tang
Many teams from both industry and academia participated in the shared tasks, and the top teams achieved strong test results.
1 code implementation • 22 Oct 2023 • Wei Zhu, Xiaoling Wang, Huanran Zheng, Mosha Chen, Buzhou Tang
Biomedical language understanding benchmarks are the driving forces for artificial intelligence applications with large language model (LLM) back-ends.
1 code implementation • 7 May 2023 • Xiaonan Li, Kai Lv, Hang Yan, Tianyang Lin, Wei Zhu, Yuan Ni, Guotong Xie, Xiaoling Wang, Xipeng Qiu
To train UDR, we cast the training signals of various tasks into a unified list-wise ranking formulation based on the language model's feedback.
1 code implementation • 27 Jan 2023 • Huanran Zheng, Wei Zhu, Pengfei Wang, Xiaoling Wang
In this paper, we propose a simple but effective method called "Candidate Soups," which can obtain high-quality translations while maintaining the inference speed of NAT models.
no code implementations • 14 Aug 2022 • Wenyan Liu, Juncheng Wan, Xiaoling Wang, Weinan Zhang, Dell Zhang, Hang Li
In this paper, we investigate fast machine unlearning techniques for recommender systems that can remove the effect of a small amount of training data from the recommendation model without incurring the full cost of retraining.
no code implementations • 5 Apr 2022 • Jiahao Yuan, Wendi Ji, Dell Zhang, Jinwei Pan, Xiaoling Wang
Specifically, we identify two different patterns of micro-behaviors: "sequential patterns" and "dyadic relational patterns".
no code implementations • 24 Feb 2022 • Jiahao Yuan, Zhao Li, Pengcheng Zou, Xuan Gao, Jinwei Pan, Wendi Ji, Xiaoling Wang
In online shopping, ever-changing fashion trends require merchants to prepare more differentiated products to meet diversified demands, while e-commerce platforms need to capture market trends with a prophetic vision.
no code implementations • NAACL 2021 • Wei Zhu, Yuan Ni, Xiaoling Wang, Guotong Xie
In developing an online question-answering system for the medical domains, natural language inference (NLI) models play a central role in question matching and intention detection.
no code implementations • 1 Jan 2021 • Wenyan Liu, Xiangfeng Wang, Xingjian Lu, Junhong Cheng, Bo Jin, Xiaoling Wang, Hongyuan Zha
This paper proposes a fair differential privacy algorithm (FairDP) to mitigate the disparate impact on model accuracy for each class.
1 code implementation • 3 Nov 2020 • Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, Yingqian Min, Zhichao Feng, Xinyan Fan, Xu Chen, Pengfei Wang, Wendi Ji, Yaliang Li, Xiaoling Wang, Ji-Rong Wen
In this library, we implement 73 recommendation models on 28 benchmark datasets, covering the categories of general recommendation, sequential recommendation, context-aware recommendation and knowledge-based recommendation.
3 code implementations • 4 Sep 2020 • Wei Zhu, Xiaoling Wang, Xipeng Qiu, Yuan Ni, Guotong Xie
Though transformer architectures have shown dominance in many natural language understanding tasks, unsolved issues remain in training transformer models, especially the need for a principled warm-up scheme, which has proven important for stable training, and the question of whether the task at hand prefers a scaled attention product or not.
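The scaling question raised above refers to dividing the query-key dot products by sqrt(d) before the softmax. The sketch below illustrates the toggle for a single query (a simplified illustration, not the paper's proposed variant):

```python
import math

def attention_weights(query, keys, scale=True):
    # Dot-product attention weights for one query against a set of keys,
    # optionally scaled by 1/sqrt(d) as in the original Transformer.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    if scale:
        scores = [s / math.sqrt(d) for s in scores]
    # Softmax over the (possibly scaled) scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Without scaling, large dot products push the softmax into a near-one-hot regime, which is one reason the scaled and unscaled variants can behave differently during training.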
no code implementations • 18 May 2020 • Wendi Ji, Keqiang Wang, Xiaoling Wang, TingWei Chen, Alexandra Cristea
Recommendation systems aim to help users discover their most preferred content from an ever-growing corpus of items.
no code implementations • 20 Feb 2020 • Yuanyuan Jin, Wei Zhang, Xiangnan He, Xinyu Wang, Xiaoling Wang
Given a set of symptoms to treat, we aim to generate an overall syndrome representation by effectively fusing the embeddings of all the symptoms in the set, to mimic how a doctor induces the syndromes.
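The fusion described above maps a set of symptom embeddings to one syndrome representation. As a point of reference only, the simplest order-invariant baseline for such set fusion is mean pooling; the paper's actual fusion mechanism is not specified in this snippet, so the function below is an assumption-labeled sketch:

```python
def fuse_symptom_embeddings(embeddings):
    # Mean-pool a set of symptom embeddings into a single syndrome
    # representation. Order-invariant, so the symptom set behaves as a set.
    n = len(embeddings)
    d = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / n for i in range(d)]
```

More expressive fusions (e.g., attention-weighted pooling) keep the same set-to-vector signature, so this baseline is a useful reference point.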
no code implementations • TACL 2018 • Dell Zhang, Jiahao Yuan, Xiaoling Wang, Adam Foster
In data-to-text Natural Language Generation (NLG) systems, computers need to find the right words to describe phenomena seen in the data.
no code implementations • 19 Nov 2015 • Ting Peng, Aiping Qu, Xiaoling Wang
Then, the whole particle image is split into small rectangles of the same shape.