no code implementations • NAACL (NLP4IF) 2021 • Chaoyuan Zuo, Qi Zhang, Ritwik Banerjee
We present a health news classification task to determine whether medical news articles satisfy a set of review criteria deemed important by medical experts and health care journalists.
no code implementations • COLING 2022 • Rui Zheng, Rong Bao, Qin Liu, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu
To reduce the potential side effects of using defense modules, we further propose a novel forgetting restricted adversarial training, which filters out bad adversarial examples that impair the performance of original ones.
1 code implementation • COLING 2022 • Lei Chen, Guanying Li, Zhongyu Wei, Yang Yang, Baohua Zhou, Qi Zhang, Xuanjing Huang
Existing works on rumor resolution have shown great potential in recognizing word appearance and user participation.
1 code implementation • ACL 2022 • Zichu Fei, Qi Zhang, Tao Gui, Di Liang, Sirui Wang, Wei Wu, Xuanjing Huang
CQG employs a simple method to generate multi-hop questions that contain key entities in multi-hop reasoning chains, ensuring the complexity and quality of the questions.
2 code implementations • ACL 2022 • Qin Liu, Rui Zheng, Rong Bao, Jingyi Liu, Zhihua Liu, Zhanzhan Cheng, Liang Qiao, Tao Gui, Qi Zhang, Xuanjing Huang
Adversarial robustness has attracted much attention recently, and the mainstream solution is adversarial training.
no code implementations • COLING 2022 • Zichu Fei, Xin Zhou, Tao Gui, Qi Zhang, Xuanjing Huang
Existing KBQG models still face two main challenges: (1) Most models often focus on the most relevant part of the answer entity, while neglecting the rest of the subgraph.
1 code implementation • EMNLP 2020 • Siyuan Wang, Zhongyu Wei, Zhihao Fan, Zengfeng Huang, Weijian Sun, Qi Zhang, Xuanjing Huang
Human evaluation also proves that our model is able to generate relevant and informative questions.
no code implementations • COLING 2022 • Jun Zhao, Xin Zhao, WenYu Zhan, Tao Gui, Qi Zhang, Liang Qiao, Zhanzhan Cheng, ShiLiang Pu
To deal with this problem, this work proposes a cross-document semantic enhancement method, which consists of two modules: 1) To prevent distractions from irrelevant regions in the current document, we design a learnable attention mask mechanism, which is used to adaptively filter redundant information in the current document.
1 code implementation • Findings (EMNLP) 2021 • Qinzhuo Wu, Qi Zhang, Zhongyu Wei
Specifically, an edge-enhanced hierarchical graph encoder is used to incorporate edge label information.
1 code implementation • EMNLP 2021 • Zichu Fei, Qi Zhang, Yaqian Zhou
However, (1) they ignore the rich structure information that is hidden in the previously generated text.
no code implementations • SemEval (NAACL) 2022 • Qi Zhang, Jie zhou, Qin Chen, Qingchun Bai, Jun Xiao, Liang He
The task aims to extract the structured sentiment information (e.g., holder, target, expression and sentiment polarity) in a text.
no code implementations • EMNLP 2020 • Qinzhuo Wu, Qi Zhang, Jinlan Fu, Xuanjing Huang
With the advancements in natural language processing tasks, math word problem solving has received increasing attention.
1 code implementation • COLING 2022 • Yinzi Li, Wei Chen, Zhongyu Wei, Yujun Huang, Chujun Wang, Siyuan Wang, Qi Zhang, Xuanjing Huang, Libo Wu
Existing research for argument representation learning mainly treats tokens in the sentence equally and ignores the implied structure information of argumentative context.
1 code implementation • COLING 2022 • Xin Zhou, Ruotian Ma, Yicheng Zou, Xuanting Chen, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu
Specifically, we re-formulate both token and sentence classification tasks into a unified language modeling task, and map label spaces of different tasks into the same vocabulary space.
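As a rough illustration of this reformulation, the sketch below maps task-specific labels from different classification tasks into a shared vocabulary so that every task reduces to predicting a word. The task names, labels, and verbalizer words here are hypothetical examples, not the paper's actual mapping.

```python
# Hypothetical verbalizer: map heterogeneous label spaces into one vocabulary,
# so both sentence-level and token-level classification become language modeling.
LABEL_VERBALIZERS = {
    "sentiment": {"positive": "good", "negative": "bad"},    # sentence classification
    "ner":       {"PER": "person", "LOC": "location"},       # token classification
}

def to_lm_target(task: str, label: str) -> str:
    """Map a task-specific label to the shared vocabulary word the LM predicts."""
    return LABEL_VERBALIZERS[task][label]

# Both tasks now share one output space: words of the vocabulary.
print(to_lm_target("sentiment", "positive"))
print(to_lm_target("ner", "LOC"))
```

With this mapping in place, training on a new task only adds rows to the verbalizer rather than a new classification head.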
no code implementations • 10 May 2024 • Yunqian Fan, Xiuying Wei, Ruihao Gong, Yuqing Ma, Xiangguo Zhang, Qi Zhang, Xianglong Liu
In this paper, we present the first investigation of semantic sensitivity to post-processing for lane detection, via a novel Lane Distortion Score.
no code implementations • 1 May 2024 • Shihan Dou, Yan Liu, Enyu Zhou, Tianlong Li, Haoxiang Jia, Limao Xiong, Xin Zhao, Junjie Ye, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
These two issues can be united as a challenge posed by the shifted distribution of the environment.
no code implementations • 27 Apr 2024 • Dapeng Li, Hang Dong, Lu Wang, Bo Qiao, Si Qin, Qingwei Lin, Dongmei Zhang, Qi Zhang, Zhiwei Xu, Bin Zhang, Guoliang Fan
The entire framework has a message module and an action module.
no code implementations • 26 Apr 2024 • Yinghan Cheng, Qi Zhang, Chongyang Shi, Liang Xiao, Shufeng Hao, Liang Hu
To address these challenges, we present a novel collaborative stance detection framework called CoSD, which leverages contrastive heterogeneous topic graph learning to learn topic-aware semantics and collaborative signals among texts, topics, and stance labels for enhancing stance detection.
no code implementations • 24 Apr 2024 • Qi Zhang, Weihua Xu, Lei Xie, Hongye Su
Electrolytic hydrogen production serves as not only a vital source of green hydrogen but also a key strategy for addressing renewable energy consumption challenges.
no code implementations • 20 Apr 2024 • Guangyin Bao, Zixuan Gong, Qi Zhang, Jialei Zhou, Wei Fan, Kun Yi, Usman Naseem, Liang Hu, Duoqian Miao
We meticulously evaluate the performance of our approach across coarse-grained and fine-grained visual decoding tasks.
no code implementations • 19 Apr 2024 • Zixuan Gong, Qi Zhang, Guangyin Bao, Lei Zhu, Ke Liu, Liang Hu, Duoqian Miao
Decoding natural visual scenes from brain activity has flourished, with extensive research in single-subject tasks but less in cross-subject tasks.
1 code implementation • 18 Apr 2024 • Jie Wang, Tao Ji, Yuanbin Wu, Hang Yan, Tao Gui, Qi Zhang, Xuanjing Huang, Xiaoling Wang
Generalizing to longer sentences is important for recent Transformer-based language models.
no code implementations • 16 Apr 2024 • Xiao Wang, Tianze Chen, Xianjun Yang, Qi Zhang, Xun Zhao, Dahua Lin
The open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress.
no code implementations • 15 Apr 2024 • Qi Zhang, Lei Xie, Weihua Xu, Hongye Su
A novel robust dynamic variational Bayesian dictionary learning (RDVDL) monitoring approach is proposed to improve the reliability and safety of AWE operation.
no code implementations • 15 Apr 2024 • Qi Zhang, Lei Wang, Weihua Xu, Hongye Su, Lei Xie
Variational inference is used by NSVB-MPC to assess the predictive accuracy and make the necessary corrections to quantify system uncertainty.
no code implementations • 10 Apr 2024 • Qi Zhang, Bing Li, Lingzhou Xue
Motivated by modern data forms such as images and multi-view data, the multi-attribute graphical model aims to explore the conditional independence structure among vectors.
no code implementations • 1 Apr 2024 • Qi Zhang, Yi Zhou, Ashley Prater-Bennette, Lixin Shen, Shaofeng Zou
We prove that our algorithm finds an $\epsilon$-stationary point with a computational complexity of $\mathcal O(\epsilon^{-3k_*-5})$, where $k_*$ is the parameter of the Cressie-Read divergence.
1 code implementation • 1 Apr 2024 • wei he, Shichun Liu, Jun Zhao, Yiwen Ding, Yi Lu, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang
The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID.
no code implementations • 1 Apr 2024 • Qi Zhang, Yi Zhou, Shaofeng Zou
Specifically, to address the challenges arising from the dependence among the adaptive update, the unbounded gradient estimate, and the Lipschitz constant, we demonstrate that the first-order term in the descent lemma converges and that its denominator is upper bounded by a function of the gradient norm.
no code implementations • 28 Mar 2024 • Qi Zhang, Guang Wang, Li Lin, Kaiwen Xia, Shuai Wang
With the advent of the era of big data, massive information, expert experience, and high-accuracy models bring great opportunities to the information cascade prediction of public emergencies.
no code implementations • 24 Mar 2024 • Rui Zheng, Yuhao Zhou, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang
We first show empirically that the features of clean signals and of adversarial perturbations are redundant and span low-dimensional linear subspaces with minimal overlap, and that classical low-dimensional subspace projection can suppress perturbation features that lie outside the subspace of clean signals.
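The subspace-projection idea described above can be sketched in a few lines of NumPy. This is our reading of the technique, not the authors' code: estimate a low-dimensional basis from clean feature vectors via SVD, then orthogonally project any input onto that basis, discarding perturbation energy outside the clean subspace. The dimensions and data are synthetic.

```python
import numpy as np

def fit_clean_subspace(clean_feats: np.ndarray, k: int) -> np.ndarray:
    """Return an orthonormal basis (d x k) spanning the top-k clean-signal directions."""
    centered = clean_feats - clean_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T

def project(feat: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Orthogonal projection of a feature vector onto the clean subspace."""
    return basis @ (basis.T @ feat)

rng = np.random.default_rng(0)
# Synthetic clean features: variance concentrated in the first 3 directions.
clean = rng.normal(size=(200, 8)) @ np.diag([5, 4, 3, 0.1, 0.1, 0.1, 0.1, 0.1])
basis = fit_clean_subspace(clean, k=3)

x = clean[0] + rng.normal(scale=2.0, size=8)  # clean signal + perturbation
x_proj = project(x, basis)
# x_proj retains the dominant clean-signal directions; the discarded residual
# mostly contains the perturbation lying outside the clean subspace.
```

Because the projection is orthogonal, it can only shrink the perturbation components outside the basis; components inside the clean subspace pass through unchanged.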
1 code implementation • 19 Mar 2024 • Yifei Wang, Qi Zhang, Yaoyu Guo, Yisen Wang
In this paper, we propose Non-negative Contrastive Learning (NCL), a renaissance of Non-negative Matrix Factorization (NMF) aimed at deriving interpretable features.
1 code implementation • 18 Mar 2024 • Jun Lei, Yuxi Zhou, Xue Tian, Qinghao Zhao, Qi Zhang, Shijia Geng, Qingbo Wu, Shenda Hong
By employing 150 beats in the information fusion decision algorithm, the average AUC can reach 0.7591.
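A minimal sketch of such a beat-level fusion decision, under our assumption that per-beat probabilities are averaged and thresholded (the fusion rule, threshold, and probabilities below are illustrative, not the paper's exact algorithm):

```python
# Hypothetical fusion rule: average per-beat positive probabilities over a
# window of beats (150 in the paper) and threshold the mean.
def fuse_beats(beat_probs, threshold=0.5):
    """Fuse per-beat probabilities into one record-level (mean, decision) pair."""
    mean_prob = sum(beat_probs) / len(beat_probs)
    return mean_prob, mean_prob >= threshold

probs = [0.6, 0.7, 0.4, 0.8, 0.65]   # e.g. 5 of the 150 beats
mean_prob, positive = fuse_beats(probs)
```

Averaging over many beats reduces the variance of individual beat predictions, which is presumably what drives the AUC gain reported above.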
no code implementations • 18 Mar 2024 • Yujiao Jiang, Qingmin Liao, Xiaoyu Li, Li Ma, Qi Zhang, Chaopeng Zhang, Zongqing Lu, Ying Shan
Therefore, we propose UV Gaussians, which models the 3D human body by jointly learning mesh deformations and 2D UV-space Gaussian textures.
1 code implementation • 18 Mar 2024 • Weikang Zhou, Xiao Wang, Limao Xiong, Han Xia, Yingshuang Gu, Mingxu Chai, Fukang Zhu, Caishuang Huang, Shihan Dou, Zhiheng Xi, Rui Zheng, Songyang Gao, Yicheng Zou, Hang Yan, Yifan Le, Ruohui Wang, Lijun Li, Jing Shao, Tao Gui, Qi Zhang, Xuanjing Huang
This paper introduces EasyJailbreak, a unified framework simplifying the construction and evaluation of jailbreak attacks against LLMs.
no code implementations • 17 Mar 2024 • Zhihao Liang, Qi Zhang, WenBo Hu, Ying Feng, Lei Zhu, Kui Jia
This is because 3DGS treats each pixel as an isolated, single point rather than as an area, causing insensitivity to changes in the footprints of pixels.
no code implementations • 13 Mar 2024 • Sitao Cheng, Ziyuan Zhuang, Yong Xu, Fangkai Yang, Chaoyun Zhang, Xiaoting Qin, Xiang Huang, Ling Chen, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, Qi Zhang
We instantiate the path on structured environments and provide feedback to edit the path if anything goes wrong.
1 code implementation • 28 Feb 2024 • Shuhua Shi, Shaohan Huang, Minghui Song, Zhoujun Li, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
As one of the most popular parameter-efficient fine-tuning (PEFT) methods, low-rank adaptation (LoRA) is commonly applied to fine-tune large language models (LLMs).
no code implementations • 27 Feb 2024 • Kaikai An, Fangkai Yang, Junting Lu, Liqun Li, Zhixing Ren, Hao Huang, Lu Wang, Pu Zhao, Yu Kang, Hua Ding, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
Effective incident management is pivotal for the smooth operation of enterprise-level cloud services.
no code implementations • 27 Feb 2024 • Qi Zhang, Yiming Zhang, Haobo Wang, Junbo Zhao
When it comes to datasets synthesized by LLMs, a common scenario in this field, dirty samples will even be selected with a higher probability than other samples.
no code implementations • 26 Feb 2024 • Yuansen Zhang, Xiao Wang, Zhiheng Xi, Han Xia, Tao Gui, Qi Zhang, Xuanjing Huang
In this paper, drawing inspiration from recent works that LLMs are sensitive to the design of the instructions, we utilize instructions in code style, which are more structural and less ambiguous, to replace typically natural language instructions.
1 code implementation • 26 Feb 2024 • Huijie Lv, Xiao Wang, Yuansen Zhang, Caishuang Huang, Shihan Dou, Junjie Ye, Tao Gui, Qi Zhang, Xuanjing Huang
Adversarial misuse, particularly through 'jailbreaking' that circumvents a model's safety and ethical protocols, poses a significant challenge for Large Language Models (LLMs).
no code implementations • 24 Feb 2024 • Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
Large language models (LLMs) have emerged as a promising alternative to expensive human evaluations.
no code implementations • 23 Feb 2024 • Kun Yi, Qi Zhang, Hui He, Kaize Shi, Liang Hu, Ning An, Zhendong Niu
Multivariate time series (MTS) forecasting is crucial in many real-world applications.
no code implementations • 22 Feb 2024 • Zhihao Zhang, Jun Zhao, Qi Zhang, Tao Gui, Xuanjing Huang
Furthermore, this core region exhibits significant dimensional dependency, with perturbations to even a single parameter on specific dimensions leading to a loss of linguistic competence.
no code implementations • 22 Feb 2024 • Siyin Wang, Jie zhou, Qin Chen, Qi Zhang, Tao Gui, Xuanjing Huang
Domain adaptation has been widely adopted for cross-domain sentiment analysis to transfer knowledge from the source domain to the target domain.
no code implementations • 22 Feb 2024 • Junjie Ye, Nuo Xu, Yikun Wang, Jie zhou, Qi Zhang, Tao Gui, Xuanjing Huang
To overcome the limitations of existing data augmentation methods that compromise semantic integrity and address the uncertainty inherent in LLM-generated text, we leverage the distinctive characteristics of the NER task by augmenting the original data at both the contextual and entity levels.
1 code implementation • 22 Feb 2024 • Ningyu Xu, Qi Zhang, Menghan Zhang, Peng Qian, Xuanjing Huang
Here we re-purpose the reverse dictionary task as a case study to probe LLMs' capacity for conceptual inference.
no code implementations • 21 Feb 2024 • Haoyu Liu, Jianfeng Liu, Shaohan Huang, Yuefeng Zhan, Hao Sun, Weiwei Deng, Furu Wei, Qi Zhang
The remarkable capability of large language models (LLMs) for in-context learning (ICL) needs to be activated by demonstration examples.
no code implementations • 19 Feb 2024 • Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
Diffusion models have demonstrated exceptional capability in generating high-quality images, videos, and audio.
no code implementations • 18 Feb 2024 • Liang Xiao, Qi Zhang, Chongyang Shi, Shoujin Wang, Usman Naseem, Liang Hu
These existing methods fail to handle the complex, subtle twists in news articles, such as syntax-semantics mismatches and prior biases, leading to lower performance and potential failure when modalities or social context are missing.
1 code implementation • 18 Feb 2024 • Jun Zhao, Can Zu, Hao Xu, Yi Lu, wei he, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang
Large language models (LLMs) have demonstrated impressive performance in understanding language and executing complex reasoning tasks.
no code implementations • 18 Feb 2024 • Nuo Xu, Jun Zhao, Can Zu, Sixian Li, Lu Chen, Zhihao Zhang, Rui Zheng, Shihan Dou, Wenjuan Qin, Tao Gui, Qi Zhang, Xuanjing Huang
To address this issue, we propose a cost-effective preference learning strategy, optimizing reward models by distinguishing between human and machine translations.
no code implementations • 18 Feb 2024 • Hanshuang Tong, Jun Li, Ning Wu, Ming Gong, Dongmei Zhang, Qi Zhang
Recent advancements in large language models (LLMs) have opened new pathways for many domains.
1 code implementation • 16 Feb 2024 • Junjie Ye, Sixian Li, Guanyu Li, Caishuang Huang, Songyang Gao, Yilong Wu, Qi Zhang, Tao Gui, Xuanjing Huang
Tool learning is widely acknowledged as a foundational approach for deploying large language models (LLMs) in real-world scenarios.
1 code implementation • 16 Feb 2024 • Yi Lu, Xin Zhou, wei he, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang
Instead of allowing each head to attend to the full sentence, which struggles with generalizing to longer sequences due to out-of-distribution (OOD) issues, we allow each head to process in-distribution length by selecting and attending to important context chunks.
no code implementations • 13 Feb 2024 • Jin Li, Shoujin Wang, Qi Zhang, Longbing Cao, Fang Chen, Xiuzhen Zhang, Dietmar Jannach, Charu C. Aggarwal
However, emerging vulnerabilities in RS have catalyzed a paradigm shift towards Trustworthy RS (TRS).
1 code implementation • 8 Feb 2024 • Chaoyun Zhang, Liqun Li, Shilin He, Xu Zhang, Bo Qiao, Si Qin, Minghua Ma, Yu Kang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
We introduce UFO, an innovative UI-Focused agent to fulfill user requests tailored to applications on Windows OS, harnessing the capabilities of GPT-Vision.
1 code implementation • 8 Feb 2024 • Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, wei he, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, Honglin Guo, Wei Shen, Xiaoran Fan, Yuhao Zhou, Shihan Dou, Xiao Wang, Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang
In this paper, we propose R$^3$: Learning Reasoning through Reverse Curriculum Reinforcement Learning (RL), a novel method that employs only outcome supervision to achieve the benefits of process supervision for large language models.
no code implementations • 4 Feb 2024 • Chong Zhang, Yixi Zhao, Chenshu Yuan, Yi Tu, Ya Guo, Qi Zhang
Therefore, we propose the necessary standards for an ideal benchmark to evaluate the information extraction ability of PTLMs.
no code implementations • 3 Feb 2024 • Ran Miao, Xueyu Chen, Liang Hu, Zhifei Zhang, Minghua Wan, Qi Zhang, Cairong Zhao
Patent documents in the patent database (PatDB) are crucial for research, development, and innovation as they contain valuable technical information.
no code implementations • 3 Feb 2024 • Ruotian Ma, Xiaolei Wang, Xin Zhou, Jian Li, Nan Du, Tao Gui, Qi Zhang, Xuanjing Huang
Despite the success, the underlying mechanism of this approach remains unexplored, and the true effectiveness of LLMs as Prompt Optimizers requires further validation.
no code implementations • 3 Feb 2024 • Jianing He, Qi Zhang, Weiping Ding, Duoqian Miao, Jun Zhao, Liang Hu, Longbing Cao
DE$^3$-BERT implements a hybrid exiting strategy that supplements classic entropy-based local information with distance-based global information to enhance the estimation of prediction correctness for more reliable early exiting decisions.
1 code implementation • 2 Feb 2024 • Shihan Dou, Yan Liu, Haoxiang Jia, Limao Xiong, Enyu Zhou, Wei Shen, Junjie Shan, Caishuang Huang, Xiao Wang, Xiaoran Fan, Zhiheng Xi, Yuhao Zhou, Tao Ji, Rui Zheng, Qi Zhang, Xuanjing Huang, Tao Gui
The advancement of large language models (LLMs) has significantly propelled the field of code generation.
no code implementations • 31 Jan 2024 • Xiaoyu Li, Qi Zhang, Di Kang, Weihao Cheng, Yiming Gao, Jingbo Zhang, Zhihao Liang, Jing Liao, Yan-Pei Cao, Ying Shan
In this survey, we aim to introduce the fundamental methodologies of 3D generation methods and establish a structured roadmap, encompassing 3D representation, generation methods, datasets, and corresponding applications.
no code implementations • 31 Jan 2024 • Chenyu Shi, Xiao Wang, Qiming Ge, Songyang Gao, Xianjun Yang, Tao Gui, Qi Zhang, Xuanjing Huang, Xun Zhao, Dahua Lin
Large language models are meticulously aligned to be both helpful and harmless.
1 code implementation • 30 Jan 2024 • Xiaoran Fan, Tao Ji, Changhao Jiang, Shuo Li, Senjie Jin, Sirui Song, Junke Wang, Boyang Hong, Lu Chen, Guodong Zheng, Ming Zhang, Caishuang Huang, Rui Zheng, Zhiheng Xi, Yuhao Zhou, Shihan Dou, Junjie Ye, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang
This technique introduces a fusion network to unify the processing of outputs from different visual experts, while bridging the gap between image encoders and pre-trained LLMs.
Ranked #43 on Visual Question Answering on MM-Vet
1 code implementation • 21 Jan 2024 • Songyang Gao, Qiming Ge, Wei Shen, Shihan Dou, Junjie Ye, Xiao Wang, Rui Zheng, Yicheng Zou, Zhi Chen, Hang Yan, Qi Zhang, Dahua Lin
This reliance limits the applicability of RLHF and hinders the development of professional assistants tailored to diverse human preferences.
no code implementations • 19 Jan 2024 • Nan Li, Alexandros Iosifidis, Qi Zhang
To effectively trade-off communication, computation, and inference accuracy, we design a reward function and formulate the offloading problem of CNN inference as a maximization problem with the goal of maximizing the average inference accuracy and throughput over the long term.
1 code implementation • 16 Jan 2024 • Junjie Ye, Yilong Wu, Songyang Gao, Caishuang Huang, Sixian Li, Guanyu Li, Xiaoran Fan, Qi Zhang, Tao Gui, Xuanjing Huang
To bridge this gap, we introduce RoTBench, a multi-level benchmark for evaluating the robustness of LLMs in tool learning.
1 code implementation • 14 Jan 2024 • Ting Jiang, Shaohan Huang, Shengyue Luo, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang, Deqing Wang, Fuzhen Zhuang
To enhance the domain-specific capabilities of large language models, continued pre-training on a domain-specific corpus is a prevalent method.
no code implementations • 13 Jan 2024 • Lu Wang, Chao Du, Pu Zhao, Chuan Luo, Zhangchi Zhu, Bo Qiao, Wei Zhang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
To correct the negative sampling bias, we propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
no code implementations • 13 Jan 2024 • Lu Wang, Mayukh Das, Fangkai Yang, Chao Duo, Bo Qiao, Hang Dong, Si Qin, Chetan Bansal, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
We address the challenge of learning safe and robust decision policies in the presence of uncertainty, in the context of the real scientific problem of adaptive resource oversubscription to enhance resource efficiency while ensuring safety against resource congestion risk.
1 code implementation • 11 Jan 2024 • Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, Songyang Gao, Nuo Xu, Yuhao Zhou, Xiaoran Fan, Zhiheng Xi, Jun Zhao, Xiao Wang, Tao Ji, Hang Yan, Lixing Shen, Zhan Chen, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset and fully leverage high-quality preference data.
no code implementations • 7 Jan 2024 • Zhangkai Wu, Longbing Cao, Qi Zhang, Junxian Zhou, Hui Chen
Due to their unsupervised training and uncertainty estimation, deep Variational Autoencoders (VAEs) have become powerful tools for reconstruction-based Time Series Anomaly Detection (TSAD).
no code implementations • 2 Jan 2024 • Jun Zhao, Zhihao Zhang, Luhui Gao, Qi Zhang, Tao Gui, Xuanjing Huang
In recent times, substantial advancements have been witnessed in large language models (LLMs), exemplified by ChatGPT, showcasing remarkable proficiency across a range of complex tasks.
1 code implementation • 1 Jan 2024 • Junjie Ye, Guanyu Li, Songyang Gao, Caishuang Huang, Yilong Wu, Sixian Li, Xiaoran Fan, Shihan Dou, Qi Zhang, Tao Gui, Xuanjing Huang
Furthermore, a sole emphasis on outcomes disregards the intricate capabilities essential for LLMs to effectively utilize tools.
no code implementations • 21 Dec 2023 • Guangyin Bao, Qi Zhang, Duoqian Miao, Zixuan Gong, Liang Hu, Ke Liu, Yang Liu, Chongyang Shi
In real-world scenarios, multimodal federated learning often faces the practical challenge of intricate modality missing, which poses constraints on building federated frameworks and significantly degrades model inference accuracy.
1 code implementation • 21 Dec 2023 • Jiayu Lin, Rong Ye, Meng Han, Qi Zhang, Ruofei Lai, Xinyu Zhang, Zhao Cao, Xuanjing Huang, Zhongyu Wei
The results show the competitiveness of our proposed framework and evaluator in counter-argument generation tasks.
1 code implementation • 18 Dec 2023 • An Lao, Qi Zhang, Chongyang Shi, Longbing Cao, Kun Yi, Liang Hu, Duoqian Miao
Multimodal content, such as mixing text with images, presents significant challenges to rumor detection in social media.
no code implementations • 16 Dec 2023 • Jingyi Zhou, Jie zhou, Jiabao Zhao, Siyin Wang, Haijun Shan, Gui Tao, Qi Zhang, Xuanjing Huang
Few-shot text classification has attracted great interest in both academia and industry due to the lack of labeled data in many fields.
1 code implementation • 15 Dec 2023 • Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, ShiLiang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to align with human instructions and enhance their capabilities in downstream tasks.
no code implementations • 12 Dec 2023 • Shaopeng Zhai, Jie Wang, Tianyi Zhang, Fuxian Huang, Qi Zhang, Ming Zhou, Jing Hou, Yu Qiao, Yu Liu
Building embodied agents on integrating Large Language Models (LLMs) and Reinforcement Learning (RL) have revolutionized human-AI interaction: researchers can now leverage language instructions to plan decision-making for open-ended tasks.
no code implementations • 12 Dec 2023 • Yue Zhang, Ming Zhang, Haipeng Yuan, Shichun Liu, Yongyao Shi, Tao Gui, Qi Zhang, Xuanjing Huang
The three crucial questions for LLM evaluation are "what, where, and how to evaluate".
no code implementations • 6 Dec 2023 • Zixuan Gong, Qi Zhang, Guangyin Bao, Lei Zhu, Yu Zhang, Ke Liu, Liang Hu, Duoqian Miao
The limited data availability and the low signal-to-noise ratio of fMRI signals lead to the challenging task of fMRI-to-image retrieval.
no code implementations • 5 Dec 2023 • Zhen Liu, Hao Zhu, Qi Zhang, Jingde Fu, Weibing Deng, Zhan Ma, Yanwen Guo, Xun Cao
Implicit Neural Representation (INR), which utilizes a neural network to map coordinate inputs to corresponding attributes, is causing a revolution in the field of signal processing.
1 code implementation • 1 Dec 2023 • Jingcong Liang, Rong Ye, Meng Han, Qi Zhang, Ruofei Lai, Xinyu Zhang, Zhao Cao, Xuanjing Huang, Zhongyu Wei
In this paper, we propose the Hierarchical Argumentation Graph (Hi-ArG), a new structure to organize arguments.
no code implementations • 28 Nov 2023 • Jingbo Zhang, Xiaoyu Li, Qi Zhang, YanPei Cao, Ying Shan, Jing Liao
Optimization-based methods that lift text-to-image diffusion models to 3D generation often fail to preserve the texture details of the reference image, resulting in inconsistent appearances in different views.
no code implementations • 28 Nov 2023 • Xiangjun Gao, Xiaoyu Li, Chaopeng Zhang, Qi Zhang, YanPei Cao, Ying Shan, Long Quan
In this work, we propose a method to address the challenge of rendering a 3D human from a single image in a free-view manner.
1 code implementation • 26 Nov 2023 • Zhihao Liang, Qi Zhang, Ying Feng, Ying Shan, Kui Jia
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS) that leverages forward mapping volume rendering to achieve photorealistic novel view synthesis and relighting results.
no code implementations • 15 Nov 2023 • Yikun Wang, Rui Zheng, Haoming Li, Qi Zhang, Tao Gui, Fei Liu
This method trains the model to prioritize the best responses from a pool of candidates created for a particular task.
1 code implementation • 15 Nov 2023 • Yunqin Zhu, Chao Wang, Qi Zhang, Hui Xiong
In this paper, we adapt the standard diffusion model and propose a novel Graph Signal Diffusion Model for Collaborative Filtering (named GiffCF).
2 code implementations • NeurIPS 2023 • Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Defu Lian, Ning An, Longbing Cao, Zhendong Niu
FreTS mainly involves two stages: (i) Domain Conversion, which transforms time-domain signals into complex numbers in the frequency domain; (ii) Frequency Learning, which applies our redesigned MLPs to learn the real and imaginary parts of the frequency components.
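The two stages can be sketched in NumPy. This is a toy illustration of the pipeline as we read it, not the FreTS implementation: the layer sizes, single-layer "MLPs", and random weights are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 16
series = rng.normal(size=T)  # a toy univariate time series

# (i) Domain Conversion: time-domain signal -> complex frequency coefficients.
freq = np.fft.rfft(series)            # shape (T // 2 + 1,)

# (ii) Frequency Learning: separate toy one-layer "MLPs" acting on the real
# and imaginary parts of the frequency components.
d = freq.shape[0]
W_re = rng.normal(size=(d, d)) * 0.1
W_im = rng.normal(size=(d, d)) * 0.1
real_out = np.tanh(W_re @ freq.real)
imag_out = np.tanh(W_im @ freq.imag)

# Recombine the learned parts and return to the time domain for the forecast.
forecast = np.fft.irfft(real_out + 1j * imag_out, n=T)
```

The appeal of this design is that a dense operation in the frequency domain has a global receptive field over the time axis, since each frequency coefficient summarizes the whole series.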
no code implementations • 2 Nov 2023 • Xin Zhou, Yi Lu, Ruotian Ma, Tao Gui, Qi Zhang, Xuanjing Huang
Specifically, we introduce "security vectors", a few new parameters that can be separated from the LLM, to ensure LLM's responses are consistent with the harmful behavior.
no code implementations • 23 Oct 2023 • Jun Zhao, Zhihao Zhang, Yide Ma, Qi Zhang, Tao Gui, Luhui Gao, Xuanjing Huang
We have discovered a core region in LLMs that corresponds to linguistic competence, accounting for approximately 1% of the total model parameters.
1 code implementation • 22 Oct 2023 • Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang
In this paper, we propose orthogonal low-rank adaptation (O-LoRA), a simple and efficient approach for continual learning in language models, effectively mitigating catastrophic forgetting while learning new tasks.
1 code implementation • 20 Oct 2023 • Zhaoyang Wang, Shaohan Huang, Yuxuan Liu, Jiahai Wang, Minghui Song, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
In this paper, we propose a tailored learning approach to distill such reasoning ability to smaller LMs to facilitate the democratization of the exclusive reasoning ability.
no code implementations • 19 Oct 2023 • Tianchi Yang, Minghui Song, Zihan Zhang, Haizhen Huang, Weiwei Deng, Feng Sun, Qi Zhang
Generative retrieval, a new advanced paradigm for document retrieval, has recently attracted research interest, since it encodes all documents into the model and directly generates the retrieved documents.
1 code implementation • 19 Oct 2023 • Ningyu Xu, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang
We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon.
no code implementations • 18 Oct 2023 • Rui Zheng, Wei Shen, Yuan Hua, Wenbin Lai, Shihan Dou, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Haoran Huang, Tao Gui, Qi Zhang, Xuanjing Huang
In this work, we propose a novel approach that can learn a consistent policy via RL across various data groups or domains.
1 code implementation • 17 Oct 2023 • Chong Zhang, Ya Guo, Yi Tu, Huan Chen, Jinyang Tang, Huijia Zhu, Qi Zhang, Tao Gui
However, the BIO-tagging scheme relies on the correct order of model inputs, which is not guaranteed in real-world NER on scanned VrDs, where text is recognized and arranged by OCR systems.
Ranked #1 on Entity Linking on FUNSD
no code implementations • 17 Oct 2023 • Enyu Zhou, Rui Zheng, Zhiheng Xi, Songyang Gao, Xiaoran Fan, Zichu Fei, Jingting Ye, Tao Gui, Qi Zhang, Xuanjing Huang
Reports of human-like behaviors in foundation models are growing, with psychological theories providing enduring tools to investigate these behaviors.
1 code implementation • 16 Oct 2023 • Chunwei Tian, Xuanyu Zhang, Qi Zhang, Mingming Yang, Zhaojie Ju
In this paper, we present a dynamic network for image super-resolution (DSRNet), which contains a residual enhancement block, wide enhancement block, feature refinement block and construction block.
Ranked #49 on Image Super-Resolution on Set14 - 4x upscaling
1 code implementation • 14 Oct 2023 • Junjie Ye, Jie zhou, Junfeng Tian, Rui Wang, Qi Zhang, Tao Gui, Xuanjing Huang
Recently, Target-oriented Multimodal Sentiment Classification (TMSC) has gained significant attention among scholars.
1 code implementation • 10 Oct 2023 • Xiao Wang, Yuansen Zhang, Tianze Chen, Songyang Gao, Senjie Jin, Xianjun Yang, Zhiheng Xi, Rui Zheng, Yicheng Zou, Tao Gui, Qi Zhang, Xuanjing Huang
In this paper, we introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs.
1 code implementation • 9 Oct 2023 • Bolin Zhu, Xiaoze Liu, Xin Mao, Zhuo Chen, Lingbing Guo, Tao Gui, Qi Zhang
The objective of Entity Alignment (EA) is to identify equivalent entity pairs from multiple Knowledge Graphs (KGs) and create a more comprehensive and unified KG.
no code implementations • 8 Oct 2023 • Wei Shen, Rui Zheng, WenYu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, Xuanjing Huang
Reinforcement learning from human feedback serves as a crucial bridge, aligning large language models with human and societal values.
no code implementations • 4 Oct 2023 • Xianjun Yang, Xiao Wang, Qi Zhang, Linda Petzold, William Yang Wang, Xun Zhao, Dahua Lin
This study serves as a clarion call for a collective effort to overhaul and fortify the safety of open-source LLMs against malicious attackers.
no code implementations • 4 Oct 2023 • Hao Chen, Qi Zhang, Zenan Huang, Haobo Wang, Junbo Zhao
Distributional shift between domains poses great challenges to modern machine learning algorithms.
no code implementations • 2 Oct 2023 • Xin Huang, Ruizhi Shao, Qi Zhang, Hongwen Zhang, Ying Feng, Yebin Liu, Qing Wang
The main idea is to enhance the model's 2D perception of 3D geometry by learning a normal-adapted diffusion model and a normal-aligned diffusion model.
1 code implementation • NeurIPS 2023 • Hailin Zhang, Yujing Wang, Qi Chen, Ruiheng Chang, Ting Zhang, Ziming Miao, Yingyan Hou, Yang Ding, Xupeng Miao, Haonan Wang, Bochen Pang, Yuefeng Zhan, Hao Sun, Weiwei Deng, Qi Zhang, Fan Yang, Xing Xie, Mao Yang, Bin Cui
We empirically show that our model achieves better performance on the commonly used academic benchmarks MSMARCO Passage and Natural Questions, with comparable serving latency to dense retrieval solutions.
no code implementations • 23 Sep 2023 • Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
Recent advancements in large language models (LLMs) on language modeling and emergent capabilities make them a promising reference-free evaluator of natural language generation quality, and a competent alternative to human evaluation.
no code implementations • 22 Sep 2023 • Hao Zhu, Fengyi Liu, Qi Zhang, Xun Cao, Zhan Ma
This connection ensures a seamless backpropagation of gradients from the network's output back to the input coordinates, thereby enhancing regularization.
no code implementations • 22 Sep 2023 • Nishtha Mahajan, Qi Zhang
In this work, we use the communication of intent as a means to facilitate cooperation between autonomous vehicle agents.
no code implementations • 19 Sep 2023 • Yiyu Zhuang, Qi Zhang, Ying Feng, Hao Zhu, Yao Yao, Xiaoyu Li, Yan-Pei Cao, Ying Shan, Xun Cao
Drawing inspiration from voxel-based representations with the level of detail (LoD), we introduce a multi-scale tri-plane-based scene representation that is capable of capturing the LoD of the signed distance function (SDF) and the space radiance.
no code implementations • 17 Sep 2023 • Xiangrui Su, Qi Zhang, Chongyang Shi, Jiachang Liu, Liang Hu
Existing VQA methods integrate vision modeling and language understanding to explore the deep semantics of the question.
no code implementations • DLP@RecSys 2023 • Qi Zhang, Chuhan Wu, Jieming Zhu, Jingjie Li, Qinglin Jia, Ruiming Tang, Rui Zhang, Liangbi Li
We then select them in a domain-aware way to promote informative features for different domains.
1 code implementation • 15 Sep 2023 • Shiyi Zhu, Jing Ye, Wei Jiang, Siqiao Xue, Qi Zhang, Yifan Wu, Jianguo Li
In fact, anomalous behaviors harming long context extrapolation exist between Rotary Position Embedding (RoPE) and vanilla self-attention unveiled by our work.
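For context, RoPE rotates pairs of query/key dimensions by position-dependent angles so that attention scores depend only on relative distance; a minimal NumPy sketch of this property (not the paper's analysis or its proposed fix):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply Rotary Position Embedding to a vector x at position pos.
    Dimension pairs (x1[i], x2[i]) are rotated by angle pos * freqs[i]."""
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) * 2.0 / d)
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

# Key property: the dot product of two rotated vectors depends only on
# the relative distance between their positions.
q = np.random.randn(8)
k = np.random.randn(8)
s1 = rope(q, 3) @ rope(k, 7)        # distance 4
s2 = rope(q, 103) @ rope(k, 107)    # distance 4, shifted by 100
print(np.allclose(s1, s2))  # True
```

It is this interaction between the rotary angles and the attention dot product that the paper examines when explaining why extrapolation to long contexts breaks down.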
1 code implementation • 14 Sep 2023 • Zhiheng Xi, Wenxiang Chen, Xin Guo, wei he, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui
Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks.
no code implementations • 11 Sep 2023 • Yukai Miao, Yu Bai, Li Chen, Dan Li, Haifeng Sun, Xizheng Wang, Ziqiu Luo, Yanyu Ren, Dapeng Sun, Xiuting Xu, Qi Zhang, Chao Xiang, Xinchi Li
Nowadays, the versatile capabilities of Pre-trained Large Language Models (LLMs) have attracted much attention from the industry.
no code implementations • 10 Sep 2023 • Qi Zhang, Jiang Zhu, Fengzhong Qu, De Wen Soh
To overcome this fundamental bottleneck, we propose a one-bit-aided (1bit-aided) modulo sampling scheme for direction-of-arrival (DOA) estimation.
no code implementations • 1 Sep 2023 • Xin Li, Wenqing Chu, Ye Wu, Weihang Yuan, Fanglong Liu, Qi Zhang, Fu Li, Haocheng Feng, Errui Ding, Jingdong Wang
In this paper, we present VideoGen, a text-to-video generation approach, which can generate a high-definition video with high frame fidelity and strong temporal consistency using reference-guided latent diffusion.
1 code implementation • 23 Aug 2023 • Rishabh Gupta, Qi Zhang
We introduce the concept of decision-focused surrogate modeling for solving computationally challenging nonlinear optimization problems in real-time settings.
1 code implementation • 23 Aug 2023 • Dingyang Chen, Qi Zhang
Identification and analysis of symmetrical patterns in the natural world have led to significant discoveries across various scientific fields, such as the formulation of gravitational laws in physics and advancements in the study of chemical structures.
no code implementations • 20 Jul 2023 • Qi Zhang, Sipeng Zheng, Qin Jin
Temporal video grounding (TVG) aims to retrieve the time interval of a language query from an untrimmed video.
1 code implementation • 11 Jul 2023 • Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model.
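As background, the vanilla PPO objective that PPO-max builds on clips the policy probability ratio to keep each update conservative; a minimal sketch of the standard clipped surrogate (the paper's PPO-max modifications are not reproduced here):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate loss (to be minimized).
    ratio = pi_new(a|s) / pi_old(a|s)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped)

# The clip caps the incentive once the ratio leaves [1 - eps, 1 + eps],
# so even a ratio of 3.0 contributes no more than 1 + eps.
print(ppo_clip_loss(np.array([0.5, 1.0, 3.0]), np.array([1.0, 1.0, 1.0])))
# -> [-0.5, -1.0, -1.2]
```

Training instability in RLHF typically comes from exactly these large ratio excursions, which is the behavior PPO-max further constrains.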
no code implementations • 3 Jul 2023 • Chuan Qin, Le Zhang, Yihang Cheng, Rui Zha, Dazhong Shen, Qi Zhang, Xi Chen, Ying Sun, Chen Zhu, HengShu Zhu, Hui Xiong
To this end, we present an up-to-date and comprehensive survey on AI technologies used for talent analytics in the field of human resource management.
1 code implementation • 27 Jun 2023 • Songyang Gao, Shihan Dou, Yan Liu, Xiao Wang, Qi Zhang, Zhongyu Wei, Jin Ma, Ying Shan
Adversarial training is one of the best-performing methods in improving the robustness of deep language models.
1 code implementation • 27 Jun 2023 • Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang, Jin Ma, Ying Shan
Detecting adversarial samples that are carefully crafted to fool the model is a critical step to socially-secure applications.
1 code implementation • 26 Jun 2023 • Junyan Li, Li Lyna Zhang, Jiahang Xu, Yujing Wang, Shaoguang Yan, Yunqing Xia, Yuqing Yang, Ting Cao, Hao Sun, Weiwei Deng, Qi Zhang, Mao Yang
Deploying pre-trained transformer models like BERT on downstream tasks in resource-constrained scenarios is challenging due to their high inference cost, which grows rapidly with input sequence length.
no code implementations • 20 Jun 2023 • Huiguo He, Tianfu Wang, Huan Yang, Jianlong Fu, Nicholas Jing Yuan, Jian Yin, Hongyang Chao, Qi Zhang
The proposed framework consists of a large language model (LLM), a diffusion-based image generator, and a series of visual rewards by design.
no code implementations • 17 Jun 2023 • Yuxia Liu, Qi Zhang, Wei Xiao, Tianguang Chu
We propose a successive one-sided Hodrick-Prescott (SOHP) filter, from a multiple-time-scale decomposition perspective, to derive trend estimates for a time series.
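For reference, the classical two-sided HP filter that the SOHP variant builds on solves a penalized least-squares problem; a minimal NumPy sketch (the paper's one-sided, multi-scale construction is not reproduced here):

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Classical two-sided Hodrick-Prescott trend:
    tau = argmin ||y - tau||^2 + lam * ||D2 tau||^2,
    where D2 is the second-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# A one-sided variant would filter only data observed so far, e.g.
# trend[t] = hp_trend(y[: t + 1], lam)[-1] for each t.
t = np.linspace(0.0, 1.0, 50)
y = t + 0.1 * np.sin(20.0 * t)                   # linear trend + cycle
trend = hp_trend(y, lam=100.0)
# The estimated trend is strictly smoother than the raw series:
print(np.sum(np.diff(trend, 2) ** 2) < np.sum(np.diff(y, 2) ** 2))  # True
```

The one-sided comment above is a generic construction, included only to indicate where the paper's successive one-sided scheme departs from the classical filter.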
no code implementations • 16 Jun 2023 • Kaushik Roy, Yuxin Zi, Manas Gaur, Jinendra Malekar, Qi Zhang, Vignesh Narayanan, Amit Sheth
In this study, we introduce Process Knowledge-infused Learning (PK-iL), a new learning paradigm that layers clinical process knowledge structures on language model outputs, enabling clinician-friendly explanations of the underlying language model predictions.
no code implementations • 8 Jun 2023 • Jun Zhao, Yongxin Zhang, Qi Zhang, Tao Gui, Zhongyu Wei, Minlong Peng, Mingming Sun
The key to the setting is selecting which instances to label.
1 code implementation • 8 Jun 2023 • Jun Zhao, WenYu Zhan, Xin Zhao, Qi Zhang, Tao Gui, Zhongyu Wei, Junzhe Wang, Minlong Peng, Mingming Sun
However, general matching methods lack explicit modeling of the above matching pattern.
1 code implementation • 8 Jun 2023 • Jun Zhao, Xin Zhao, WenYu Zhan, Qi Zhang, Tao Gui, Zhongyu Wei, Yunwen Chen, Xiang Gao, Xuanjing Huang
Inspired by text adversarial attacks, we adaptively apply small but critical perturbations to original training instances, thereby synthesizing negative instances that are more likely to be mistaken by the model as known relations.
1 code implementation • 7 Jun 2023 • Qi Zhang, Yifei Wang, Yisen Wang
Multi-modal contrastive learning (MMCL) has recently garnered considerable interest due to its superior performance in visual tasks, achieved by embedding multi-modal data, such as visual-language pairs.
1 code implementation • 2 Jun 2023 • Dingyang Chen, Qi Zhang
Executing actions in a correlated manner is a common strategy for human coordination that often leads to better cooperation, which is also potentially beneficial for cooperative multi-agent reinforcement learning (MARL).
no code implementations • 31 May 2023 • Yan Wang, Feng Shu, Zhihong Zhuang, Rongen Dong, Qi Zhang, Di wu, Liang Yang, Jiangzhou Wang
Numerical simulation results show that a 3-bit discrete phase shifter is required to achieve a trivial performance loss for a large-scale active IRS.
1 code implementation • 27 May 2023 • Yi Liu, Yuan Tian, Jianxun Lian, Xinlong Wang, Yanan Cao, Fang Fang, Wen Zhang, Haizhen Huang, Denvy Deng, Qi Zhang
Aiming at learning entity representations that can match divergent mentions, this paper proposes a Multi-View Enhanced Distillation (MVD) framework, which can effectively transfer knowledge of multiple fine-grained and mention-relevant parts within entities from cross-encoders to dual-encoders.
1 code implementation • 23 May 2023 • Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang
To enhance the multi-step reasoning capabilities of large language models, researchers have extensively explored prompting methods, notably the Chain-of-Thought (CoT) method which explicitly elicits human-like rationales.
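To illustrate the CoT format being extended (the exemplar below is hypothetical, not taken from the paper): a worked rationale is prepended to the query so the model imitates step-by-step reasoning.

```python
def build_cot_prompt(question, exemplar):
    """Prepend one worked exemplar with an explicit rationale,
    then pose the new question."""
    return (
        f"Q: {exemplar['q']}\n"
        f"A: {exemplar['steps']} The answer is {exemplar['a']}.\n\n"
        f"Q: {question}\nA:"
    )

exemplar = {
    "q": "Tom has 3 apples and buys 2 more. How many apples does he have?",
    "steps": "Tom starts with 3 apples. He buys 2 more, so 3 + 2 = 5.",
    "a": "5",
}
prompt = build_cot_prompt("A pen costs $2. How much do 4 pens cost?", exemplar)
print(prompt.count("Q:"))  # 2: one exemplar plus the new question
```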
1 code implementation • 23 May 2023 • Rui Li, Xu Chen, Chaozhuo Li, Yanming Shen, Jianan Zhao, Yujing Wang, Weihao Han, Hao Sun, Weiwei Deng, Qi Zhang, Xing Xie
Embedding models have shown great power in knowledge graph completion (KGC) task.
1 code implementation • 23 May 2023 • Siyuan Wang, Zhongyu Wei, Meng Han, Zhihao Fan, Haijun Shan, Qi Zhang, Xuanjing Huang
The results demonstrate the effectiveness of our method on logical reasoning over KGs in both inductive and transductive settings.
no code implementations • 23 May 2023 • Jiachang Liu, Qi Zhang, Chongyang Shi, Usman Naseem, Shoujin Wang, Ivor Tsang
Abstractive related work generation has attracted increasing attention in generating coherent related work that better helps readers grasp the background in the current research.
1 code implementation • 22 May 2023 • Xiao Wang, Weikang Zhou, Qi Zhang, Jie zhou, Songyang Gao, Junzhe Wang, Menghan Zhang, Xiang Gao, Yunwen Chen, Tao Gui
Pretrained language models have achieved remarkable success in various natural language processing tasks.
no code implementations • 22 May 2023 • Nan Li, Mehdi Bennis, Alexandros Iosifidis, Qi Zhang
This paper studies the computational offloading of video action recognition in edge computing.
1 code implementation • 21 May 2023 • Limao Xiong, Jie zhou, Qunxi Zhu, Xiao Wang, Yuanbin Wu, Qi Zhang, Tao Gui, Xuanjing Huang, Jin Ma, Ying Shan
Particularly, we propose a Confidence-based Partial Label Learning (CPLL) method to integrate the prior confidence (given by annotators) and posterior confidences (learned by models) for crowd-annotated NER.
1 code implementation • 20 May 2023 • Ting Wu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
Models trained with empirical risk minimization (ERM) are revealed to easily rely on spurious correlations, resulting in poor generalization.
1 code implementation • 20 May 2023 • Zihao Yue, Qi Zhang, Anwen Hu, Liang Zhang, Ziheng Wang, Qin Jin
Closer to real scenarios, the Movie Clip Narrating (MCN) task in our benchmark asks models to generate role-aware narration paragraphs for complete movie clips where no actors are speaking.
no code implementations • 19 May 2023 • Huitong Pan, Qi Zhang, Eduard Dragut, Cornelia Caragea, Longin Jan Latecki
We use DMDD to establish baseline performance for dataset mention detection and linking.
1 code implementation • 16 May 2023 • Ziheng Li, Shaohan Huang, Zihan Zhang, Zhi-Hong Deng, Qiang Lou, Haizhen Huang, Jian Jiao, Furu Wei, Weiwei Deng, Qi Zhang
Recent studies have shown that dual encoder models trained with the sentence-level translation ranking task are effective methods for cross-lingual sentence embedding.
no code implementations • 16 May 2023 • Hao Chen, Yiming Zhang, Qi Zhang, Hantao Yang, Xiaomeng Hu, Xuetao Ma, Yifan Yanggong, Junbo Zhao
Instruction tuning for large language models (LLMs) has gained attention from researchers due to its ability to unlock the potential of LLMs in following instructions.
no code implementations • 16 May 2023 • Arian Bakhtiarnia, Qi Zhang, Alexandros Iosifidis
The increasing prevalence of gigapixel resolutions has presented new challenges for crowd counting.
no code implementations • 11 May 2023 • Ting Wu, Jingyi Liu, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang
The principle of continual relation extraction (CRE) involves adapting to emerging novel relations while preserving old knowledge.
no code implementations • 6 May 2023 • Beiduo Chen, Shaohan Huang, Zihan Zhang, Wu Guo, ZhenHua Ling, Haizhen Huang, Furu Wei, Weiwei Deng, Qi Zhang
Besides, two self-correction courses are proposed to bridge the chasm between the two encoders by creating a "correction notebook" for secondary-supervision.
no code implementations • 4 May 2023 • Songyang Gao, Shihan Dou, Junjie Shan, Qi Zhang, Xuanjing Huang
Dataset bias, i.e., the over-reliance on dataset-specific literal heuristics, is getting increasing attention for its detrimental effect on the generalization ability of NLU models.
no code implementations • 4 May 2023 • Guoqing Yang, Fuyou Xue, Qi Zhang, Ke Xie, Chi-Wing Fu, Hui Huang
Besides, we propose B-Seg, a building instance segmentation method to establish UrbanBIS.
no code implementations • 27 Apr 2023 • Qi Zhang, Yayi Yang, Chongyang Shi, An Lao, Liang Hu, Shoujin Wang, Usman Naseem
Accordingly, we propose a novel rumor detection model with hierarchical representation on bipartite ad hoc event trees, called BAET.
no code implementations • CVPR 2023 • Xin Huang, Qi Zhang, Ying Feng, Hongdong Li, Qing Wang
In principle, our new implicit neural camera model has the potential to benefit a wide array of other inverse imaging tasks.
no code implementations • CVPR 2023 • Xin Huang, Qi Zhang, Ying Feng, Xiaoyu Li, Xuan Wang, Qing Wang
To solve this problem, we propose LIRF to aggregate the information from conical frustums to construct a ray.
no code implementations • 18 Apr 2023 • Yiyu Zhuang, Qi Zhang, Xuan Wang, Hao Zhu, Ying Feng, Xiaoyu Li, Ying Shan, Xun Cao
Recent advances in implicit neural representation have demonstrated the ability to recover detailed geometry and material from multi-view images.
1 code implementation • 17 Apr 2023 • Xiao Wang, Weikang Zhou, Can Zu, Han Xia, Tianze Chen, Yuansen Zhang, Rui Zheng, Junjie Ye, Qi Zhang, Tao Gui, Jihua Kang, Jingsheng Yang, Siyuan Li, Chunsai Du
Large language models have unlocked strong multi-task capabilities from reading instructive prompts.
Ranked #2 on Zero-shot Named Entity Recognition (NER) on CrossNER (using extra training data)
no code implementations • 14 Apr 2023 • Jaime Spencer, C. Stella Qian, Michaela Trescakova, Chris Russell, Simon Hadfield, Erich W. Graf, Wendy J. Adams, Andrew J. Schofield, James Elder, Richard Bowden, Ali Anwar, Hao Chen, Xiaozhi Chen, Kai Cheng, Yuchao Dai, Huynh Thai Hoa, Sadat Hossain, Jianmian Huang, Mohan Jing, Bo Li, Chao Li, Baojun Li, Zhiwen Liu, Stefano Mattoccia, Siegfried Mercelis, Myungwoo Nam, Matteo Poggi, Xiaohua Qi, Jiahui Ren, Yang Tang, Fabio Tosi, Linh Trinh, S. M. Nadim Uddin, Khan Muhammad Umair, Kaixuan Wang, YuFei Wang, Yixing Wang, Mochu Xiang, Guangkai Xu, Wei Yin, Jun Yu, Qi Zhang, Chaoqiang Zhao
This paper discusses the results for the second edition of the Monocular Depth Estimation Challenge (MDEC).
no code implementations • 3 Apr 2023 • Hao Zhu, Shaowen Xie, Zhen Liu, Fengyi Liu, Qi Zhang, You Zhou, Yi Lin, Zhan Ma, Xun Cao
However, the expressive power of INR is limited by the spectral bias in the network training.
no code implementations • 18 Mar 2023 • Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang shen, Jie zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang
GPT series models, such as GPT-3, CodeX, InstructGPT, ChatGPT, and so on, have gained considerable attention due to their exceptional natural language processing capabilities.
1 code implementation • 17 Mar 2023 • Yidan Zhang, Ting Zhang, Dong Chen, Yujing Wang, Qi Chen, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, Fan Yang, Mao Yang, Qingmin Liao, Baining Guo
While generative modeling has been ubiquitous in natural language processing and computer vision, its application to image retrieval remains unexplored.
1 code implementation • 15 Mar 2023 • Daixuan Cheng, Shaohan Huang, Junyu Bi, Yuefeng Zhan, Jianfeng Liu, Yujing Wang, Hao Sun, Furu Wei, Denvy Deng, Qi Zhang
Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization.
1 code implementation • 8 Mar 2023 • Yifei Wang, Qi Zhang, Tianqi Du, Jiansheng Yang, Zhouchen Lin, Yisen Wang
In recent years, contrastive learning achieves impressive results on self-supervised visual representation learning, but there still lacks a rigorous understanding of its learning dynamics.
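As background for the learning dynamics being analyzed, the InfoNCE objective underlying most contrastive methods can be sketched as follows (illustrative only, not the paper's theoretical model):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of positive pairs (z1[i], z2[i]);
    all other pairings in the batch act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # cross-entropy on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))
aligned = info_nce(z, z)                      # identical views -> low loss
misaligned = info_nce(z, rng.normal(size=(4, 16)))
print(aligned < misaligned)  # True
```

The alignment (diagonal) and uniformity (off-diagonal) terms in this loss are the two forces whose interplay such analyses typically study.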
no code implementations • 3 Mar 2023 • Qi Zhang, Siyuan Gou, Wenbin Li
The recent surge in interest in autonomous driving stems from its rapidly developing capacity to enhance safety, efficiency, and convenience.
no code implementations • 1 Mar 2023 • Xuanting Chen, Junjie Ye, Can Zu, Nuo Xu, Rui Zheng, Minlong Peng, Jie zhou, Tao Gui, Qi Zhang, Xuanjing Huang
The GPT-3.5 models have demonstrated impressive performance in various Natural Language Processing (NLP) tasks, showcasing their strong understanding and reasoning capabilities.
no code implementations • 19 Feb 2023 • Zhen Guo, Qi Zhang, Xinwei An, Qisheng Zhang, Audun Jøsang, Lance M. Kaplan, Feng Chen, Dong H. Jeong, Jin-Hee Cho
Distinguishing the types of fake news spreaders based on their intent is critical because it will effectively guide how to intervene to mitigate the spread of fake news with different approaches.
no code implementations • 14 Feb 2023 • Qi Zhang, Zijian Yang, Yilun Huang, Ze Chen, Zijian Cai, Kangxu Wang, Jiewen Zheng, Jiarong He, Jin Gao
In this paper, we present our solution to the Multilingual Information Retrieval Across a Continuum of Languages (MIRACL) challenge of WSDM CUP 2023 (https://project-miracl.github.io/).
no code implementations • 10 Feb 2023 • Ben Chen, Caihua Xiong, Qi Zhang
Aiming to improve the robustness of checkerboard corner detection on poor-quality images, such as those with lens distortion, extreme poses, and noise, we propose a novel detection algorithm that maintains high accuracy under multiple scenarios without any prior knowledge of the checkerboard pattern.
no code implementations • 4 Feb 2023 • Kun Yi, Qi Zhang, Longbing Cao, Shoujin Wang, Guodong Long, Liang Hu, Hui He, Zhendong Niu, Wei Fan, Hui Xiong
Despite the growing attention and the proliferation of research in this emerging field, there is currently a lack of a systematic review and in-depth analysis of deep learning-based time series models with FT.
no code implementations • 31 Jan 2023 • Quanda Zhang, Qi Zhang
This paper discusses the application of artificial intelligence (AI) technology in optical communication networks and 5G.
no code implementations • 30 Jan 2023 • Arian Bakhtiarnia, Qi Zhang, Alexandros Iosifidis
In this paper, we introduce PromptMix, a method for artificially boosting the size of existing datasets, that can be used to improve the performance of lightweight networks.
no code implementations • 28 Jan 2023 • Luyu Jiang, Dantong Ouyang, Qi Zhang, Liming Zhang
Local search is an effective method for solving large-scale combinatorial optimization problems, and it has made remarkable progress in recent years through several subtle mechanisms.
1 code implementation • 27 Jan 2023 • Hui He, Qi Zhang, Shoujin Wang, Kun Yi, Zhendong Niu, Longbing Cao
To bridge this significant gap, we formulate the fairness modeling problem as learning informative representations attending to both advantaged and disadvantaged variables.
no code implementations • ICCV 2023 • Junyu Bi, Daixuan Cheng, Ping Yao, Bochen Pang, Yuefeng Zhan, Chuanguang Yang, Yujing Wang, Hao Sun, Weiwei Deng, Qi Zhang
Vision-Language Pretraining (VLP) has significantly improved the performance of various vision-language tasks with the matching of images and texts.
no code implementations • CVPR 2023 • Qi Zhang, Hongdong Li, Qing Wang
Despite the proliferation of ultra wide-angle lenses on smartphone cameras, such lenses often come with severe image distortion (e.g., curved linear structures, unnaturally skewed faces).
no code implementations • ICCV 2023 • Jiang-Tian Zhai, Qi Zhang, Tong Wu, Xing-Yu Chen, Jiang-Jiang Liu, Ming-Ming Cheng
By aggregating vision-language information, the region filter selects key regions and the region adaptor updates their coordinates with text guidance.
1 code implementation • 21 Dec 2022 • Ningyu Xu, Tao Gui, Ruotian Ma, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang
We demonstrate that the distance between the distributions of different languages is highly consistent with the syntactic difference in terms of linguistic formalisms.
no code implementations • 28 Nov 2022 • Jiang-Tian Zhai, Qi Zhang, Tong Wu, Xing-Yu Chen, Jiang-Jiang Liu, Bo Ren, Ming-Ming Cheng
By aggregating cross-modal information, the region filter selects key regions and the region adaptor updates their coordinates with text guidance.
no code implementations • CVPR 2023 • Zhian Liu, Maomao Li, Yong Zhang, Cairong Wang, Qi Zhang, Jue Wang, Yongwei Nie
We rethink face swapping from the perspective of fine-grained face editing, i.e., "editing for swapping" (E4S), and propose a framework that is based on the explicit disentanglement of the shape and texture of facial components.
no code implementations • 24 Nov 2022 • Andrea Cavagna, Nan Li, Alexandros Iosifidis, Qi Zhang
The proposed Edge Intelligence framework consists of the proposed effectiveness encoding and effectiveness decoding.
no code implementations • 24 Nov 2022 • Nan Li, Alexandros Iosifidis, Qi Zhang
We design a feature compression module based on the channel attention method in CNN, to compress the intermediate data by selecting the most important features.
no code implementations • 24 Nov 2022 • Zhongtian Dong, Nan Li, Alexandros Iosifidis, Qi Zhang
It is shown that the model selection with distributed inference HALP can significantly improve service reliability compared to the conventional stand-alone computation.
no code implementations • CVPR 2023 • Yue Chen, Xingyu Chen, Xuan Wang, Qi Zhang, Yu Guo, Ying Shan, Fei Wang
Neural Radiance Fields (NeRF) have achieved photorealistic novel views synthesis; however, the requirement of accurate camera poses limits its application.
no code implementations • CVPR 2023 • Shaowen Xie, Hao Zhu, Zhen Liu, Qi Zhang, You Zhou, Xun Cao, Zhan Ma
Implicit neural representation (INR) characterizes the attributes of a signal as a function of the corresponding coordinates, and has emerged as a powerful tool for solving inverse problems.
1 code implementation • 14 Nov 2022 • Yicheng Zou, Kaitao Song, Xu Tan, Zhongkai Fu, Qi Zhang, Dongsheng Li, Tao Gui
By analyzing this dataset, we find that a large improvement in summarization quality can be achieved by providing ground-truth omission labels for the summarization model to recover omission information, which demonstrates the importance of omission detection for omission mitigation in dialogue summarization.
1 code implementation • 14 Nov 2022 • Zhiheng Xi, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
Adversarial training is one of the most powerful methods to improve the robustness of pre-trained language models (PLMs).
1 code implementation • 13 Nov 2022 • Qi Zhang, Shanshe Wang, Xinfeng Zhang, Chuanmin Jia, Zhao Wang, Siwei Ma, Wen Gao
Each score is derived from machine perceptual differences between original and compressed images.
2 code implementations • ACL 2022 • Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang
Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models.
no code implementations • 28 Oct 2022 • Qi Zhang, Jiang Zhu, Fengzhong Qu, De Wen Soh
In addition, a two-stage US LSE (USLSE) is proposed, where the line spectral signal is first recovered by iteratively executing DP and OMP, and then the parameters are estimated by applying a state-of-the-art LSE algorithm.
no code implementations • 27 Oct 2022 • Rishabh Gupta, Qi Zhang
In this work, we consider the inverse problem where we use prior decision data to uncover the underlying decision-making process in the form of a mathematical optimization model.
no code implementations • 24 Oct 2022 • Nan Li, Alexandros Iosifidis, Qi Zhang
To solve the maximization problem, we propose a graph reinforcement learning-based early-exit mechanism (GRLE), which outperforms the state-of-the-art work, deep reinforcement learning-based online offloading (DROO) and its enhanced method, DROO with early-exit mechanism (DROOE), under different dynamic scenarios.
no code implementations • 21 Oct 2022 • Qi Zhang, Zhongchang Sun, Luis C. Herrera, Shaofeng Zou
The WADD is at most of the order of the logarithm of the ARL.
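For context, WADD (worst-case average detection delay) and ARL (average run length to false alarm) are the standard trade-off metrics in quickest change detection; a classic CUSUM procedure illustrates the setting (this is textbook CUSUM, not the paper's proposed detector):

```python
import numpy as np

def cusum(x, pre_mean=0.0, post_mean=1.0, sigma=1.0, threshold=5.0):
    """Classic CUSUM for a known Gaussian mean shift: accumulate the
    log-likelihood ratio, reset at zero, and declare a change when the
    statistic exceeds the threshold. Returns the detection index or None."""
    s = 0.0
    for t, xt in enumerate(x):
        llr = ((xt - pre_mean) ** 2 - (xt - post_mean) ** 2) / (2.0 * sigma**2)
        s = max(0.0, s + llr)
        if s > threshold:
            return t
    return None

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 100),   # pre-change samples
                    rng.normal(1.0, 1.0, 100)])  # post-change samples
print(cusum(x) is not None)  # a change is detected
```

Raising the threshold lengthens the ARL (fewer false alarms) but increases the WADD, which is the logarithmic relationship the result above characterizes.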
1 code implementation • 18 Oct 2022 • Zhoujin Tian, Chaozhuo Li, Shuo Ren, Zhiqiang Zuo, Zengxuan Wen, Xinyue Hu, Xiao Han, Haizhen Huang, Denvy Deng, Qi Zhang, Xing Xie
Bilingual lexicon induction induces the word translations by aligning independently trained word embeddings in two languages.
2 code implementations • 15 Oct 2022 • Qi Zhang, Yifei Wang, Yisen Wang
Masked Autoencoders (MAE) based on a reconstruction task have risen to be a promising paradigm for self-supervised learning (SSL) and achieve state-of-the-art performance across different benchmark datasets.
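The reconstruction task at the heart of MAE starts from random patch masking, where only a small visible subset reaches the encoder; a minimal sketch (illustrative, not the paper's model):

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, rng=None):
    """MAE-style random masking: keep a random subset of patches.
    Returns the visible patches and their (sorted) original indices."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = patches.shape[0]
    n_keep = int(n * (1.0 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])
    return patches[keep_idx], keep_idx

patches = np.arange(16 * 4, dtype=float).reshape(16, 4)  # 16 patch embeddings
visible, idx = random_masking(patches)
print(visible.shape)  # (4, 4): only 25% of patches reach the encoder
```

The decoder then reconstructs the masked 75% from this visible subset plus mask tokens, which is the reconstruction objective the entry refers to.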