no code implementations • ECCV 2020 • Zilong Ji, Xiaolong Zou, Xiaohan Lin, Xiao Liu, Tiejun Huang, Si Wu
By iteratively learning with the two strategies, the attentive regions are gradually shifted from the background to the foreground and the features become more discriminative.
1 code implementation • ACL (dialdoc) 2021 • Liu Yang, Fanqi Meng, Xiao Liu, Ming-Kuang Daniel Wu, Vicent Ying, James Xu
In this work, we formulate a visual dialog as an information flow in which each piece of information is encoded with the joint visual-linguistic representation of a single dialog round.
no code implementations • NAACL 2022 • Nan Hu, Zirui Wu, Yuxuan Lai, Xiao Liu, Yansong Feng
Different from previous fact extraction and verification tasks that only consider evidence of a single format, FEVEROUS brings further challenges by extending the evidence format to both plain text and tables.
no code implementations • ACL 2022 • Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training.
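The core mechanic described above can be illustrated with a minimal numpy sketch (all names and values here are hypothetical toy choices, not the paper's implementation): a few continuous prompt vectors are prepended to frozen token embeddings, and only those prompt vectors receive gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen language-model pieces (never updated during prompt tuning).
frozen_embed = rng.normal(size=(100, 16))   # toy vocabulary of 100, dim 16
frozen_head = rng.normal(size=(16,))        # toy scoring head

# The only trainable parameters: a few continuous prompt vectors.
prompt = rng.normal(size=(4, 16)) * 0.01

def forward(token_ids, prompt):
    # Prepend the continuous prompt to the frozen token embeddings.
    seq = np.vstack([prompt, frozen_embed[token_ids]])
    return seq.mean(axis=0) @ frozen_head   # scalar score

token_ids = np.array([3, 17, 42])
target, lr = 1.0, 0.1
n = prompt.shape[0] + len(token_ids)

for _ in range(200):
    err = forward(token_ids, prompt) - target
    # Gradient of 0.5*err^2 w.r.t. each prompt row is (err / n) * frozen_head.
    prompt -= lr * np.tile((err / n) * frozen_head, (prompt.shape[0], 1))

print(round(float(forward(token_ids, prompt)), 3))
```

Because only the 4x16 prompt matrix is trained, per-task storage is a tiny fraction of the frozen model's parameters, which is the storage saving the abstract refers to.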
no code implementations • Findings (EMNLP) 2021 • Xiao Liu, Juan Hu, Qi Shen, Huan Chen
Finally, we train a BERT-like pre-training model with text and POIs’ graph embeddings to get an integrated representation of both geographic and semantic information, and apply it in the QR of POI search.
no code implementations • 13 May 2024 • Wenqi Dong, Bangbang Yang, Lin Ma, Xiao Liu, Liyuan Cui, Hujun Bao, Yuewen Ma, Zhaopeng Cui
As humans, we aspire to create media content that is both freely willed and readily controlled.
no code implementations • 7 May 2024 • Shudan Zhang, Hanlin Zhao, Xiao Liu, Qinkai Zheng, Zehan Qi, Xiaotao Gu, Xiaohan Zhang, Yuxiao Dong, Jie Tang
To fill this gap, we propose NaturalCodeBench (NCB), a challenging code benchmark designed to mirror the complexity and variety of scenarios in real coding tasks.
1 code implementation • 7 May 2024 • Xiao Liu, Chenxu Zhang, Lei Zhang
In the field of deep learning, state space models are used to process sequence data, such as time series analysis, natural language processing (NLP) and video understanding.
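The building block such models share is a discrete linear state space recurrence; the toy matrices below are illustrative values, not any specific published parameterization.

```python
import numpy as np

# Discrete linear state space model: x_{t+1} = A x_t + B u_t, y_t = C x_t.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])   # state transition
B = np.array([[1.0],
              [0.5]])        # input map
C = np.array([[1.0, 0.0]])   # readout

def ssm_scan(u_seq):
    """Run the recurrence over an input sequence, returning outputs y_t."""
    x = np.zeros((2, 1))
    ys = []
    for u in u_seq:
        x = A @ x + B * u
        ys.append((C @ x).item())
    return ys

ys = ssm_scan([1.0, 0.0, 0.0, 0.0])
print([round(y, 3) for y in ys])  # impulse response decays with the dynamics of A
```

Feeding a unit impulse and watching the output decay shows how the hidden state carries sequence history forward, which is what makes these models usable for time series and other sequence data.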
no code implementations • 25 Apr 2024 • Jaime Spencer, Fabio Tosi, Matteo Poggi, Ripudaman Singh Arora, Chris Russell, Simon Hadfield, Richard Bowden, Guangyuan Zhou, Zhengxin Li, Qiang Rao, Yiping Bao, Xiao Liu, Dohyeong Kim, Jinseong Kim, Myunghyun Kim, Mykola Lavreniuk, Rui Li, Qing Mao, Jiang Wu, Yu Zhu, Jinqiu Sun, Yanning Zhang, Suraj Patni, Aradhye Agarwal, Chetan Arora, Pihai Sun, Kui Jiang, Gang Wu, Jian Liu, Xianming Liu, Junjun Jiang, Xidan Zhang, Jianing Wei, Fangjun Wang, Zhiming Tan, Jiabao Wang, Albert Luginov, Muhammad Shahzad, Seyed Hosseini, Aleksander Trajcevski, James H. Elder
This paper discusses the results of the third edition of the Monocular Depth Estimation Challenge (MDEC).
no code implementations • 23 Apr 2024 • Wen Liang, Peipei Ran, Mengchao Bai, Xiao Liu, P. Bilha Githinji, Wei Zhao, Peiwu Qin
To better harness the potential of transformers for SOD, we propose a novel parameter-efficient fine-tuning method aimed at reducing the number of training parameters while enhancing the salient object detection capability.
3 code implementations • 11 Apr 2024 • Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu Yang, Jian Jiao, Nan Duan, Weizhu Chen
After fine-tuning, Rho-1-1B and 7B achieved state-of-the-art results of 40.6% and 51.8% on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
1 code implementation • 4 Apr 2024 • Hanyu Lai, Xiao Liu, Iat Long Iong, Shuntian Yao, Yuxuan Chen, Pengbo Shen, Hao Yu, Hanchen Zhang, Xiaohan Zhang, Yuxiao Dong, Jie Tang
Large language models (LLMs) have fueled many intelligent agent tasks, such as web navigation, but most existing agents perform far from satisfactorily on real-world webpages due to three factors: (1) the versatility of actions on webpages, (2) HTML text exceeding model processing capacity, and (3) the complexity of decision-making due to the open-domain nature of the web.
1 code implementation • 3 Apr 2024 • Yifan Xu, Xiao Liu, Xinghan Liu, Zhenyu Hou, Yueyan Li, Xiaohan Zhang, Zihan Wang, Aohan Zeng, Zhengxiao Du, Wenyi Zhao, Jie Tang, Yuxiao Dong
Large language models (LLMs) have shown excellent mastering of human language, but still struggle in real-world applications that require mathematical problem-solving.
no code implementations • 1 Apr 2024 • Zhenyu Hou, Yilin Niu, Zhengxiao Du, Xiaohan Zhang, Xiao Liu, Aohan Zeng, Qinkai Zheng, Minlie Huang, Hongning Wang, Jie Tang, Yuxiao Dong
The work presents our practices of aligning LLMs with human preferences, offering insights into the challenges and solutions in RLHF implementations.
2 code implementations • 31 Mar 2024 • Xiao Liu, Xixuan Song, Yuxiao Dong, Jie Tang
In this work, we introduce Self-Contrast, a feedback-free large language model alignment method via exploiting extensive self-generated negatives.
no code implementations • 29 Mar 2024 • Xiao Liu, Jiawei Zhang
This study introduces GPTA, a Large Language Model assistance training framework, that enhances the training of downstream task models via prefix prompt.
1 code implementation • 26 Mar 2024 • Wangyue Li, Liangzhi Li, Tong Xiang, Xiao Liu, Wei Deng, Noa Garcia
Additionally, we propose two methods to quantify the consistency and confidence of LLMs' output, which can be generalized to other QA evaluation benchmarks.
no code implementations • 5 Mar 2024 • Yuan Lin, Antai Xie, Xiao Liu
Most of the current studies on autonomous vehicle decision-making and control tasks based on reinforcement learning are conducted in simulated environments.
no code implementations • 4 Mar 2024 • Yiming Huang, Xiao Liu, Yeyun Gong, Zhibin Gou, Yelong Shen, Nan Duan, Weizhu Chen
Large language models (LLMs) have shown great potential in complex reasoning tasks, yet their performance is often hampered by the scarcity of high-quality and reasoning-focused training datasets.
Ranked #30 on Math Word Problem Solving on MATH
no code implementations • 1 Mar 2024 • Jiandong Jin, Bowen Tang, Mingxuan Ma, Xiao Liu, Yunfei Wang, Qingnan Lai, Jia Yang, Changling Zhou
We introduce Crimson, a system that enhances the strategic reasoning capabilities of Large Language Models (LLMs) within the realm of cybersecurity.
no code implementations • 1 Mar 2024 • Ruichen Xu, Xiao Liu, Jinming Xu, Yuan Lin
We introduce safe hybrid-action reinforcement learning into discretionary lane changing for the first time and propose the Parameterized Soft Actor-Critic with PID Lagrangian (PASAC-PIDLag) algorithm.
1 code implementation • 29 Feb 2024 • Chen Zhang, Xiao Liu, Jiuheng Lin, Yansong Feng
Existing large language models struggle to support numerous low-resource languages, particularly the extremely low-resource ones where there is minimal training data available for effective parameter updating.
1 code implementation • 27 Feb 2024 • Xiao Liu, Zirui Wu, Xueqing Wu, Pan Lu, Kai-Wei Chang, Yansong Feng
To address this gap, we introduce the Quantitative Reasoning with Data (QRData) benchmark, aiming to evaluate Large Language Models' capability in statistical and causal reasoning with real-world data.
no code implementations • 26 Feb 2024 • Xiao Liu, Mingyuan Li, Xu Wang, Guangsheng Yu, Wei Ni, Lixiang Li, Haipeng Peng, Renping Liu
To address this, we propose Blockchained Federated Unlearning (BlockFUL), a generic framework that redesigns the blockchain structure using Chameleon Hash (CH) technology to mitigate the complexity of model updating, thereby reducing the computational and consensus costs of unlearning tasks. Furthermore, BlockFUL supports various federated unlearning methods, ensuring the integrity and traceability of model updates, whether conducted in parallel or serial.
no code implementations • 26 Feb 2024 • Peng Gao, Xiao Liu, Yu Wang, Ru-Yue Yuan
To expedite the search process, a random channel selection strategy is employed prior to assessing operation candidates.
no code implementations • 23 Feb 2024 • Francis Engelmann, Ayca Takmaz, Jonas Schult, Elisabetta Fedele, Johanna Wald, Songyou Peng, Xi Wang, Or Litany, Siyu Tang, Federico Tombari, Marc Pollefeys, Leonidas Guibas, Hongbo Tian, Chunjie Wang, Xiaosheng Yan, Bingwen Wang, Xuanyang Zhang, Xiao Liu, Phuc Nguyen, Khoi Nguyen, Anh Tran, Cuong Pham, Zhening Huang, Xiaoyang Wu, Xi Chen, Hengshuang Zhao, Lei Zhu, Joan Lasenby
This report provides an overview of the challenge hosted at the OpenSUN3D Workshop on Open-Vocabulary 3D Scene Understanding held in conjunction with ICCV 2023.
no code implementations • 23 Feb 2024 • Yiran Liu, Ke Yang, Zehan Qi, Xiao Liu, Yang Yu, ChengXiang Zhai
The growing integration of large language models (LLMs) into social operations amplifies their impact on decisions in crucial areas such as economics, law, education, and healthcare, raising public concerns about these models' discrimination-related safety and reliability.
no code implementations • 22 Feb 2024 • Yu Gu, Yiheng Shu, Hao Yu, Xiao Liu, Yuxiao Dong, Jie Tang, Jayanth Srinivasa, Hugo Latapie, Yu Su
The applications of large language models (LLMs) have expanded well beyond the confines of text processing, signaling a new era where LLMs are envisioned as generalist language agents capable of operating within complex real-world environments.
no code implementations • 16 Feb 2024 • Jun Cen, Chenfei Wu, Xiao Liu, Shengming Yin, Yixuan Pei, Jinglong Yang, Qifeng Chen, Nan Duan, JianGuo Zhang
Large Language Models (LLMs) and Large Multi-modality Models (LMMs) have demonstrated remarkable decision-making capabilities on a variety of tasks.
no code implementations • 27 Jan 2024 • Xiao Liu, Alessandra Mileo, Alan F. Smeaton
In-situ monitoring incorporating data from visual and other sensor technologies, allows the collection of extensive datasets during the Additive Manufacturing (AM) process.
no code implementations • 25 Jan 2024 • Guangyi Chen, Yifan Shen, Zhenhao Chen, Xiangchen Song, Yuewen Sun, Weiran Yao, Xiao Liu, Kun Zhang
Identifying the underlying time-delayed latent causal processes in sequential data is vital for grasping temporal dynamics and making downstream reasoning.
no code implementations • 14 Jan 2024 • Xiao Liu, Jie Zhao, Wubing Chen, Mao Tan, Yongxing Su
To address this issue, we propose a novel self-interpretable structure, named Backbone Extract Tree (BET), to better explain the agent's behavior by identifying error-prone states.
1 code implementation • 10 Jan 2024 • Xiao Liu, Yansong Feng, Kai-Wei Chang
Motivated by the definition of probability of sufficiency (PS) in the causal literature, we propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.
1 code implementation • 8 Jan 2024 • Lijun Zhang, Xiao Liu, Antoni Viros Martin, Cindy Xiong Bearfield, Yuriy Brun, Hui Guan
Watermarking images is critical for tracking image provenance and claiming ownership.
no code implementations • 4 Dec 2023 • Yiming Huang, Zhenghao Lin, Xiao Liu, Yeyun Gong, Shuai Lu, Fangyu Lei, Yaobo Liang, Yelong Shen, Chen Lin, Nan Duan, Weizhu Chen
Large language models (LLMs) have demonstrated impressive reasoning capabilities, yet there is ongoing debate about these abilities and the potential data contamination problem recently.
2 code implementations • 30 Nov 2023 • Pei Ke, Bosi Wen, Zhuoer Feng, Xiao Liu, Xuanyu Lei, Jiale Cheng, Shengyuan Wang, Aohan Zeng, Yuxiao Dong, Hongning Wang, Jie Tang, Minlie Huang
Since the natural language processing (NLP) community started to make large language models (LLMs), such as GPT-4, act as a critic to evaluate the quality of generated texts, most of them only train a critique generation model of a specific scale on specific datasets.
2 code implementations • 30 Nov 2023 • Xiao Liu, Xuanyu Lei, Shengyuan Wang, Yue Huang, Zhuoer Feng, Bosi Wen, Jiale Cheng, Pei Ke, Yifan Xu, Weng Lam Tam, Xiaohan Zhang, Lichao Sun, Hongning Wang, Jing Zhang, Minlie Huang, Yuxiao Dong, Jie Tang
We will provide public APIs for evaluating AlignBench with CritiqueLLM to facilitate the evaluation of LLMs' Chinese alignment.
no code implementations • 21 Nov 2023 • Yawen Guo, Xiao Liu, Anjana Susarla, Rema Padman
This study utilizes data analysis methods to retrieve medical information from YouTube videos concerning colonoscopy to manage health conditions.
1 code implementation • 21 Nov 2023 • Xiao Liu, Jianfeng Lin, Jiawei Zhang
The proliferation of Large Language Models like ChatGPT has significantly advanced language understanding and generation, impacting a broad spectrum of applications.
1 code implementation • 7 Nov 2023 • Jiale Cheng, Xiao Liu, Kehan Zheng, Pei Ke, Hongning Wang, Yuxiao Dong, Jie Tang, Minlie Huang
However, these models are often not well aligned with human intents, which calls for additional treatment: the alignment problem.
no code implementations • 1 Nov 2023 • Konstantinos Vilouras, Xiao Liu, Pedro Sanchez, Alison Q. O'Neil, Sotirios A. Tsaftaris
Knowledge distillation enables fast and effective transfer of features learned from a bigger model to a smaller one.
no code implementations • 19 Oct 2023 • Bangbang Yang, Wenqi Dong, Lin Ma, WenBo Hu, Xiao Liu, Zhaopeng Cui, Yuewen Ma
To ensure meaningful and aligned textures to the scene, we develop a novel coarse-to-fine panoramic texture generation approach with dual texture alignment, which both considers the geometry and texture cues of the captured scenes.
1 code implementation • 19 Oct 2023 • Aohan Zeng, Mingdao Liu, Rui Lu, Bowen Wang, Xiao Liu, Yuxiao Dong, Jie Tang
Though many prompting methods have been proposed to complete particular agent tasks, there is a lack of research focusing on improving the agent capabilities of LLMs themselves without compromising their general abilities.
no code implementations • 10 Oct 2023 • Xiao Liu, Antanas Kascenas, Hannah Watson, Sotirios A. Tsaftaris, Alison Q. O'Neil
For brain tumour segmentation, deep learning models can achieve human expert-level performance given a large amount of data and pixel-level annotations.
1 code implementation • 2 Oct 2023 • Naitik Khandelwal, Xiao Liu, Mengmi Zhang
To address the lack of continual learning methodologies in SGG, we introduce the comprehensive Continual ScenE Graph Generation (CSEGG) dataset along with 3 learning scenarios and 8 evaluation metrics.
1 code implementation • 13 Sep 2023 • Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang, Chong Long, Xiao Liu, Xuanyu Lei, Jie Tang, Minlie Huang
Notably, SafetyBench also incorporates both Chinese and English data, facilitating the evaluation in both languages.
no code implementations • 12 Sep 2023 • Xiao Liu, Wubing Chen, Mao Tan
We then design a fidelity-induced mechanism by integrating a fidelity measurement into the reinforcement learning feedback.
1 code implementation • 28 Aug 2023 • Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li
In this paper, we introduce LongBench, the first bilingual, multi-task benchmark for long context understanding, enabling a more rigorous evaluation of this capability.
1 code implementation • ICCV 2023 • Guangyi Chen, Xiao Liu, Guangrun Wang, Kun Zhang, Philip H. S. Torr, Xiao-Ping Zhang, Yansong Tang
To bridge these gaps, in this paper, we propose Tem-Adapter, which enables the learning of temporal dynamics and complex semantics by a visual Temporal Aligner and a textual Semantic Aligner.
Ranked #1 on Video Question Answering on SUTD-TrafficQA
1 code implementation • ICCV 2023 • Xin Lin, Chao Ren, Xiao Liu, Jie Huang, Yinjie Lei
Although unsupervised approaches based on generative adversarial networks offer a promising solution for denoising without paired datasets, they struggle to surpass the performance limits of conventional GAN-based unsupervised frameworks without significantly modifying existing structures or increasing the computational complexity of denoisers.
1 code implementation • 7 Aug 2023 • Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting.
1 code implementation • ICCV 2023 • Yizhong Pan, Xiao Liu, Xiangyu Liao, Yuanzhouhan Cao, Chao Ren
With sufficient paired training samples, the supervised deep learning methods have attracted much attention in image denoising because of their superior performance.
no code implementations • ICCV 2023 • WenBo Hu, Yuling Wang, Lin Ma, Bangbang Yang, Lin Gao, Xiao Liu, Yuewen Ma
Despite the tremendous progress in neural radiance fields (NeRF), we still face a dilemma of the trade-off between quality and efficiency, e.g., MipNeRF presents fine-detailed and anti-aliased renderings but takes days for training, while Instant-ngp can accomplish the reconstruction in a few minutes but suffers from blurring or aliasing when rendering at various distances or resolutions due to ignoring the sampling area.
no code implementations • 14 Jul 2023 • Xiao Liu, Alessandra Mileo, Alan F. Smeaton
The development of computer vision and in-situ monitoring using visual sensors allows the collection of large datasets from the additive manufacturing (AM) process.
1 code implementation • 7 Jul 2023 • Xiao Liu, Guangyi Chen, Yansong Tang, Guangrun Wang, Xiao-Ping Zhang, Ser-Nam Lim
Composing simple elements into complex concepts is crucial yet challenging, especially for 3D action generation.
no code implementations • 4 Jul 2023 • Kaihui Cheng, Chule Yang, Xiao Liu, Naiyang Guan, Zhiyuan Wang
Few-shot classification aims to adapt to new tasks with limited labeled examples.
1 code implementation • 4 Jul 2023 • Dongsheng Luo, Yuchen Bian, Yaowei Yan, Xiong Yu, Jun Huan, Xiao Liu, Xiang Zhang
To take advantage of rich information in multiple networks and make better inferences on entities, in this study, we propose random walk on multiple networks, RWM.
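The primitive that multi-network walkers build on is a random walk with restart on a single graph; the sketch below shows that primitive on a toy 4-node graph (toy adjacency and restart probability, not the paper's RWM algorithm itself).

```python
import numpy as np

# Toy undirected graph and its row-stochastic transition matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)

restart, seed = 0.15, 0
e_seed = np.eye(4)[seed]
p = e_seed.copy()
for _ in range(100):
    # Walk one step with prob 1-restart; jump back to the seed otherwise.
    p = (1 - restart) * (P.T @ p) + restart * e_seed

print(np.round(p, 3))  # relevance of each node to the seed node
```

The resulting stationary distribution scores every node by its proximity to the seed; multi-network variants couple several such walkers so that evidence from one network steers the walks on the others.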
no code implementations • 21 Jun 2023 • Zheng Wang, Xiaoliang Fan, Zhaopeng Peng, Xueheng Li, Ziqi Yang, Mingkuan Feng, Zhicheng Yang, Xiao Liu, Cheng Wang
Federated learning (FL) has found numerous applications in healthcare, finance, and IoT scenarios.
no code implementations • 13 Jun 2023 • Xiao Liu, Pedro Sanchez, Spyridon Thermos, Alison Q. O'Neil, Sotirios A. Tsaftaris
By modelling the compositional representations with learnable von-Mises-Fisher (vMF) kernels, we explore how different design and learning biases can be used to enforce the representations to be more compositionally equivariant under un-, weakly-, and semi-supervised settings.
2 code implementations • 13 Jun 2023 • Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, Jie Tang
We present WebGLM, a web-enhanced question-answering system based on the General Language Model (GLM).
1 code implementation • 1 Jun 2023 • Chen Zhang, Jiuheng Lin, Xiao Liu, Yuxuan Lai, Yansong Feng, Dongyan Zhao
We further analyze how well different paradigms of current multi-answer MRC models deal with different types of multi-answer instances.
no code implementations • 31 May 2023 • Haopeng Zhang, Xiao Liu, Jiawei Zhang
The extended structural context has made scientific paper summarization a challenging task.
1 code implementation • 30 May 2023 • Xiao Liu, Da Yin, Chen Zhang, Yansong Feng, Dongyan Zhao
Causal reasoning, the ability to identify cause-and-effect relationship, is crucial in human thinking.
no code implementations • 24 May 2023 • Kejuan Yang, Xiao Liu, Kaiwen Men, Aohan Zeng, Yuxiao Dong, Jie Tang
We identify two crucial limitations in the evaluation of recent parallel-integrated method Parallel Context Windows (PCW), which extends the maximum context lengths of language models, e.g., 2048 for LLaMA, by harnessing window-wise attention and positional embedding techniques.
1 code implementation • 24 May 2023 • Hao Sun, Xiao Liu, Yeyun Gong, Yan Zhang, Daxin Jiang, Linjun Yang, Nan Duan
With the advance of large language models (LLMs), the research field of LLM applications has become increasingly popular, and the idea of constructing pipelines to accomplish complex tasks by stacking LLM API calls has become a reality.
1 code implementation • 24 May 2023 • Haopeng Zhang, Xiao Liu, Jiawei Zhang
Text summarization systems have made significant progress in recent years, but typically generate summaries in one single step.
1 code implementation • 23 May 2023 • Da Yin, Xiao Liu, Fan Yin, Ming Zhong, Hritik Bansal, Jiawei Han, Kai-Wei Chang
Instruction tuning has emerged to enhance the capabilities of large language models (LLMs) to comprehend instructions and generate appropriate responses.
1 code implementation • 23 May 2023 • Junyuan Ouyang, Xiao Liu, Haoyao Chen
While point-based neural architectures have demonstrated their efficacy, the time-consuming sampler currently prevents them from performing real-time reasoning on scene-level point clouds.
1 code implementation • 20 May 2023 • Lijun Zhang, Xiao Liu, Kaleel Mahmood, Caiwen Ding, Hui Guan
We then introduce a novel attack framework, the Gradient Balancing Multi-Task Attack (GB-MTA), which treats attacking a multi-task model as an optimization problem.
1 code implementation • NeurIPS 2023 • Tong Wu, Zhihao Fan, Xiao Liu, Yeyun Gong, Yelong Shen, Jian Jiao, Hai-Tao Zheng, Juntao Li, Zhongyu Wei, Jian Guo, Nan Duan, Weizhu Chen
Diffusion models have gained significant attention in the realm of image generation due to their exceptional performance.
no code implementations • 16 May 2023 • Bo wang, Heyan Huang, Xiaochi Wei, Ge Shi, Xiao Liu, Chong Feng, Tong Zhou, Shuaiqiang Wang, Dawei Yin
Event extraction aims to recognize pre-defined event triggers and arguments from texts, but suffers from a lack of high-quality annotations.
1 code implementation • 10 May 2023 • Yiqing Xie, Xiao Liu, Chenyan Xiong
Based on their commonalities, we train an unsupervised dense retriever, Anchor-DR, with a contrastive learning task that matches the anchor text and the linked document.
no code implementations • 6 May 2023 • Weijia Wang, Xuequan Lu, Di Shao, Xiao Liu, Richard Dazeley, Antonio Robles-Kelly, Wei Pan
Existing normal estimation methods for point clouds are often less robust to severe noise and complex geometric structures.
1 code implementation • 2 May 2023 • Haopeng Zhang, Xiao Liu, Jiawei Zhang
This paper proposes DiffuSum, a novel paradigm for extractive summarization, by directly generating the desired summary sentence representations with diffusion models and extracting sentences based on sentence representation matching.
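The sentence-representation-matching step of such a pipeline can be sketched in a few lines of numpy (matching only; the diffusion generator is out of scope here, and all embeddings are hypothetical toy values): each generated summary vector selects the source sentence with the highest cosine similarity.

```python
import numpy as np

def extract_by_matching(sent_embs, gen_embs):
    """For each generated summary representation, pick the source
    sentence with the highest cosine similarity."""
    S = np.asarray(sent_embs, dtype=float)
    G = np.asarray(gen_embs, dtype=float)
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    G = G / np.linalg.norm(G, axis=1, keepdims=True)
    return (G @ S.T).argmax(axis=1)   # best source sentence per summary vector

sentence_embs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy source sentences
generated = [[0.9, 0.1], [0.4, 0.5]]                  # toy generated vectors
print(extract_by_matching(sentence_embs, generated))  # indices to extract
```

Extracting by nearest-neighbour matching keeps the output strictly extractive even though the summary representations themselves are generated.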
1 code implementation • Tiny Papers @ ICLR 2023 • Xiao Liu, Jian Zhang, Heng Zhang, Fuzhao Xue, Yang You
We evaluate our model on various dialogue understanding tasks including dialogue relation extraction, dialogue emotion recognition, and dialogue act classification.
Ranked #1 on Dialog Relation Extraction on DialogRE
1 code implementation • NeurIPS 2023 • Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, Yuxiao Dong
We present a comprehensive solution to learn and improve text-to-image models from human preference feedback.
2 code implementations • 10 Apr 2023 • Zhenyu Hou, Yufei He, Yukuo Cen, Xiao Liu, Yuxiao Dong, Evgeny Kharlamov, Jie Tang
Graph self-supervised learning (SSL), including contrastive and generative approaches, offers great potential to address the fundamental challenge of label scarcity in real-world graph data.
no code implementations • 9 Apr 2023 • Haopeng Zhang, Xiao Liu, Jiawei Zhang
In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance.
1 code implementation • 8 Feb 2023 • Xiao Liu, Kyongmin Yeo
The irregular sampling scheme is the general scenario, while computationally efficient solutions are available in the spectral domain for non-uniform and shifted uniform sampling.
1 code implementation • 15 Dec 2022 • Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen
Pre-trained Transformers (e.g., BERT) have been commonly used in existing dense retrieval methods for parameter initialization, and recent studies are exploring more effective pre-training tasks for further improving the quality of dense vectors.
no code implementations • 12 Dec 2022 • Xiao Liu, Alan F. Smeaton, Alessandra Mileo
More specifically, this paper will look at two scenarios: firstly, using convolutional neural networks (CNNs) to automatically inspect and classify emission data collected by in-situ monitoring and secondly, applying Active Learning techniques to the developed classification model to construct a human-in-the-loop mechanism in order to accelerate the labeling process of the emission data.
1 code implementation • 10 Dec 2022 • Hao Sun, Xiao Liu, Yeyun Gong, Anlei Dong, Jingwen Lu, Yan Zhang, Linjun Yang, Rangan Majumder, Nan Duan
Knowledge distillation is often used to transfer knowledge from a strong teacher model to a relatively weak student model.
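The standard soft-label distillation objective that transfers are typically built on can be sketched as a KL divergence between temperature-softened teacher and student distributions (a generic sketch with toy logits, not this paper's specific method).

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions;
    the T*T factor keeps gradient magnitudes comparable across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

teacher = [4.0, 1.0, 0.5]
print(distill_loss(teacher, [3.9, 1.1, 0.4]))  # near-matching student: small
print(distill_loss(teacher, [0.5, 1.0, 4.0]))  # disagreeing student: large
```

Raising the temperature spreads the teacher's probability mass over non-argmax classes, exposing the "dark knowledge" in its relative logit gaps that hard labels discard.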
no code implementations • 29 Nov 2022 • Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan
ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages affiliated with rich information.
1 code implementation • 28 Nov 2022 • Zihan Chen, Ziyue Wang, JunJie Huang, Wentao Zhao, Xiao Liu, Dejian Guan
Adding perturbations via utilizing auxiliary gradient information or discarding existing details of the benign images are two common approaches for generating adversarial examples.
no code implementations • 23 Nov 2022 • Xiao Liu, Ankur Sikarwar, Gabriel Kreiman, Zenglin Shi, Mengmi Zhang
To better accommodate the object-centric nature of current downstream tasks such as object recognition and detection, various methods have been proposed to suppress contextual biases or disentangle objects from contexts.
no code implementations • 14 Nov 2022 • Yiran Liu, Xiao Liu, Haotian Chen, Yang Yu
We use our theoretical framework to explain why the current debiasing methods cause performance degradation.
no code implementations • 25 Oct 2022 • Chenguang Wang, Xiao Liu, Dawn Song
Instead of focusing on pre-defined relations, we create an OIE benchmark aiming to fully examine the open relational information present in the pre-trained LMs.
1 code implementation • 21 Oct 2022 • Kun Zhou, Yeyun Gong, Xiao Liu, Wayne Xin Zhao, Yelong Shen, Anlei Dong, Jingwen Lu, Rangan Majumder, Ji-Rong Wen, Nan Duan, Weizhu Chen
Thus, we propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.
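A sampling distribution in this spirit can be sketched as weighting candidate negatives by how close their retrieval score is to the positive's, so neither trivially easy negatives nor likely false negatives dominate (a hedged sketch; the parameter `a` and all scores below are illustrative, not the paper's settings).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ambiguous_negatives(neg_scores, pos_score, k, a=10.0):
    """Sample k negatives with probability peaked around the positive's
    score: very low scores are too easy, very high scores are likely
    false negatives."""
    neg_scores = np.asarray(neg_scores, dtype=float)
    w = np.exp(-a * (neg_scores - pos_score) ** 2)
    p = w / w.sum()
    return rng.choice(len(neg_scores), size=k, replace=False, p=p)

pos = 0.8
negs = [0.05, 0.3, 0.75, 0.78, 0.95, 2.5]  # retrieval scores of candidates
idx = sample_ambiguous_negatives(negs, pos, k=2)
print(sorted(idx.tolist()))
```

With these toy values, candidates scoring near 0.8 carry almost all the sampling mass, while the outlier scoring 2.5 is effectively never drawn.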
1 code implementation • 20 Oct 2022 • Xiao Liu, Yansong Feng, Jizhi Tang, Chengang Hu, Dongyan Zhao
Although pretrained language models can generate fluent recipe texts, they fail to truly learn and use the culinary knowledge in a compositional way.
1 code implementation • 12 Oct 2022 • Pedro Sanchez, Xiao Liu, Alison Q O'Neil, Sotirios A. Tsaftaris
We introduce theory for updating the learned Hessian without re-training the neural network, and we show that computing with a subset of samples gives an accurate approximation of the ordering, which allows scaling to datasets with more samples and variables.
1 code implementation • 10 Oct 2022 • Wubing Chen, Wenbin Li, Xiao Liu, Shangdong Yang, Yang Gao
Empirically, we evaluate MAPPG on the well-known matrix game and differential game, and verify that MAPPG can converge to the global optimum for both discrete and continuous action spaces.
1 code implementation • 9 Oct 2022 • Haopeng Zhang, Xiao Liu, Jiawei Zhang
Extractive summarization for long documents is challenging due to the extended structured input context.
no code implementations • 8 Oct 2022 • Xiao Liu, Lijun Zhang, Hui Guan
Message passing neural networks (MPNNs) learn the representation of graph-structured data based on graph original information, including node features and graph structures, and have shown astonishing improvement in node classification tasks.
Ranked #5 on Node Classification on arXiv-year
10 code implementations • 5 Oct 2022 • Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, WenGuang Chen, Peng Zhang, Yuxiao Dong, Jie Tang
We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model with 130 billion parameters.
Ranked #1 on Language Modelling on CLUE (OCNLI_50K)
1 code implementation • 27 Sep 2022 • Zhenghao Lin, Yeyun Gong, Xiao Liu, Hang Zhang, Chen Lin, Anlei Dong, Jian Jiao, Jingwen Lu, Daxin Jiang, Rangan Majumder, Nan Duan
It is common that a better teacher model results in a worse student via distillation, due to the non-negligible gap between teacher and student.
1 code implementation • 24 Aug 2022 • Fengji Zhang, Jin Liu, Yao Wan, Xiao Yu, Xiao Liu, Jacky Keung
Stack Overflow is one of the most popular programming communities where developers can seek help for their encountered problems.
1 code implementation • 16 Aug 2022 • Xiao Liu, Shiyu Zhao, Kai Su, Yukuo Cen, Jiezhong Qiu, Mengdi Zhang, Wei Wu, Yuxiao Dong, Jie Tang
In this work, we present the Knowledge Graph Transformer (kgTransformer) with masked pre-training and fine-tuning strategies.
1 code implementation • 6 Aug 2022 • Xiao Liu, Spyridon Thermos, Pedro Sanchez, Alison Q. O'Neil, Sotirios A. Tsaftaris
Maximisation of mutual information is achieved by introducing an auxiliary network and training with a latent regression loss.
1 code implementation • 25 Jul 2022 • Pedro Sanchez, Antanas Kascenas, Xiao Liu, Alison Q. O'Neil, Sotirios A. Tsaftaris
This requires training with healthy and unhealthy data in DPMs.
2 code implementations • 23 Jul 2022 • Bohan Li, Ye Yuan, Dingkang Liang, Xiao Liu, Zhilong Ji, Jinfeng Bai, Wenyu Liu, Xiang Bai
Recently, most handwritten mathematical expression recognition (HMER) methods adopt the encoder-decoder networks, which directly predict the markup sequences from formula images with the attention mechanism.
no code implementations • 16 Jul 2022 • Fanglin Chen, Xiao Liu, Bo Tang, Feiyu Xiong, Serim Hwang, Guomian Zhuang
During deployment, we combine the offline RL model with the LP model to generate a robust policy under the budget constraints.
2 code implementations • 14 Jul 2022 • Weng Lam Tam, Xiao Liu, Kaixuan Ji, Lilong Xue, Xingjian Zhang, Yuxiao Dong, Jiahua Liu, Maodi Hu, Jie Tang
By updating only 0.1% of the model parameters, the prompt tuning strategy can help retrieval models achieve better generalization performance than traditional methods in which all parameters are updated.
no code implementations • 13 Jul 2022 • Krishna Pothugunta, Xiao Liu, Anjana Susarla, Rema Padman
Studies suggest that one in three US adults use the Internet to diagnose or learn about a health concern.
1 code implementation • 29 Jun 2022 • Xiao Liu, Spyridon Thermos, Pedro Sanchez, Alison Q. O'Neil, Sotirios A. Tsaftaris
Moreover, with a reconstruction module, unlabeled data can also be used to learn the vMF kernels and likelihoods by recombining them to reconstruct the input image.
no code implementations • 29 Jun 2022 • Ruolin Su, Xiao Liu, Sotirios A. Tsaftaris
With the advent of AI learned on data, one can imagine that such rights can extend to requests to forget knowledge of a patient's data within AI models.
1 code implementation • 23 Jun 2022 • Guanzhou Wei, Venkat Krishnan, Yu Xie, Manajit Sengupta, Yingchen Zhang, Haitao Liao, Xiao Liu
Increasingly frequent wildfires significantly affect solar energy production as the atmospheric aerosols generated by wildfires diminish the incoming solar radiation to the earth.
no code implementations • 22 Jun 2022 • Xiao Liu, Xinchao Liu
When a ROM, constructed using the POD basis obtained from training data, is applied to new parameter settings, the model often lacks robustness against changes in parameters in design, control, and other real-time operation problems.
no code implementations • 25 May 2022 • Yili Shen, Xiao Liu, Cheng-Wei Ju, Jiaxu Yan, Jun Yi, Zhou Lin, Hui Guan
Subgraph representation learning based on Graph Neural Network (GNN) has exhibited broad applications in scientific advancements, such as predictions of molecular structure-property relationships and collective cellular function.
3 code implementations • 22 May 2022 • Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, Jie Tang
Despite this, contrastive learning, which heavily relies on structural data augmentation and complicated training strategies, has been the dominant approach in graph SSL, while the progress of generative SSL on graphs, especially graph autoencoders (GAEs), has thus far not reached the potential promised in other fields.
Ranked #1 on Node Classification on Cora: fixed 20 node per class
1 code implementation • Findings (ACL) 2022 • Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song
We introduce a method for improving the structural understanding abilities of language models.
Ranked #1 on Open Information Extraction on Penn Treebank
no code implementations • ACL 2022 • Xiao Liu, Heyan Huang, Ge Shi, Bo wang
We consider event extraction in a generative manner with template-based conditional generation.
1 code implementation • CVPR 2022 • Yupeng Shi, Xiao Liu, Yuxiang Wei, Zhongqin Wu, WangMeng Zuo
Semantic image synthesis is a challenging task with many practical applications.
no code implementations • 4 Apr 2022 • Xuri Ge, Joemon M. Jose, Songpei Xu, Xiao Liu, Hu Han
While region-level feature learning from local face patches via a graph neural network can encode the correlation across different AUs, pixel-wise and channel-wise feature learning via a graph attention network can enhance the discrimination ability of AU features drawn from global face features.
1 code implementation • 23 Mar 2022 • Xiao Liu, Bonan Gao, Basem Suleiman, Han You, Zisu Ma, Yu Liu, Ali Anaissi
Recommender systems have been successfully used in many domains with the help of machine learning algorithms.
1 code implementation • ACL 2022 • Xiao Liu, Da Yin, Yansong Feng, Dongyan Zhao
We probe PLMs and models with visual signals, including vision-language pretrained models and image synthesis models, on this benchmark, and find that image synthesis models are more capable of learning accurate and consistent spatial knowledge than other models.
1 code implementation • 10 Mar 2022 • Lijun Zhang, Xiao Liu, Hui Guan
Tree-structured multi-task architectures have been employed to jointly tackle multiple vision tasks in the context of multi-task learning (MTL).
no code implementations • 3 Mar 2022 • Xuri Ge, Joemon M. Jose, Pengcheng Wang, Arunachalam Iyer, Xiao Liu, Hu Han
In this paper, we propose a novel Adaptive Local-Global Relational Network (ALGRNet) for facial AU detection and use it to classify facial paralysis severity.
2 code implementations • CVPR 2022 • Ye Yuan, Xiao Liu, Wondimu Dikubab, Hui Liu, Zhilong Ji, Zhongqin Wu, Xiang Bai
In this paper, we propose a simple and efficient method for HMER, which is the first to incorporate syntax information into an encoder-decoder network.
1 code implementation • 2 Mar 2022 • Xiao Liu, Haoyun Hong, Xinghao Wang, Zeyi Chen, Evgeny Kharlamov, Yuxiao Dong, Jie Tang
We present SelfKG with efficient strategies to optimize this objective for aligning entities without label supervision.
no code implementations • 2 Mar 2022 • Xiao Liu, Shuyang Liu, Wenbin Li, Shangdong Yang, Yang Gao
Although deep reinforcement learning has become a universal solution for complex control tasks, its real-world applicability is still limited because it lacks security guarantees for policies.
1 code implementation • 12 Jan 2022 • Tanish Tyagi, Colin G. Magdamo, Ayush Noori, Zhaozhi Li, Xiao Liu, Mayuresh Deodhar, Zhuoqiao Hong, Wendong Ge, Elissa M. Ye, Yi-han Sheu, Haitham Alabsi, Laura Brenner, Gregory K. Robbins, Sahar Zafar, Nicole Benson, Lidia Moura, John Hsu, Alberto Serrano-Pozo, Dimitry Prokopenko, Rudolph E. Tanzi, Bradley T. Hyman, Deborah Blacker, Shibani S. Mukerji, M. Brandon Westover, Sudeshna Das
Dementia-related cognitive impairment (CI) is a neurodegenerative disorder, affecting over 55 million people worldwide and growing rapidly at the rate of one new case every 3 seconds.
no code implementations • 6 Jan 2022 • Di Shao, Xuequan Lu, Xiao Liu
While most existing deep learning research has focused on medical images in a supervised manner, we introduce an unsupervised method for the detection of intracranial aneurysms based on 3D point cloud data.
1 code implementation • CVPR 2022 • Boyun Li, Xiao Liu, Peng Hu, Zhongqin Wu, Jiancheng Lv, Xi Peng
In this paper, we study a challenging problem in image restoration, namely, how to develop an all-in-one method that could recover images from a variety of unknown corruption types and levels.
1 code implementation • 2 Dec 2021 • Jie Ren, Wenteng Liang, Ran Yan, Luo Mai, Shiwen Liu, Xiao Liu
Large-scale Bundle Adjustment (BA) requires massive memory and computation resources that existing BA libraries struggle to fulfill.
1 code implementation • NeurIPS 2021 • Zhenyu Huang, guocheng niu, Xiao Liu, Wenbiao Ding, Xinyan Xiao, Hua Wu, Xi Peng
Based on this observation, we reveal and study a latent and challenging direction in cross-modal matching, named noisy correspondence, which could be regarded as a new paradigm of noisy labels.
1 code implementation • CVPR 2022 • Yikang Ding, Wentao Yuan, Qingtian Zhu, Haotian Zhang, Xiangyue Liu, Yuanjiang Wang, Xiao Liu
We analogize MVS back to its nature of a feature matching task and therefore propose a powerful Feature Matching Transformer (FMT) to leverage intra- (self-) and inter- (cross-) attention to aggregate long-range context information within and across images.
Ranked #8 on 3D Reconstruction on DTU
no code implementations • 25 Nov 2021 • Shuxue Peng, Zihang He, Haotian Zhang, Ran Yan, Chuting Wang, Qingtian Zhu, Xiao Liu
In this paper, we present a visual localization pipeline, namely MegLoc, for robust and accurate 6-DoF pose estimation under varying scenarios, including indoor and outdoor scenes, different time across a day, different seasons across a year, and even across years.
no code implementations • 19 Nov 2021 • Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei Zaharia
This paper investigates deep neural network (DNN) compression from the perspective of compactly representing and storing trained parameters.
1 code implementation • 13 Nov 2021 • Tanish Tyagi, Colin G. Magdamo, Ayush Noori, Zhaozhi Li, Xiao Liu, Mayuresh Deodhar, Zhuoqiao Hong, Wendong Ge, Elissa M. Ye, Yi-han Sheu, Haitham Alabsi, Laura Brenner, Gregory K. Robbins, Sahar Zafar, Nicole Benson, Lidia Moura, John Hsu, Alberto Serrano-Pozo, Dimitry Prokopenko, Rudolph E. Tanzi, Bradley T. Hyman, Deborah Blacker, Shibani S. Mukerji, M. Brandon Westover, Sudeshna Das
Automated mining of these notes presents an opportunity to label patients with cognitive impairment in EHR data.
no code implementations • 4 Nov 2021 • Yixuan Zou, Yuanwei Liu, Kaifeng Han, Xiao Liu, Kok Keong Chai
Extensive simulation results demonstrate that the proposed QoS-based NOMA network achieves significantly higher transmission throughput compared to the conventional orthogonal multiple access (OMA) network.
1 code implementation • 25 Oct 2021 • Lijun Zhang, Xiao Liu, Hui Guan
The first challenge is to determine what parameters to share across tasks to optimize for both memory efficiency and task accuracy.
no code implementations • 20 Oct 2021 • Weijia Wang, Xuequan Lu, Dasith de Silva Edirimuni, Xiao Liu, Antonio Robles-Kelly
It consists of two phases: (a) feature encoding which learns representations of local patches, and (b) normal estimation that takes the learned representation as input and regresses the normal vector.
4 code implementations • 14 Oct 2021 • Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training.
no code implementations • 29 Sep 2021 • Xiao Liu, Meng Wang, Zhaorong Wang, Yingfeng Chen, Yujing Hu, Changjie Fan, Chongjie Zhang
Imitation learning is one of the methods for reproducing expert demonstrations adaptively by learning a mapping between observations and actions.
1 code implementation • EMNLP 2021 • Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song
We cast a suite of information extraction tasks into a text-to-triple translation framework.
Ranked #1 on Open Information Extraction on OIE2016 (using extra training data)
no code implementations • 17 Sep 2021 • Meixiang Quan, Zheng Chai, Xiao Liu
Lines provide significantly richer geometric structural information about the environment than points, so they are widely used in recent Visual Odometry (VO) works.
1 code implementation • 26 Aug 2021 • Xiao Liu, Pedro Sanchez, Spyridon Thermos, Alison Q. O'Neil, Sotirios A. Tsaftaris
Disentangled representation learning has been proposed as an approach to learning general representations even in the absence of, or with limited, supervision.
1 code implementation • ICCV 2021 • Yuxiang Wei, Yupeng Shi, Xiao Liu, Zhilong Ji, Yuan Gao, Zhongqin Wu, WangMeng Zuo
It simply encourages the variation of output caused by perturbations on different latent dimensions to be orthogonal, and the Jacobian with respect to the input is calculated to represent this variation.
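The orthogonality idea described above can be sketched with finite differences: perturb each latent dimension, collect the resulting output-variation vectors, and penalize their pairwise inner products, which vanish exactly when the variations are orthogonal. The finite-difference scheme and function name are assumptions for illustration, not the paper's exact Jacobian computation:

```python
def orthogonality_penalty(f, z, eps=1e-4):
    """Perturb each latent dimension of z, approximate the output
    variation by finite differences, and sum the squared pairwise
    inner products of the variation vectors (zero iff orthogonal)."""
    base = f(z)
    variations = []
    for i in range(len(z)):
        zp = list(z)
        zp[i] += eps
        out = f(zp)
        variations.append([(o - b) / eps for o, b in zip(out, base)])
    penalty = 0.0
    for i in range(len(variations)):
        for j in range(i + 1, len(variations)):
            dot = sum(a * b for a, b in zip(variations[i], variations[j]))
            penalty += dot ** 2
    return penalty
```

For a map whose per-dimension effects are orthogonal (e.g. the identity), the penalty is zero; for a map that collapses both latent dimensions onto the same output direction, it is strictly positive.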
no code implementations • 10 Aug 2021 • Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, Xin Jiang
Code representation learning, which aims to encode the semantics of source code into distributed vectors, plays an important role in recent deep-learning-based models for code intelligence.
no code implementations • 10 Aug 2021 • Xiaopeng Bi, Ran Yan, Zheng Chai, Haotian Zhang, Xiao Liu
This report describes Megvii-3D team's approach towards SimLocMatch Challenge @ CVPR 2021 Image Matching Workshop.
no code implementations • 10 Aug 2021 • Xiaopeng Bi, Yu Chen, Xinyang Liu, Dehao Zhang, Ran Yan, Zheng Chai, Haotian Zhang, Xiao Liu
This report describes Megvii-3D team's approach towards CVPR 2021 Image Matching Workshop.
no code implementations • 5 Aug 2021 • Yuanhang Zhang, Susan Liang, Shuang Yang, Xiao Liu, Zhongqin Wu, Shiguang Shan, Xilin Chen
Our solution is a novel, unified framework that focuses on jointly modeling multiple types of contextual information: spatial context to indicate the position and scale of each candidate's face, relational context to capture the visual relationships among the candidates and contrast audio-visual affinities with each other, and temporal context to aggregate long-term information and smooth out local uncertainties.
no code implementations • 5 Aug 2021 • Xuri Ge, Fuhai Chen, Joemon M. Jose, Zhilong Ji, Zhongqin Wu, Xiao Liu
In this work, we propose to address the above issue from two aspects: (i) constructing intrinsic structure (along with relations) among the fragments of respective modalities, e.g., "dog $\to$ play $\to$ ball" in semantic structure for an image, and (ii) seeking explicit inter-modal structural and semantic correspondence between the visual and textual modalities.
no code implementations • 23 Jul 2021 • Lijun Zhang, Qizheng Yang, Xiao Liu, Hui Guan
One common sharing practice is to share the bottom layers of a deep neural network among domains while using separate top layers for each domain.
no code implementations • 20 Jul 2021 • Mingjie He, Jie Zhang, Shiguang Shan, Xiao Liu, Zhongqin Wu, Xilin Chen
Furthermore, by randomly dropping out several feature channels, our method can effectively simulate the occlusion of larger areas.
no code implementations • 14 Jul 2021 • Boyun Li, Yijie Lin, Xiao Liu, Peng Hu, Jiancheng Lv, Xi Peng
To generate plausible haze, we study two less-touched but challenging problems in hazy image rendering, namely, i) how to estimate the transmission map from a single image without auxiliary information, and ii) how to adaptively learn the airlight from exemplars, i.e., unpaired real hazy images.
1 code implementation • 5 Jul 2021 • Xin Cai, BoYu Chen, Jiabei Zeng, Jiajun Zhang, Yunjia Sun, Xiao Wang, Zhilong Ji, Xiao Liu, Xilin Chen, Shiguang Shan
This paper presents a method for gaze estimation according to face images.
1 code implementation • 4 Jul 2021 • Spyridon Thermos, Xiao Liu, Alison O'Neil, Sotirios A. Tsaftaris
Motivated by the ability to disentangle images into spatial anatomy (tensor) factors and accompanying imaging (vector) representations, we propose a framework termed "disentangled anatomy arithmetic", in which a generative model learns to combine anatomical factors of different input images such that when they are re-entangled with the desired imaging modality (e.g., MRI), plausible new cardiac images are created with the target characteristics.
no code implementations • 3 Jul 2021 • Xiao Liu, Rong pan
Boost-R constructs an ensemble of gradient boosted additive trees to estimate the cumulative intensity function of the recurrent event process, where a new tree is added to the ensemble by minimizing the regularized L2 distance between the observed and predicted cumulative intensity.
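The boosting recipe described (add a tree that reduces the L2 distance to the observed target) can be sketched with one-split stumps on 1D inputs; this toy version omits Boost-R's regularization and recurrent-event specifics:

```python
def fit_stump(t, residual):
    """Best single-split stump on 1D inputs, minimizing L2 error
    against the current residuals."""
    best_sse, best_stump = float("inf"), None
    for s in t:
        left = [r for x, r in zip(t, residual) if x <= s]
        right = [r for x, r in zip(t, residual) if x > s]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if sse < best_sse:
            best_sse, best_stump = sse, (s, lm, rm)
    return best_stump

def boost_predict(t, y, n_rounds=20, lr=0.5):
    """Gradient boosting under L2 loss: each round fits a stump to the
    residuals and adds a damped copy of it to the running prediction."""
    pred = [0.0] * len(t)
    for _ in range(n_rounds):
        s, lm, rm = fit_stump(t, [yi - pi for yi, pi in zip(y, pred)])
        pred = [p + lr * (lm if x <= s else rm) for p, x in zip(pred, t)]
    return pred
```

Each added stump moves the prediction toward the observed values, which is the same residual-fitting mechanism Boost-R applies to the cumulative intensity function.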
no code implementations • 2 Jul 2021 • Pengcheng Wang, Lingqiao Ji, Zhilong Ji, Yuan Gao, Xiao Liu
In this technical report, we briefly introduce the solution of our team "TAL-ai" for (semi-)supervised face detection under low-light conditions in the UG2+ Challenge at CVPR 2021.
2 code implementations • 24 Jun 2021 • Xiao Liu, Spyridon Thermos, Alison O'Neil, Sotirios A. Tsaftaris
We explicitly model the representations related to domain shifts.
no code implementations • 18 Jun 2021 • Baoming Yan, Lin Wang, Ke Gao, Bo Gao, Xiao Liu, Chao Ban, Jiang Yang, Xiaobo Li
Video affective understanding, which aims to predict the evoked expressions by the video content, is desired for video creation and recommendation.
1 code implementation • 17 Jun 2021 • Xiao Liu, Haoyun Hong, Xinghao Wang, Zeyi Chen, Evgeny Kharlamov, Yuxiao Dong, Jie Tang
We present SelfKG by leveraging this discovery to design a contrastive learning strategy across two KGs.
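A contrastive learning strategy of this kind is typically built around an InfoNCE-style objective. The sketch below is a generic version of such a loss, not SelfKG's exact formulation; the temperature and similarity inputs are illustrative:

```python
import math

def info_nce(sim_pos, sim_negs, tau=0.1):
    """InfoNCE-style contrastive loss: pull the positive (aligned) pair
    together and push negatives apart. sim_pos is the similarity of the
    positive pair; sim_negs are similarities to negative samples."""
    logits = [sim_pos / tau] + [s / tau for s in sim_negs]
    m = max(logits)  # log-sum-exp with max-shift for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - sim_pos / tau
```

Raising the positive similarity while the negatives stay fixed lowers the loss, which is the behavior a cross-KG alignment objective needs.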
no code implementations • 14 Jun 2021 • Xiao Liu, XiaoFei Si, Jiangtao Xie
Benefiting from the edge information and edge attention loss, the proposed EANet achieves 86.16% accuracy in the Short-video Face Parsing track of the 3rd Person in Context (PIC) Workshop and Challenge, ranking third place.
no code implementations • 14 Jun 2021 • Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan YAO, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
no code implementations • The ActivityNet Large-Scale Activity Recognition Challenge Workshop, CVPR 2021 • Yuanhang Zhang, Susan Liang, Shuang Yang, Xiao Liu, Zhongqin Wu, Shiguang Shan
This report presents a brief description of our method for the AVA Active Speaker Detection (ASD) task at ActivityNet Challenge 2021.
no code implementations • ICCV 2021 • Haotian Zhang, Yicheng Luo, Fangbo Qin, Yijia He, Xiao Liu
The line description ability of ELSD also outperforms the previous works on the line matching task.
Ranked #1 on Line Segment Detection on wireframe dataset
1 code implementation • 28 Apr 2021 • Manyu Zhu, Dongliang He, Xin Li, Chao Li, Fu Li, Xiao Liu, Errui Ding, Zhaoxiang Zhang
Inpainting arbitrary missing regions is challenging because learning valid features for various masked regions is nontrivial.
Ranked #4 on Image Inpainting on CelebA-HQ
1 code implementation • NAACL 2021 • Xiao Liu, Da Yin, Yansong Feng, Yuting Wu, Dongyan Zhao
Causal inference is the process of capturing cause-effect relationship among variables.
9 code implementations • ACL 2022 • Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang
On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25x the parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.
Ranked #4 on Language Modelling on WikiText-103 (using extra training data)
7 code implementations • 18 Mar 2021 • Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
Prompting a pretrained language model with natural language patterns has been proved effective for natural language understanding (NLU).
1 code implementation • 18 Mar 2021 • Yue Cao, Xiaohe Wu, Shuran Qi, Xiao Liu, Zhongqin Wu, WangMeng Zuo
To begin with, the pre-trained denoiser is used to generate the pseudo clean images for the test images.
1 code implementation • 4 Mar 2021 • Fanjin Zhang, Jie Tang, Xueyi Liu, Zhenyu Hou, Yuxiao Dong, Jing Zhang, Xiao Liu, Ruobing Xie, Kai Zhuang, Xu Zhang, Leyu Lin, Philip S. Yu
"Top Stories" is a novel friend-enhanced recommendation engine in WeChat, in which users can read articles based on preferences of both their own and their friends.
Graph Representation Learning Social and Information Networks
1 code implementation • 3 Mar 2021 • Xiao Liu, Da Yin, Jingnan Zheng, Xingjian Zhang, Peng Zhang, Hongxia Yang, Yuxiao Dong, Jie Tang
Academic knowledge services have substantially facilitated the development of the science enterprise by providing a plenitude of efficient research tools.
no code implementations • 24 Feb 2021 • Xuejun Li, Tianxiang Chen, Dong Yuan, Jia Xu, Xiao Liu
To achieve better Quality of Service (QoS), for instance, faster response time and lower energy consumption, computation offloading is widely used in the MEC environment.
Edge-computing Distributed, Parallel, and Cluster Computing C.2.4
1 code implementation • 20 Feb 2021 • Chenglin Pan, Kuan Yan, Xiao Liu, Yanjie Chen, Yanyan Luo, Xiaoming Li, Zhenguo Nie, Xinjun Liu
Artificial intelligence methods have been increasingly turning into a potentially powerful tool in the diagnosis and management of diseases.
1 code implementation • 16 Feb 2021 • Jintang Li, Kun Xu, Liang Chen, Zibin Zheng, Xiao Liu
Graph Neural Networks (GNNs) have recently shown to be powerful tools for representing and analyzing graph data.
no code implementations • 27 Jan 2021 • Zhong Yang, Mingzhe Chen, Xiao Liu, Yuanwei Liu, Yue Chen, Shuguang Cui, H. Vincent Poor
To this end, the fundamentals of this framework are first introduced.
no code implementations • 20 Jan 2021 • Xingyin Fu, Zheng Fang, Xizhen Xiao, Yijia He, Xiao Liu
In this paper, we propose an improved Signed Distance Function (SDF) for both 2D SLAM and pure localization to improve the accuracy of mapping and localization.
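A signed distance field over a 2D map is built from a point-to-obstacle distance primitive. The helper below computes the distance from a query point to an obstacle segment, the basic building block of such a field; it is an illustrative sketch, not the paper's improved SDF:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab, the primitive evaluated per
    grid cell when constructing a (truncated) distance field of a 2D
    map whose obstacles are line segments."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:  # degenerate segment: fall back to point distance
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment and clamp the parameter to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)
```

Evaluating this for every cell against the nearest obstacle yields the distance field that scan matching and localization then query.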
no code implementations • 5 Jan 2021 • Xiao Liu, Yuanwei Liu, Zhong Yang, Xinwei Yue, Chuan Wang, Yue Chen
A novel framework is proposed to integrate communication, control and computing (3C) into the fifth-generation and beyond (5GB) wireless networks for satisfying the ultra-reliable low-latency connectivity requirements of remote-e-Health systems.
no code implementations • 1 Jan 2021 • Xiao Liu, Heyan Huang, Yue Zhang
News-driven stock prediction investigates the correlation between news events and stock price movements.
no code implementations • 21 Dec 2020 • Jiaheng Xie, Xiao Liu
Although deep learning champions viewership prediction, it lacks interpretability, which is fundamental to increasing the adoption of predictive models and prescribing measurements to improve viewership.
no code implementations • 16 Dec 2020 • Lijun Zhang, Xiao Liu, Erik Learned-Miller, Hui Guan
When capturing images in low-light conditions, the images often suffer from low visibility, which not only degrades the visual aesthetics of images, but also significantly degenerates the performance of many computer vision algorithms.
no code implementations • 9 Dec 2020 • Yuanwei Liu, Xiao Liu, Xinyu Gao, Xidong Mu, Xiangwei Zhou, Octavia A. Dobre, H. Vincent Poor
Furthermore, dynamic trajectory design and resource allocation for both indoor and outdoor robots are provided to verify the performance of robotic communications in the context of typical robotic application scenarios.
Robotics Systems and Control Signal Processing Systems and Control
no code implementations • 7 Dec 2020 • Wanli Ni, Xiao Liu, Yuanwei Liu, Hui Tian, Yue Chen
This paper proposes a novel framework of resource allocation in intelligent reflecting surface (IRS) aided multi-cell non-orthogonal multiple access (NOMA) networks, where a sum-rate maximization problem is formulated.
no code implementations • 23 Nov 2020 • Ruikang Zhong, Xiao Liu, Yuanwei Liu, Yue Chen, Xianbin Wang
Our simulation results demonstrate that 1) With the aid of NOMA techniques, the communication reliability of IRs is effectively improved; 2) The radio map is qualified to be a virtual training environment, and its statistical channel state information improves training efficiency by about 30%; 3) The proposed DT-DPG algorithm is superior to the conventional deep deterministic policy gradient (DDPG) algorithm in terms of optimization performance, training time, and the ability to escape local optima.
2 code implementations • 22 Oct 2020 • Chenguang Wang, Xiao Liu, Dawn Song
This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision.
no code implementations • 20 Oct 2020 • Kerui Peng, Yana Safonova, Mikhail Shugay, Alice Popejoy, Oscar Rodriguez, Felix Breden, Petter Brodin, Amanda M. Burkhardt, Carlos Bustamante, Van-Mai Cao-Lormeau, Martin M. Corcoran, Darragh Duffy, Macarena Fuentes Guajardo, Ricardo Fujita, Victor Greiff, Vanessa D. Jonsson, Xiao Liu, Lluis Quintana-Murci, Maura Rossetti, Jianming Xie, Gur Yaari, Wei zhang, Malak S. Abedalthagafi, Khalid O. Adekoya, Rahaman A. Ahmed, Wei-Chiao Chang, Clive Gray, Yusuke Nakamura, William D. Lees, Purvesh Khatri, Houda Alachkar, Cathrine Scheepers, Corey T. Watson, Gunilla B. Karlsson Hedestam, Serghei Mangul
With the advent of high-throughput sequencing technologies, the fields of immunogenomics and adaptive immune receptor repertoire research are facing both opportunities and challenges.
no code implementations • 18 Oct 2020 • Ruikang Zhong, Xiao Liu, Yuanwei Liu, Yue Chen
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs), while non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
no code implementations • 18 Oct 2020 • Ruikang Zhong, Xiao Liu, Yuanwei Liu, Yue Chen
Afterward, a mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
no code implementations • 16 Oct 2020 • Xiao Liu, Jiajie Zhang, Siting Li, Zuotong Wu, Yang Yu
We discover that pixel normalization causes object entanglement by in-painting the area occupied by ablated objects.
no code implementations • 6 Oct 2020 • Xiao Liu, Yuanwei Liu, Yue Chen
The energy consumption minimizing problem is formulated by jointly designing the movement of the UAV, phase shifts of the RIS, power allocation policy from the UAV to MUs, as well as determining the dynamic decoding order.
2 code implementations • ECCV 2020 • Siyu Huang, Fangbo Qin, Pengfei Xiong, Ning Ding, Yijia He, Xiao Liu
To realize one-step detection with a faster and more compact model, we introduce the tri-points representation, converting the line segment detection to the end-to-end prediction of a root-point and two endpoints for each line segment.
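The tri-points representation is straightforward to write down: a root point (the midpoint of the segment) plus displacements to the two endpoints. A small round-trip sketch:

```python
def to_tri_points(p1, p2):
    """Convert a segment's two endpoints into tri-points form:
    a root point (the midpoint) plus displacement vectors to the
    two endpoints."""
    root = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    d1 = (p1[0] - root[0], p1[1] - root[1])
    d2 = (p2[0] - root[0], p2[1] - root[1])
    return root, d1, d2

def from_tri_points(root, d1, d2):
    """Recover the two endpoints from the tri-points representation."""
    return ((root[0] + d1[0], root[1] + d1[1]),
            (root[0] + d2[0], root[1] + d2[1]))
```

Predicting the root point and the two displacements per segment is what lets the detector emit complete line segments in a single forward pass.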
Ranked #2 on Line Segment Detection on York Urban Dataset
4 code implementations • 27 Aug 2020 • Xiao Liu, Spyridon Thermos, Gabriele Valvano, Agisilaos Chartsias, Alison O'Neil, Sotirios A. Tsaftaris
In this paper, we conduct an empirical study to investigate the role of different biases in content-style disentanglement settings and unveil the relationship between the degree of disentanglement and task performance.
1 code implementation • 26 Aug 2020 • Xiao Liu, Spyridon Thermos, Agisilaos Chartsias, Alison O'Neil, Sotirios A. Tsaftaris
Robust cardiac image segmentation is still an open challenge due to the inability of the existing methods to achieve satisfactory performance on unseen data of different domains.
1 code implementation • 13 Aug 2020 • Qingkai Min, Libo Qin, Zhiyang Teng, Xiao Liu, Yue Zhang
Dialogue state modules are a useful component in a task-oriented dialogue system.
no code implementations • 7 Jul 2020 • Yuanwei Liu, Xiao Liu, Xidong Mu, Tianwei Hou, Jiaqi Xu, Marco Di Renzo, Naofal Al-Dhahir
In this context, we provide a comprehensive overview of the state-of-the-art on RISs, with focus on their operating principles, performance evaluation, beamforming design and resource management, applications of machine learning to RIS-enhanced wireless networks, as well as the integration of RISs with other emerging technologies.
no code implementations • 21 Jun 2020 • Wanli Ni, Xiao Liu, Yuanwei Liu, Hui Tian, Yue Chen
This paper proposes a novel framework of resource allocation in multi-cell intelligent reflecting surface (IRS) aided non-orthogonal multiple access (NOMA) networks, where an IRS is deployed to enhance the wireless service.
no code implementations • 15 Jun 2020 • Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang, Jie Tang
As an alternative, self-supervised learning attracts many researchers for its soaring performance on representation learning in the last several years.
no code implementations • 22 May 2020 • Wenjie Huang, Jing Jiang, Xiao Liu
In this paper, novel gradient-based online learning algorithms are developed to investigate an important environmental application: real-time river pollution source identification, which aims at estimating the released mass, location, and time of a river pollution source based on downstream sensor data monitoring the pollution concentration.
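The flavor of gradient-based online learning here can be shown with a one-parameter toy: model each arriving sensor reading as proportional to an unknown released mass and take one gradient step per reading. The linear sensor model is a hypothetical stand-in for the paper's full source-identification model:

```python
def online_estimate_mass(observations, lr=0.05, m0=0.0):
    """Online gradient descent sketch: each reading (k, y) is modeled
    as y ~ m * k for an unknown released mass m; one gradient step on
    the squared error is taken per reading as it arrives."""
    m = m0
    for k, y in observations:
        grad = 2.0 * k * (m * k - y)  # d/dm of (m*k - y)^2
        m -= lr * grad
    return m
```

Because the estimate is refined one observation at a time, the procedure runs in real time as sensor data stream in, which is the operational requirement the paper targets.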
no code implementations • 21 May 2020 • Fanglin Chen, Xiao Liu, Davide Proserpio, Isamar Troncoso, Feiyu Xiong
We show that, compared with state-of-the-art models, our approach is faster, and can produce more accurate demand forecasts and price elasticities.
1 code implementation • ACL 2020 • Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Dongyan Zhao
This paper presents Neighborhood Matching Network (NMN), a novel entity alignment framework for tackling the structural heterogeneity challenge.
1 code implementation • NAACL (SocialNLP) 2021 • Zach Wood-Doughty, Paiheng Xu, Xiao Liu, Mark Dredze
We present a method to identify self-reports of race and ethnicity from Twitter profile descriptions.
1 code implementation • 30 Apr 2020 • Baichuan Huang, Hongwei Yi, Can Huang, Yijia He, Jingbin Liu, Xiao Liu
To improve the robustness and completeness of point cloud reconstruction, we propose a novel multi-metric loss function that combines pixel-wise and feature-wise loss functions to learn the inherent constraints from different perspectives of matching correspondences.
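A multi-metric loss of the kind described is, schematically, a weighted sum of an image-space term and a feature-space term; the L1/L2 choice and the weight below are illustrative assumptions:

```python
def multi_metric_loss(pred_pix, true_pix, pred_feat, true_feat, w_feat=0.5):
    """Weighted combination of a pixel-wise L1 term and a feature-wise
    L2 term, supervising reconstruction at both the image level and
    the feature level (weights are illustrative)."""
    pix = sum(abs(p - t) for p, t in zip(pred_pix, true_pix)) / len(pred_pix)
    feat = sum((p - t) ** 2 for p, t in zip(pred_feat, true_feat)) / len(pred_feat)
    return pix + w_feat * feat
```

The pixel term anchors low-level photometric agreement while the feature term enforces consistency under learned descriptors, which is the "different perspectives" intuition in the abstract.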
no code implementations • 21 Apr 2020 • Xiao Liu, Sotirios A. Tsaftaris
In the era of deep learning, aggregation of data from several sources is a common approach to ensuring data diversity.
1 code implementation • 21 Apr 2020 • Baichuan Huang, Hongwei Yi, Can Huang, Yijia He, Jingbin Liu, Xiao Liu
To improve the robustness and completeness of point cloud reconstruction, we propose a novel multi-metric loss function that combines pixel-wise and feature-wise loss functions to learn the inherent constraints from different perspectives of matching correspondences.
no code implementations • 21 Apr 2020 • Pengcheng Wang, ZiHao Wang, Zhilong Ji, Xiao Liu, Songfan Yang, Zhongqin Wu
This paper introduces our approach to the EmotioNet Challenge 2020.
no code implementations • 19 Apr 2020 • Yong Wang, Qi Liu, Hongyu Zu, Xiao Liu, Ruichao Xie, Feng Wang
Pixel-wise operations between polarimetric images are important for processing polarization information.
no code implementations • 16 Apr 2020 • Xin Li, Yijia He, Jinlong Lin, Xiao Liu
To improve the accuracy of 3D mesh generation and localization, we propose a tightly-coupled monocular VIO system, PLP-VIO, which exploits point features and line features as well as plane regularities.
no code implementations • 4 Apr 2020 • Xiao Liu, He-Yan Huang, Yue Zhang, Changsen Yuan
Thanks to the use of attention over news events, our model is also more explainable.