no code implementations • COLING 2022 • Alex X. Zhang, Xun Liang, Bo Wu, Xiangping Zheng, Sensen Zhang, Yuhui Guo, Jun Wang, Xinyao Liu
The human cognitive system demonstrates a remarkable ability to effortlessly learn novel knowledge from only a few trigger events based on prior knowledge, an ability known as insight learning.
no code implementations • Findings (EMNLP) 2021 • Chang Xu, Jun Wang, Francisco Guzmán, Benjamin Rubinstein, Trevor Cohn
NLP models are vulnerable to data poisoning attacks.
1 code implementation • EMNLP 2021 • Yingya Li, Jun Wang, Bei Yu
We also conducted a case study that applied this prediction model to retrieve specific health advice on COVID-19 treatments from LitCovid, a large COVID research literature portal, demonstrating the usefulness of retrieving health advice sentences as an advanced research literature navigation function for health researchers and the general public.
no code implementations • ACL 2022 • Ling.Yu Zhu, Zhengkun Zhang, Jun Wang, Hongbin Wang, Haiying Wu, Zhenglu Yang
Empathetic dialogue assembles emotion understanding, feeling projection, and appropriate response generation.
no code implementations • SemEval (NAACL) 2022 • Qizhi Lin, Changyu Hou, Xiaopeng Wang, Jun Wang, Yixuan Qiao, Peng Jiang, Xiandi Jiang, Benqi Wang, Qifeng Xiao
From pretrained contextual embedding to document-level embedding, the selection and construction of embedding have drawn more and more attention in the NER domain in recent research.
no code implementations • ACL 2022 • Jun Wang, Benjamin Rubinstein, Trevor Cohn
In this paper we describe a new source of bias prevalent in NMT systems, relating to translations of sentences containing person names.
no code implementations • CCL 2022 • Zekun Deng, Hao Yang, Jun Wang
The Shiji (《史记》, Records of the Grand Historian) and the Hanshu (《汉书》, Book of Han) possess enduring research value. Although studies of the similarities and differences between the two works are already fairly rich, they remain insufficient in comprehensiveness, completeness, scientific rigor, and objectivity. From a digital humanities perspective, this paper applies computational linguistics methods to a comparative study of the Shiji and Hanshu through multi-granularity, multi-angle analysis of characters, words, named entities, and paragraphs. First, we compare the distributions and characteristics of characters, words, and named entities in the two works, and through exhaustive enumeration distill their main similarities and differences in content, revealing important political, cultural, and ideological changes and continuities between the period before Emperor Wu of Han and the period from Emperor Wu to the fall of the Western Han. Second, we use a text-similarity algorithm that incorporates named entities as external features to automatically discover parallel passages between the Shiji and Hanshu, successfully identifying borrowed passages that previous researchers had not found by manual means, yielding a more complete and multidimensional understanding of the textual inheritance between the two works. Third, by computing the longest common subsequence between pairs of parallel passages, we automatically derive the differences between them, demonstrating at the macro-statistical level the stylistic differences between the Hanshu and the Shiji and further explicating their linguistic characteristics at the micro level, providing a new perspective on and inspiration for understanding the features of the parallel texts. Standing within the digital humanities perspective, this study re-examines and rediscovers Chinese classics handed down for millennia using advanced computational methods, and its methodology offers reference value for contemporary research on ancient texts.
no code implementations • 19 May 2024 • Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
Modern NLP models are often trained on public datasets drawn from diverse sources, rendering them vulnerable to data poisoning attacks.
no code implementations • 19 May 2024 • Jun Wang, Benedetta Tondi, Mauro Barni
Extensive efforts have been made to explore unique representations of generative models and use them to attribute a synthetic image to the model that produced it.
no code implementations • 13 May 2024 • Jun Wang, Yu Mao, Yufei Cui, Nan Guan, Chun Jason Xue
Immunohistochemistry (IHC) plays a crucial role in pathology as it detects the over-expression of protein in tissue samples.
no code implementations • 9 May 2024 • Tianfu Qi, Jun Wang, Zexue Zhao
For MSK demodulation based on the Viterbi algorithm, we derive lower and upper bounds on the BER.
no code implementations • 5 May 2024 • Jinmin Li, Tao Dai, Jingyun Zhang, Kang Liu, Jun Wang, Shaoming Wang, Shu-Tao Xia, rizen guo
Recently developed generative methods, including invertible rescaling network (IRN) based and generative adversarial network (GAN) based methods, have demonstrated exceptional performance in image rescaling.
no code implementations • 1 May 2024 • Yu Cui, Feng Liu, Pengbo Wang, Bohao Wang, Heng Tang, Yi Wan, Jun Wang, Jiawei Chen
Owing to their powerful semantic reasoning capabilities, Large Language Models (LLMs) have been effectively utilized as recommenders, achieving impressive performance.
no code implementations • 30 Apr 2024 • Xuanli He, Jun Wang, Qiongkai Xu, Pasquale Minervini, Pontus Stenetorp, Benjamin I. P. Rubinstein, Trevor Cohn
The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined: such attacks can be achieved by embedding malicious behaviors during training, which are then activated under specific conditions to trigger malicious outputs.
no code implementations • 30 Apr 2024 • Jiabao Wang, Yang Wu, Jun Wang, Ni Chen
The multi-plane phase retrieval method provides a budget-friendly and effective way to perform phase imaging, yet it often encounters alignment challenges due to shifts along the optical axis in experiments.
no code implementations • 30 Apr 2024 • Wentao Lei, Li Liu, Jun Wang
Therefore, we propose a novel Gloss-prompted Diffusion-based CS Gesture generation framework (called GlossDiff).
1 code implementation • 26 Apr 2024 • Kaichen Xu, Yueyang Ding, Suyang Hou, Weiqiang Zhan, Nisang Chen, Jun Wang, Xiaobo Sun
In response, we propose ACSleuth, a novel, reconstruction deviation-guided generative framework that integrates the detection, domain adaptation, and fine-grained annotating of anomalous cells into a methodologically cohesive workflow.
1 code implementation • 24 Apr 2024 • Junfeng Tian, Rui Wang, Cong Li, Yudong Zhou, Jun Liu, Jun Wang
This report details the development and key achievements of our latest language model designed for custom large language models.
1 code implementation • 18 Apr 2024 • Yongcheng Zeng, Guoqing Liu, Weiyu Ma, Ning Yang, Haifeng Zhang, Jun Wang
Fine-tuning pre-trained Large Language Models (LLMs) is essential to align them with human values and intentions.
1 code implementation • ICCV 2023 • Renrong Shao, Wei zhang, Jianhua Yin, Jun Wang
Our approach utilizes an adversarial distillation framework with attention generator, mixed high-order attention distillation, and semantic feature contrast learning.
Data-free Knowledge Distillation Fine-Grained Visual Categorization +1
no code implementations • 18 Apr 2024 • WenHao Zhang, Jun Wang, Yong Luo, Lei Yu, Wei Yu, Zheng He
Then we design a spatio-temporal fusion module based on temporal granularity alignment, where the global spatial features extracted from event frames, together with the local relative spatial and temporal features contained in voxel graph list are effectively aligned and integrated.
no code implementations • 17 Apr 2024 • Jun Wang, Yufei Cui, Yu Mao, Nan Guan, Chun Jason Xue
Our study analyzes the impact of pre-processing parameters on inference and training across single- and multiple-domain datasets.
no code implementations • 17 Apr 2024 • Qiyu Hou, Jun Wang, Meixuan Qiao, Lujun Tian
By leveraging the actual structure and content of tables from Chinese financial announcements, we have developed the first extensive table annotation dataset in this domain.
no code implementations • 9 Apr 2024 • ZhiHao Lin, Wei Ma, Tao Lin, Yaowen Zheng, Jingquan Ge, Jun Wang, Jacques Klein, Tegawende Bissyande, Yang Liu, Li Li
We introduce a governance framework centered on federated learning (FL), designed to foster the joint development and maintenance of open-source AI code models while safeguarding data privacy and security.
no code implementations • 7 Apr 2024 • Haifeng Wang, Hao Xu, Jun Wang, Jian Zhou, Ke Deng
Recognizing various surgical tools, actions and phases from surgery videos is an important problem in computer vision with exciting clinical applications.
1 code implementation • 4 Apr 2024 • Sichen Chen, Yingyi Zhang, Siming Huang, Ran Yi, Ke Fan, Ruixin Zhang, Peixian Chen, Jun Wang, Shouhong Ding, Lizhuang Ma
To mitigate the problem of under-fitting, we design a transformer module named Multi-Cycled Transformer (MCT), based on multiple cycled forward passes, to more fully exploit the potential of small model parameters.
no code implementations • 3 Apr 2024 • Jun Wang, Qiongkai Xu, Xuanli He, Benjamin I. P. Rubinstein, Trevor Cohn
Our aim is to bring attention to these vulnerabilities within MNMT systems with the hope of encouraging the community to address security concerns in machine translation, especially in the context of low-resource languages.
no code implementations • 1 Apr 2024 • Zuyu Xu, Kang Shen, Pengnian Cai, Tao Yang, Yuanming Hu, Shixian Chen, Yunlai Zhu, Zuheng Wu, Yuehua Dai, Jun Wang, Fei Yang
The recent emergence of the hybrid quantum-classical neural network (HQCNN) architecture has garnered considerable attention due to the potential advantages associated with integrating quantum principles to enhance various facets of machine learning algorithms and computations.
no code implementations • 26 Mar 2024 • Youpeng Zhao, Di wu, Jun Wang
In a single GPU-CPU system, we demonstrate that under varying workloads, ALISA improves the throughput of baseline systems such as FlexGen and vLLM by up to 3x and 1.9x, respectively.
no code implementations • 22 Mar 2024 • Xuemei Tang, Zekun Deng, Qi Su, Hao Yang, Jun Wang
Additionally, we have evaluated the capabilities of Large Language Models (LLMs) in the context of tasks related to ancient Chinese history.
2 code implementations • 19 Mar 2024 • Yuxi Mi, Zhizhou Zhong, Yuge Huang, Jiazhen Ji, Jianqing Xu, Jun Wang, Shaoming Wang, Shouhong Ding, Shuigeng Zhou
Recognizable identity features within the image are encouraged by co-training a recognition model on its high-dimensional feature representation.
no code implementations • 14 Mar 2024 • Xihan Li, Xing Li, Lei Chen, Xing Zhang, Mingxuan Yuan, Jun Wang
Then, can circuits also be mastered by a sufficiently large "circuit model", which can conquer electronic design tasks by simply predicting the next logic gate?
no code implementations • 14 Mar 2024 • Qirui Mi, Zhiyu Zhao, Siyu Xia, Yan Song, Jun Wang, Haifeng Zhang
Effective macroeconomic policies play a crucial role in promoting economic growth and social stability.
no code implementations • 13 Mar 2024 • Ben Athiwaratkun, Sujan Kumar Gonugondla, Sanjay Krishna Gouda, Haifeng Qian, Hantian Ding, Qing Sun, Jun Wang, Jiacheng Guo, Liangfu Chen, Parminder Bhatia, Ramesh Nallapati, Sudipta Sengupta, Bing Xiang
In our study, we present bifurcated attention, a method developed for language model inference in single-context batch sampling settings.
1 code implementation • 11 Mar 2024 • Siyu Duan, Jun Wang, Qi Su
Cultural heritage serves as the enduring record of human thought and history.
no code implementations • 11 Mar 2024 • Chaochao Chen, Yizhao Zhang, Yuyuan Li, Dan Meng, Jun Wang, Xiaoli Zheng, Jianwei Yin
The first component is distinguishability loss, where we design a distribution-based measurement to make attribute labels indistinguishable from attackers.
no code implementations • 9 Mar 2024 • Jingyun Xue, Tao Wang, Jun Wang, Kaihao Zhang, Wenhan Luo, Wenqi Ren, Zikun Liu, Hyunhee Park, Xiaochun Cao
Specifically, we utilize sparse self-attention to filter out redundant information and noise, directing the model's attention to focus on the features more relevant to the degraded regions in need of reconstruction.
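The sparse self-attention idea above can be sketched as top-k attention: each query keeps only its k strongest keys and drops the rest as redundant or noisy. The following numpy snippet is a minimal illustration under assumed shapes, not the paper's actual module:

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k):
    """Toy top-k sparse self-attention: each query attends only to its k
    highest-scoring keys and ignores the rest as redundant or noisy."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n, n) attention logits
    # Threshold each row at its k-th largest score; mask everything below it.
    kth = np.partition(scores, -k, axis=-1)[:, -k:].min(axis=-1, keepdims=True)
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = topk_sparse_attention(Q, K, V, k=2)  # each row of w has 2 nonzeros
```

Masked positions receive zero weight, so attention is redistributed over only the retained, most relevant keys.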
no code implementations • 8 Mar 2024 • Jun Wang, Lixing Zhu, Abhir Bhalerao, Yulan He
Radiology report generation (RRG) methods often lack sufficient medical knowledge to produce clinically accurate reports.
no code implementations • 8 Mar 2024 • Jingxiao Chen, Ziqin Gong, Minghuan Liu, Jun Wang, Yong Yu, Weinan Zhang
To overcome this problem and to have an effective solution against hard constraints, we proposed a novel learning-based method that uses looking-ahead information as the feature to improve the legality of TSP with Time Windows (TSPTW) solutions.
no code implementations • 5 Mar 2024 • Hanlei Jin, Yang Zhang, Dan Meng, Jun Wang, Jinghua Tan
Automatic Text Summarization (ATS), utilizing Natural Language Processing (NLP) algorithms, aims to create concise and accurate summaries, thereby significantly reducing the human effort required in processing large volumes of text.
no code implementations • 28 Feb 2024 • Yang Cao, Shuo Shang, Jun Wang, Wei zhang
This paper explores providing explainability for session-based recommendation (SR) by path reasoning.
no code implementations • 28 Feb 2024 • Youpeng Zhao, Ming Lin, Huadong Tang, Qiang Wu, Jun Wang
Generative Large Language Models (LLMs) stand as a revolutionary advancement in the modern era of artificial intelligence (AI).
1 code implementation • 27 Feb 2024 • Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, Jun Wang
In the development stage, DS-Agent follows the CBR framework to structure an automatic iteration pipeline, which can flexibly capitalize on the expert knowledge from Kaggle, and facilitate consistent performance improvement through the feedback mechanism.
no code implementations • 26 Feb 2024 • Chengzhe Piao, Taiyu Zhu, Stephanie E Baldeweg, Paul Taylor, Pantelis Georgiou, Jiahao Sun, Jun Wang, Kezhi Li
Accurate prediction of future blood glucose (BG) levels can effectively improve BG management for people living with diabetes, thereby reducing complications and improving quality of life.
no code implementations • 23 Feb 2024 • Jun Wang, Guocheng He, Yiannis Kantaros
Several recent works have addressed similar planning problems by leveraging pre-trained Large Language Models (LLMs) to design effective multi-robot plans.
no code implementations • 22 Feb 2024 • Xuemei Tang, Jun Wang, Qi Su
Recently, large language models (LLMs) have been successful in relation extraction (RE) tasks, especially in few-shot learning.
no code implementations • 22 Feb 2024 • Jun Wang, Yuzhe Qin, Kaiming Kuang, Yigit Korkmaz, Akhilan Gurumoorthy, Hao Su, Xiaolong Wang
We introduce CyberDemo, a novel approach to robotic imitation learning that leverages simulated human demonstrations for real-world tasks.
no code implementations • 20 Feb 2024 • Adam X. Yang, Maxime Robeyns, Thomas Coste, Jun Wang, Haitham Bou-Ammar, Laurence Aitchison
To ensure that large language model (LLM) responses are helpful and non-toxic, we usually fine-tune a reward model on human preference data.
no code implementations • 15 Feb 2024 • Min Zhang, Sato Takumi, Jack Zhang, Jun Wang
Large Language Models (LLMs) excel in generating personalized content and facilitating interactive dialogues, showcasing their remarkable aptitude for a myriad of applications.
no code implementations • 13 Feb 2024 • Zhaoan Wang, Shaoping Xiao, Jun Wang, Ashwin Parab, Shivam Patel
This study examines how artificial intelligence (AI), especially Reinforcement Learning (RL), can be used in farming to boost crop yields, fine-tune nitrogen use and watering, and reduce nitrate runoff and greenhouse gases, focusing on Nitrous Oxide (N$_2$O) emissions from soil.
no code implementations • 11 Feb 2024 • Xidong Feng, Ziyu Wan, Mengyue Yang, Ziyan Wang, Girish A. Koushik, Yali Du, Ying Wen, Jun Wang
Reinforcement Learning (RL) has shown remarkable abilities in learning policies for decision-making tasks.
1 code implementation • 9 Feb 2024 • Muning Wen, Cheng Deng, Jun Wang, Weinan Zhang, Ying Wen
At the heart of ETPO is our novel per-token soft Bellman update, designed to harmonize the RL process with the principles of language modeling.
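A per-token soft Bellman update can be illustrated with a log-sum-exp soft value over next-token Q-values; the function name, shapes, and hyperparameters below are illustrative assumptions, not ETPO's implementation:

```python
import numpy as np

def soft_bellman_backup(q_next, reward, gamma=0.99, beta=1.0):
    """One soft Bellman backup: the soft value of the next state is a
    temperature-weighted log-sum-exp over its per-token Q-values, which
    keeps the update in the same log-probability units as language
    modeling. Shapes and hyperparameters are illustrative."""
    v_next = beta * np.log(np.exp(q_next / beta).sum(axis=-1))
    return reward + gamma * v_next

q_next = np.array([[1.0, 2.0, 3.0]])  # Q-values over a toy 3-token vocabulary
target = soft_bellman_backup(q_next, reward=0.5, gamma=1.0, beta=1.0)
```

As beta shrinks toward zero, the log-sum-exp approaches a hard max and the update recovers the standard Bellman backup.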
no code implementations • 9 Feb 2024 • Tianfu Qi, Jun Wang
In the first part, we propose a closed-form heavy-tailed multivariate probability density function (PDF) to model the bursty mixed noise.
no code implementations • 9 Feb 2024 • Cong Xu, Zhangchi Zhu, Jun Wang, Jianyong Wang, Wei zhang
Large language models (LLMs) have gained much attention in the recommendation community; some studies have observed that LLMs, fine-tuned by the cross-entropy loss with a full softmax, could achieve state-of-the-art performance already.
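The training signal credited here, cross-entropy with a full softmax over the whole item vocabulary, can be written out in a few lines. This dense numpy version is a toy sketch; at real vocabulary sizes systems use sampled or fused kernels:

```python
import numpy as np

def full_softmax_cross_entropy(logits, target):
    """Cross-entropy over a full softmax of the whole item vocabulary
    (dense toy version). Computed in log-space for numerical stability."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

loss = full_softmax_cross_entropy(np.array([2.0, 1.0, 0.1]), target=0)
```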
no code implementations • 8 Feb 2024 • Jun Wang, Haoxuan Li, Chi Zhang, Dongxu Liang, Enyun Yu, Wenwu Ou, Wenjia Wang
Recommender systems are designed to learn user preferences from observed feedback and comprise many fundamental tasks, such as rating prediction and post-click conversion rate (pCVR) prediction.
3 code implementations • 6 Feb 2024 • Jun Wang, Wenjie Du, Wei Cao, Keli Zhang, Wenjia Wang, Yuxuan Liang, Qingsong Wen
In this paper, we conduct a comprehensive survey on the recently proposed deep learning imputation methods.
no code implementations • 23 Jan 2024 • Jiarui Jin, Zexue He, Mengyue Yang, Weinan Zhang, Yong Yu, Jun Wang, Julian McAuley
Subsequently, we minimize the mutual information between the observation estimation and the relevance estimation conditioned on the input features.
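Mutual information minimization can be grounded with a small discrete example; the paper would minimize a neural conditional estimate, so the tabular function below is only an illustrative stand-in:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(A; B) of a discrete joint distribution.
    A tabular stand-in for the neural (conditional) MI estimator such a
    method would use in practice; variables here are illustrative."""
    joint = joint / joint.sum()
    pa = joint.sum(axis=1, keepdims=True)   # marginal of A
    pb = joint.sum(axis=0, keepdims=True)   # marginal of B
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (pa @ pb)[mask])).sum())

independent = np.outer([0.5, 0.5], [0.5, 0.5])  # MI is 0 when independent
mi = mutual_information(independent)
```

Driving this quantity to zero makes the two estimates carry no information about each other given the input features, which is the decoupling the sentence above describes.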
no code implementations • 22 Jan 2024 • Chao Song, Zhihao Ye, Qiqiang Lin, Qiuying Peng, Jun Wang
In practice, there are two prevailing ways in which the adaptation can be achieved: (i) Multiple Independent Models: pre-trained LLMs are fine-tuned multiple times, independently, using the corresponding training samples from each task.
no code implementations • 18 Jan 2024 • Jun Wang, Chengfeng Zhou, Zhaoyan Ming, Lina Wei, Xudong Jiang, Dahong Qian
One of the fundamental challenges in microscopy (MS) image analysis is instance segmentation (IS), particularly when segmenting cluster regions where multiple objects of varying sizes and shapes may be connected or even overlapped in arbitrary orientations.
1 code implementation • 17 Jan 2024 • Meng Fang, Shilong Deng, Yudi Zhang, Zijing Shi, Ling Chen, Mykola Pechenizkiy, Jun Wang
A wide range of real-world applications is characterized by their symbolic nature, necessitating a strong capability for symbolic reasoning.
no code implementations • 16 Jan 2024 • Huafeng Qin, Yiquan Wu, Mounim A. El-Yacoubi, Jun Wang, Guangxiang Yang
To overcome this problem, in this paper we propose an adversarial masking contrastive learning (AMCL) approach that generates challenging samples to train a more robust contrastive learning model for the downstream palm-vein recognition task, by alternately optimizing the encoder in the contrastive learning model and a set of latent variables.
no code implementations • 10 Jan 2024 • Zekun Deng, Hao Yang, Jun Wang
Some argue that the essence of humanity, such as creativity and sentiment, can never be mimicked by machines.
no code implementations • 4 Jan 2024 • Wenqi Zhang, Yongliang Shen, Linjuan Wu, Qiuying Peng, Jun Wang, Yueting Zhuang, Weiming Lu
Experiments conducted on a series of reasoning and translation tasks with different LLMs serve to underscore the effectiveness and generality of our strategy.
no code implementations • 2 Jan 2024 • Zhaoan Wang, Shaoping Xiao, Junchao Li, Jun Wang
However, our study illuminates the need for agent retraining to acquire new optimal policies under extreme weather events.
no code implementations • 30 Dec 2023 • Jun Wang, Hao Ruan, Mingjie Wang, Chuanghui Zhang, Huachun Li, Jun Zhou
Over the past decade, visual gaze estimation has garnered increasing attention within the research community, owing to its wide-ranging application scenarios.
no code implementations • 27 Dec 2023 • Wei Huang, Jun Wang, Xiaoping Li, Qihang Peng
Orthogonal frequency division multiplexing (OFDM) is a widely adopted wireless communication technique but is sensitive to the carrier frequency offset (CFO).
1 code implementation • 22 Dec 2023 • Long Shi, Lei Cao, Jun Wang, Badong Chen
Specifically, we stack the data matrices from various views into the block-diagonal locations of the augmented matrix to exploit the complementary information.
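The block-diagonal augmentation described can be sketched directly; the view sizes below are made up for illustration:

```python
import numpy as np

def block_diagonal_augment(views):
    """Place each per-view data matrix into the block-diagonal slots of
    one augmented matrix, zeros elsewhere."""
    rows = sum(X.shape[0] for X in views)
    cols = sum(X.shape[1] for X in views)
    A = np.zeros((rows, cols))
    r = c = 0
    for X in views:
        A[r:r + X.shape[0], c:c + X.shape[1]] = X  # next diagonal block
        r += X.shape[0]
        c += X.shape[1]
    return A

views = [np.ones((2, 3)), 2 * np.ones((4, 2))]
A = block_diagonal_augment(views)  # shape (6, 5), views on the diagonal
```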
no code implementations • 22 Dec 2023 • Filippos Christianos, Georgios Papoudakis, Matthieu Zimmer, Thomas Coste, Zhihao Wu, Jingxuan Chen, Khyati Khandelwal, James Doran, Xidong Feng, Jiacheng Liu, Zheng Xiong, Yicheng Luo, Jianye Hao, Kun Shao, Haitham Bou-Ammar, Jun Wang
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
no code implementations • 21 Dec 2023 • Jie Han, Yixiong Zou, Haozhao Wang, Jun Wang, Wei Liu, Yao Wu, Tao Zhang, Ruixuan Li
Therefore, current works first train a model on source domains with sufficient labeled data, and then transfer the model to target domains where labeled data is scarce.
no code implementations • 20 Dec 2023 • Bo Yang, Hong Peng, Xiaohui Luo, Jun Wang
Downsampling in deep networks may lead to loss of information. To compensate for detail and edge information, and to direct convolutional neural networks' attention toward the lesion region, we propose a multi-stage attention architecture based on NSNP neurons with autapses.
no code implementations • 20 Dec 2023 • Bo Yang, Hong Peng, Chenggang Guo, Xiaohui Luo, Jun Wang, Xianzhong Long
Prompt treatment for melanoma is crucial.
1 code implementation • 19 Dec 2023 • Weiyu Ma, Qirui Mi, Xue Yan, Yuqiao Wu, Runji Lin, Haifeng Zhang, Jun Wang
StarCraft II is a challenging benchmark for AI agents due to the necessity of both precise micro level operations and strategic macro awareness.
no code implementations • 19 Dec 2023 • Yuang Liu, Jing Wang, Qiang Zhou, Fan Wang, Jun Wang, Wei zhang
Numerous self-supervised learning paradigms, such as contrastive learning and masked image modeling, have been proposed to acquire powerful and general representations from unlabeled data.
no code implementations • 18 Dec 2023 • Hanyu Li, Wenhan Huang, Zhijian Duan, David Henry Mguni, Kun Shao, Jun Wang, Xiaotie Deng
This paper reviews various algorithms computing the Nash equilibrium and its approximation solutions in finite normal-form games from both theoretical and empirical perspectives.
no code implementations • 18 Dec 2023 • Rohan Mitta, Hosein Hasanbeig, Jun Wang, Daniel Kroening, Yiannis Kantaros, Alessandro Abate
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL), such that the safety constraint violations are bounded at any point during learning.
no code implementations • 12 Dec 2023 • Ruijia Chang, Suncheng Xiang, Chengyu Zhou, Kui Su, Dahong Qian, Jun Wang
Chromosome recognition is an essential task in karyotyping, which plays a vital role in birth defect diagnosis and biomedical research.
no code implementations • 9 Dec 2023 • Jiaxuan Liang, Jun Wang, Guoxian Yu, Shuyin Xia, Guoyin Wang
Unveiling, modeling, and comprehending the causal mechanisms underpinning natural phenomena stand as fundamental endeavors across myriad scientific disciplines.
no code implementations • 9 Dec 2023 • Dezhi Yang, Xintong He, Jun Wang, Guoxian Yu, Carlotta Domeniconi, Jinglin Zhang
We design a global optimization formula to naturally aggregate the causal graphs from client data and constrain the acyclicity of the global graph without exposing local data.
no code implementations • 9 Dec 2023 • Cong Su, Guoxian Yu, Jun Wang, Hui Li, Qingzhong Li, Han Yu
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data without compromising privacy.
no code implementations • 7 Dec 2023 • Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, Yuankai Zhang, Cheng Wang, Ruixuan Li
Rationalization empowers deep learning models with self-explaining capabilities through a cooperative game, where a generator selects a semantically consistent subset of the input as a rationale, and a subsequent predictor makes predictions based on the selected rationale.
no code implementations • 28 Nov 2023 • Jun Wang, Hosein Hasanbeig, Kaiyuan Tan, Zihe Sun, Yiannis Kantaros
We consider robots with unknown stochastic dynamics operating in environments with unknown geometric structure.
1 code implementation • 28 Nov 2023 • Dayu Hu, Zhibin Dong, Ke Liang, Jun Wang, Siwei Wang, Xinwang Liu
To this end, we introduce scUNC, an innovative multi-view clustering approach tailored for single-cell data, which seamlessly integrates information from different views without the need for a predefined number of clusters.
no code implementations • 17 Nov 2023 • Jun Wang, Haojun Chen, Zihe Sun, Yiannis Kantaros
To the best of our knowledge, this is the first work that designs verified temporal compositions of NN controllers for unknown and stochastic systems.
no code implementations • 27 Oct 2023 • Xue Yan, Yan Song, Xinyu Cui, Filippos Christianos, Haifeng Zhang, David Henry Mguni, Jun Wang
To that purpose, we offer a new leader-follower bilevel framework that is capable of learning to ask relevant questions (prompts) and subsequently undertaking reasoning to guide the learning of actions.
1 code implementation • 21 Oct 2023 • Mengyue Yang, Xinyu Cai, Furui Liu, Weinan Zhang, Jun Wang
Under the hypothesis that the intrinsic latent factors follow some causal generative models, we argue that by learning a causal representation, which is the minimal sufficient causes of the whole system, we can improve the robustness and generalization performance of machine learning models.
no code implementations • 20 Oct 2023 • Rasul Tutunov, Antoine Grosnit, Juliusz Ziomek, Jun Wang, Haitham Bou-Ammar
This paper delves into the capabilities of large language models (LLMs), specifically focusing on advancing the theoretical comprehension of chain-of-thought prompting.
1 code implementation • 17 Oct 2023 • Zongyi Li, Hongbing Lyu, Jun Wang
One of the key designs of U-Net is the use of skip connections between the encoder and decoder, which helps to recover detailed information after upsampling.
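A skip connection in this sense is a channel-wise concatenation of encoder features onto the upsampled decoder features; below is a minimal channels-first sketch with assumed shapes:

```python
import numpy as np

def skip_connect(decoder_feat, encoder_feat):
    """U-Net-style skip connection: concatenate high-resolution encoder
    features onto the upsampled decoder features along the channel axis,
    restoring detail lost to downsampling (channels-first, no batch
    dimension; shapes are illustrative)."""
    assert decoder_feat.shape[1:] == encoder_feat.shape[1:]  # same H and W
    return np.concatenate([decoder_feat, encoder_feat], axis=0)

dec = np.zeros((64, 32, 32))    # upsampled decoder features
enc = np.ones((64, 32, 32))     # matching encoder features
fused = skip_connect(dec, enc)  # (128, 32, 32)
```

The convolution that follows the concatenation is what actually mixes the recovered detail into the decoder path.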
1 code implementation • 11 Oct 2023 • Zihan Zhang, Meng Fang, Ling Chen, Mohammad-Reza Namazi-Rad, Jun Wang
Although large language models (LLMs) are impressive in solving various tasks, they can quickly be outdated after deployment.
1 code implementation • 11 Oct 2023 • Lang Qin, Yao Zhang, Hongru Liang, Jun Wang, Zhenglu Yang
Accurate knowledge selection is critical in knowledge-grounded dialogue systems.
1 code implementation • 8 Oct 2023 • Hanjing Wang, Man-Kit Sit, Congjie He, Ying Wen, Weinan Zhang, Jun Wang, Yaodong Yang, Luo Mai
This paper introduces a distributed, GPU-centric experience replay system, GEAR, designed to perform scalable reinforcement learning (RL) with large sequence models (such as transformers).
no code implementations • 6 Oct 2023 • Yuyuan Li, Chaochao Chen, Xiaolin Zheng, Yizhao Zhang, Zhongxuan Han, Dan Meng, Jun Wang
To address the PoT-AU problem in recommender systems, we design a two-component loss function that consists of i) distinguishability loss: making attribute labels indistinguishable from attackers, and ii) regularization loss: preventing drastic changes in the model that result in a negative impact on recommendation performance.
no code implementations • 5 Oct 2023 • Jinting Wang, Li Liu, Jun Wang, Hei Victor Cheng
To overcome this challenge, we introduce the concept of residuals by integrating a statistical face prior to the diffusion process.
1 code implementation • 29 Sep 2023 • Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, Jun Wang
Empirical results across reasoning, planning, alignment, and decision-making tasks show that TS-LLM outperforms existing approaches and can handle trees with a depth of 64.
no code implementations • 24 Sep 2023 • Cong Xu, Jun Wang, Jianyong Wang, Wei zhang
Embedding plays a critical role in modern recommender systems because they are virtual representations of real-world entities and the foundation for subsequent decision models.
1 code implementation • NeurIPS 2023 • Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, Yuankai Zhang, Yang Qiu
Instead of attempting to rectify the issues of the MMI criterion, we propose a novel criterion to uncover the causal rationale, termed the Minimum Conditional Dependence (MCD) criterion, which is grounded on our finding that the non-causal features and the target label are \emph{d-separated} by the causal rationale.
1 code implementation • NeurIPS 2023 • Mengyue Yang, Zhen Fang, Yonggang Zhang, Yali Du, Furui Liu, Jean-Francois Ton, Jianhong Wang, Jun Wang
To capture the information of sufficient and necessary causes, we employ a classical concept, the probability of sufficient and necessary causes (PNS), which indicates the probability that one event is the necessary and sufficient cause of another.
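For context, PNS, defined as P(Y_{X=1}=1, Y_{X=0}=0), admits classical tight bounds in terms of interventional probabilities alone. The sketch below computes those bounds numerically; it is background illustration, not the paper's learning objective:

```python
def pns_bounds(p_y_do_x1, p_y_do_x0):
    """Classical tight bounds on the probability of necessity and
    sufficiency, PNS = P(Y_{X=1}=1, Y_{X=0}=0), from interventional
    probabilities alone; under monotonicity the lower bound is exact.
    (A numeric sketch, not the paper's estimator.)"""
    lower = max(0.0, p_y_do_x1 - p_y_do_x0)
    upper = min(p_y_do_x1, 1.0 - p_y_do_x0)
    return lower, upper

lo, hi = pns_bounds(p_y_do_x1=0.9, p_y_do_x0=0.2)  # bounds on PNS
```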
no code implementations • 21 Sep 2023 • Yidong Liu, FuKai Shang, Fang Wang, Rui Xu, Jun Wang, Wei Li, Yao Li, Conghui He
With the advancement of deep learning technologies, general-purpose large models such as GPT-4 have demonstrated exceptional capabilities across various domains.
no code implementations • 18 Sep 2023 • Jun Wang, Jiaming Tong, Kaiyuan Tan, Yevgeniy Vorobeychik, Yiannis Kantaros
To formally define the overarching mission, we leverage Linear Temporal Logic (LTL) defined over atomic predicates modeling these NL-based sub-tasks.
no code implementations • 8 Sep 2023 • Yang Li, Cheng Yu, Guangzhi Sun, Weiqin Zu, Zheng Tian, Ying Wen, Wei Pan, Chao Zhang, Jun Wang, Yang Yang, Fanglei Sun
Experimental results on the LibriTTS dataset demonstrate that our proposed models significantly enhance speech synthesis and editing, producing more natural and expressive speech.
no code implementations • 4 Sep 2023 • Zhongxuan Han, Chaochao Chen, Xiaolin Zheng, Weiming Liu, Jun Wang, Wenjie Cheng, Yuyuan Li
By combining the fairness loss with the original backbone model loss, we address the UOF issue and maintain the overall recommendation performance simultaneously.
no code implementations • 30 Aug 2023 • Jun Wang, Lixing Zhu, Abhir Bhalerao, Yulan He
Radiology report generation aims to automatically provide clinically meaningful descriptions of radiology images such as MRI and X-ray.
no code implementations • 20 Aug 2023 • Yu Luo, Lina Pu, Jun Wang, Isaac Howard
The experimental results indicate that an active RF-SN embedded in concrete at a depth of 13.5 cm can be effectively powered by a 915 MHz mobile radio transmitter with an effective isotropic radiated power (EIRP) of 32.5 dBm.
no code implementations • 9 Aug 2023 • Yang Li, Kun Xiong, Yingping Zhang, Jiangcheng Zhu, Stephen Mcaleer, Wei Pan, Jun Wang, Zonghong Dai, Yaodong Yang
This paper presents an empirical exploration of non-transitivity in perfect-information games, specifically focusing on Xiangqi, a traditional Chinese board game comparable in game-tree complexity to chess and shogi.
no code implementations • 5 Aug 2023 • Jiarui Jin, Xianyu Chen, Weinan Zhang, Mengyue Yang, Yang Wang, Yali Du, Yong Yu, Jun Wang
Noticing that these ranking metrics do not consider the effects of the contextual dependence among the items in the list, we design a new family of simulation-based ranking metrics, where existing metrics can be regarded as special cases.
no code implementations • 3 Aug 2023 • Yuang Liu, Qiang Zhou, Jing Wang, Fan Wang, Jun Wang, Wei zhang
Vision transformers (ViT) usually extract features via forwarding all the tokens in the self-attention layers from top to toe.
no code implementations • 26 Jul 2023 • Xinting Liao, Weiming Liu, Chaochao Chen, Pengyang Zhou, Huabin Zhu, Yanchao Tan, Jun Wang, Yue Qi
Firstly, HPTI in the server constructs uniformly distributed and fixed class prototypes, and shares them with clients to match class statistics, further guiding consistent feature representation for local clients.
1 code implementation • 19 Jul 2023 • Lydia Abady, Jun Wang, Benedetta Tondi, Mauro Barni
In the second setting, the system verifies a claim about the architecture used to generate a synthetic image, utilizing one or multiple reference images generated by the claimed architecture.
no code implementations • 5 Jul 2023 • Saisai Ding, Jun Wang, Juncheng Li, Jun Shi
The PT is developed to reduce redundant instances in bags by integrating prototypical learning into the Transformer architecture.
no code implementations • 3 Jul 2023 • Shuo Chen, Ning Yang, Meng Zhang, Jun Wang
In this paper, we consider multiple users offloading tasks to heterogeneous edge servers in an MEC system.
no code implementations • 24 Jun 2023 • Muning Wen, Runji Lin, Hanjing Wang, Yaodong Yang, Ying Wen, Luo Mai, Jun Wang, Haifeng Zhang, Weinan Zhang
Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and Swin Transformer.
1 code implementation • NeurIPS 2023 • Xidong Feng, Yicheng Luo, Ziyan Wang, Hongrui Tang, Mengyue Yang, Kun Shao, David Mguni, Yali Du, Jun Wang
Thus, we propose ChessGPT, a GPT model bridging policy learning and language modeling by integrating data from these two sources in Chess games.
no code implementations • 12 Jun 2023 • Jian Wang, Liang Qiao, Shichong Zhou, Jin Zhou, Jun Wang, Juncheng Li, Shihui Ying, Cai Chang, Jun Shi
To address this issue, a novel Two-Stage Detection and Diagnosis Network (TSDDNet) is proposed based on weakly supervised learning to enhance diagnostic accuracy of the ultrasound-based CAD for breast cancers.
no code implementations • 8 Jun 2023 • Junjie Sheng, Wenhao Li, Bo Jin, Hongyuan Zha, Jun Wang, Xiangfeng Wang
Recent methods have shown that assigning reasoning ability to agents can mitigate RO algorithmically and empirically, but there has been a lack of theoretical understanding of RO, let alone designing provably RO-free methods.
1 code implementation • 7 Jun 2023 • Yusen Zhang, Jun Wang, Zhiguo Wang, Rui Zhang
However, existing CLSP models are separately proposed and evaluated on datasets of limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP on a diverse range of NLs and MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual semantic parsing featured with 22 natural languages and 8 meaning representations by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains.
1 code implementation • 5 Jun 2023 • Lei Wang, Jingsen Zhang, Hao Yang, ZhiYuan Chen, Jiakai Tang, Zeyu Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, Jun Xu, Zhicheng Dou, Jun Wang, Ji-Rong Wen
Simulating high quality user behavior data has always been a fundamental problem in human-centered applications, where the major difficulty originates from the intricate mechanism of human decision process.
no code implementations • 3 Jun 2023 • Xuemei Tang, Jun Wang, Qi Su
Recently, it has become quite common to integrate Chinese sequence labeling results to enhance syntactic and semantic parsing.
no code implementations • NeurIPS 2023 • Yudi Zhang, Yali Du, Biwei Huang, Ziyan Wang, Jun Wang, Meng Fang, Mykola Pechenizkiy
While the majority of current approaches construct the reward redistribution in an uninterpretable manner, we propose to explicitly model the contributions of state and action from a causal perspective, resulting in an interpretable reward redistribution and preserving policy invariance.
no code implementations • 26 May 2023 • Yingjie Feng, Jun Wang, Xianfeng Gu, Xiaoyin Xu, Min Zhang
In diagnosing challenging conditions such as Alzheimer's disease (AD), imaging is an important reference.
no code implementations • 25 May 2023 • Saisai Ding, Juncheng Li, Jun Wang, Shihui Ying, Jun Shi
The key idea of MEGT is to adopt two independent Efficient Graph-based Transformer (EGT) branches to process the low-resolution and high-resolution patch embeddings (i.e., tokens in a Transformer) of WSIs, respectively, and then fuse these tokens via a multi-scale feature fusion module (MFFM).
1 code implementation • 25 May 2023 • Xuanli He, Jun Wang, Benjamin Rubinstein, Trevor Cohn
Backdoor attacks are an insidious security threat against machine learning models.
1 code implementation • 25 May 2023 • Wuwei Lan, Zhiguo Wang, Anuj Chauhan, Henghui Zhu, Alexander Li, Jiang Guo, Sheng Zhang, Chung-Wei Hang, Joseph Lilien, Yiqun Hu, Lin Pan, Mingwen Dong, Jun Wang, Jiarong Jiang, Stephen Ash, Vittorio Castelli, Patrick Ng, Bing Xiang
A practical text-to-SQL system should generalize well on a wide variety of natural language questions, unseen database schemas, and novel SQL query structures.
no code implementations • 24 May 2023 • Zheng Hu, Shi-Min Cai, Jun Wang, Tao Zhou
Thus, the representation of users' dislikes should be integrated into the user modelling when we construct a collaborative recommendation model.
1 code implementation • 23 May 2023 • Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou
However, such a cooperative game may incur the degeneration problem where the predictor overfits to the uninformative pieces generated by a not yet well-trained generator and in turn, leads the generator to converge to a sub-optimal model that tends to select senseless pieces.
1 code implementation • 19 May 2023 • Xuanli He, Qiongkai Xu, Jun Wang, Benjamin Rubinstein, Trevor Cohn
Modern NLP models are often trained over large untrusted datasets, raising the potential for a malicious adversary to compromise model behaviour.
1 code implementation • 16 May 2023 • Yan Song, He Jiang, Zheng Tian, Haifeng Zhang, Yingping Zhang, Jiangcheng Zhu, Zonghong Dai, Weinan Zhang, Jun Wang
Little multi-agent reinforcement learning (MARL) research on Google Research Football (GRF) focuses on the 11v11 multi-agent full-game scenario, and to the best of our knowledge, no open benchmark on this scenario has been released to the public.
no code implementations • 16 May 2023 • Desong Du, Shaohang Han, Naiming Qi, Haitham Bou Ammar, Jun Wang, Wei Pan
Reinforcement learning (RL) exhibits impressive performance when managing complicated control tasks for robots.
no code implementations • 12 May 2023 • Jian Zhao, Jianan Li, Lei Jin, Jiaming Chu, Zhihao Zhang, Jun Wang, Jiangqiang Xia, Kai Wang, Yang Liu, Sadaf Gulshad, Jiaojiao Zhao, Tianyang Xu, XueFeng Zhu, Shihan Liu, Zheng Zhu, Guibo Zhu, Zechao Li, Zheng Wang, Baigui Sun, Yandong Guo, Shin'ichi Satoh, Junliang Xing, Jane Shen Shengmei
Second, we set up two tracks for the first time, i.e., Anti-UAV Tracking and Anti-UAV Detection & Tracking.
1 code implementation • 8 May 2023 • Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, Yuankai Zhang, Yang Qiu
Rationalization is to employ a generator and a predictor to construct a self-explaining NLP model in which the generator selects a subset of human-intelligible pieces of the input text to the following predictor.
no code implementations • 5 May 2023 • Yiyi Zhang, Zhiwen Ying, Ying Zheng, Cuiling Wu, Nannan Li, Jun Wang, Xianzhong Feng, Xiaogang Xu
Plant leaf identification is crucial for biodiversity protection and conservation and has gradually attracted the attention of academia in recent years.
no code implementations • 26 Apr 2023 • Xiaorui Wang, Jun Wang, Xin Tang, Peng Gao, Rui Fang, Guotong Xie
Filter pruning is widely adopted to compress and accelerate the Convolutional Neural Networks (CNNs), but most previous works ignore the relationship between filters and channels in different layers.
no code implementations • 26 Apr 2023 • Meixuan Qiao, Jun Wang, Junfu Xiang, Qiyu Hou, Ruixuan Li
Accurately extracting structured data from structure diagrams in financial announcements is of great practical importance for building financial knowledge graphs and further improving the efficiency of various financial applications.
no code implementations • 20 Apr 2023 • Yuyuan Li, Chaochao Chen, Xiaolin Zheng, Yizhao Zhang, Biao Gong, Jun Wang
In this paper, we first identify two main disadvantages of directly applying existing unlearning methods in the context of recommendation, i.e., (i) unsatisfactory efficiency for large-scale recommendation models and (ii) destruction of collaboration across users and items.
no code implementations • 11 Apr 2023 • Jun Wang, Omran Alamayreh, Benedetta Tondi, Mauro Barni
Classification of AI-manipulated content is receiving great attention as a means of distinguishing different types of manipulation.
no code implementations • 28 Mar 2023 • Dan You, Pengcheng Xia, Qiuzhu Chen, Minghui Wu, Suncheng Xiang, Jun Wang
Automated chromosome instance segmentation from metaphase cell microscopic images is critical for the diagnosis of chromosomal disorders (i.e., karyotype analysis).
1 code implementation • CVPR 2023 • Bo He, Jun Wang, JieLin Qiu, Trung Bui, Abhinav Shrivastava, Zhaowen Wang
The goal of multimodal summarization is to extract the most important information from different modalities to form output summaries.
Ranked #3 on Supervised Video Summarization on SumMe
no code implementations • 12 Mar 2023 • Jun Wang, Klaus Mueller
Furthermore, since an effect can be a cause of other effects, we allow users to aggregate different temporal cause-effect relations found with our method into a visual flow diagram to enable the discovery of temporal causal networks.
1 code implementation • CVPR 2023 • Suhang Ye, Yingyi Zhang, Jie Hu, Liujuan Cao, Shengchuan Zhang, Lei Shen, Jun Wang, Shouhong Ding, Rongrong Ji
Specifically, DistilPose maximizes the transfer of knowledge from the teacher model (heatmap-based) to the student model (regression-based) through Token-distilling Encoder (TDE) and Simulated Heatmaps.
no code implementations • 13 Feb 2023 • Xihuai Wang, Zheng Tian, Ziyu Wan, Ying Wen, Jun Wang, Weinan Zhang
In this paper, we propose the Agent-by-agent Policy Optimization (A2PO) algorithm to improve the sample efficiency and retain the guarantees of monotonic improvement for each agent during training.
no code implementations • 10 Feb 2023 • Jun Wang, Suyi Li
In this paper, we experimentally examine the cognitive capability of a simple, paper-based Miura-ori -- using the physical reservoir computing framework -- to achieve different information perception tasks.
no code implementations • 4 Feb 2023 • Jun Wang, Yue Song, David John Hill, Yunhe Hou, Feilong Fan
To figure out the stability issues brought by renewable energy sources (RES) with non-Gaussian uncertainties in isolated microgrids, this paper proposes a chance constrained stability constrained optimal power flow (CC-SC-OPF) model.
2 code implementations • 21 Jan 2023 • Shuaichen Chang, Jun Wang, Mingwen Dong, Lin Pan, Henghui Zhu, Alexander Hanbo Li, Wuwei Lan, Sheng Zhang, Jiarong Jiang, Joseph Lilien, Steve Ash, William Yang Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Bing Xiang
Neural text-to-SQL models have achieved remarkable performance in translating natural language questions into SQL queries.
no code implementations • 19 Jan 2023 • Chengjie Zhao, Jun Wang, Wei Huang, Xiaonan Chen, Tianfu Qi
Under the MGIN channel, classical communication signal schemes and corresponding detection methods usually cannot achieve desirable performance, as they are optimized with respect to WGN.
1 code implementation • 16 Jan 2023 • Xingzhou Lou, Jiaxian Guo, Junge Zhang, Jun Wang, Kaiqi Huang, Yali Du
We conduct experiments on the Overcooked environment, and evaluate the zero-shot human-AI coordination performance of our method with both behavior-cloned human proxies and real humans.
no code implementations • ICCV 2023 • Hongliang He, Jun Wang, Pengxu Wei, Fan Xu, Xiangyang Ji, Chang Liu, Jie Chen
Experiments on three nuclear instance segmentation datasets justify the superiority of TopoSeg, which achieves state-of-the-art performance.
1 code implementation • 24 Dec 2022 • Ying Wen, Ziyu Wan, Ming Zhou, Shufang Hou, Zhe Cao, Chenyang Le, Jingxiao Chen, Zheng Tian, Weinan Zhang, Jun Wang
The pervasive uncertainty and dynamic nature of real-world environments present significant challenges for the widespread implementation of machine-driven Intelligent Decision-Making (IDM) systems.
no code implementations • 17 Dec 2022 • Yiyun Zhao, Jiarong Jiang, Yiqun Hu, Wuwei Lan, Henry Zhu, Anuj Chauhan, Alexander Li, Lin Pan, Jun Wang, Chung-Wei Hang, Sheng Zhang, Marvin Dong, Joe Lilien, Patrick Ng, Zhiguo Wang, Vittorio Castelli, Bing Xiang
In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data.
no code implementations • 15 Dec 2022 • Hang Lai, Weinan Zhang, Xialin He, Chen Yu, Zheng Tian, Yong Yu, Jun Wang
Deep reinforcement learning has recently emerged as an appealing alternative for legged locomotion over multiple terrains by training a policy in physical simulation and then transferring it to the real world (i.e., sim-to-real transfer).
no code implementations • 12 Dec 2022 • Lixin Cao, Jun Wang, Ben Yang, Dan Su, Dong Yu
Self-supervised learning (SSL) models confront challenges of abrupt informational collapse or slow dimensional collapse.
no code implementations • 8 Dec 2022 • Kaiyuan Tan, Jun Wang, Yiannis Kantaros
To bridge this gap, in this paper, we propose a targeted adversarial attack against DNN models for trajectory forecasting tasks.
1 code implementation • Neural Computing and Applications 2022 • Xianlin Peng, Huayu Zhao, Xiaoyu Wang, Yongqin Zhang, Zhan Li, Qunxi Zhang, Jun Wang, Jinye Peng, Haida Liang
Our network also uses dual-domain partial convolution with a mask for computing on only valid points, whereas the mask is updated for the next layer.
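The masked partial-convolution idea above can be sketched in one dimension (a simplified illustration, not the authors' dual-domain network): the kernel is applied only over valid points, the output is renormalised by the fraction of valid points in each window, and the mask is then updated for the next layer.

```python
import numpy as np

def partial_conv1d(x, mask, kernel):
    # Partial convolution, 1-D sketch: compute only on valid
    # (mask == 1) points, renormalise by the valid fraction of the
    # window, and mark any window that saw at least one valid point
    # as valid in the updated mask passed to the next layer.
    k = len(kernel)
    out = np.zeros(len(x) - k + 1)
    new_mask = np.zeros_like(out)
    for i in range(len(out)):
        m = mask[i:i + k]
        valid = m.sum()
        if valid > 0:
            out[i] = (x[i:i + k] * m * kernel).sum() * (k / valid)
            new_mask[i] = 1.0
    return out, new_mask

x = np.array([1.0, 2.0, 0.0, 4.0, 5.0])    # 0.0 marks a hole
mask = np.array([1.0, 1.0, 0.0, 1.0, 1.0])  # hole is invalid
kernel = np.array([0.5, 0.5, 0.5])

y, m2 = partial_conv1d(x, mask, kernel)
print(y, m2)  # the hole never contributes, and the mask fills in
```

Stacking such layers progressively shrinks the invalid region, which is why the mask update matters: a hole that is unreachable at one layer becomes computable at the next.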
no code implementations • 5 Dec 2022 • Yourui Huangfu, Jian Wang, Shengchen Dai, Rong Li, Jun Wang, Chongwen Huang, Zhaoyang Zhang
The statistical data hinder the trained AI models from further fine-tuning for a specific scenario, and ray-tracing data with limited environments lower the generalization capability of the trained AI models.
no code implementations • 28 Nov 2022 • Zijun Gao, Jun Wang, Guoxian Yu, Zhongmin Yan, Carlotta Domeniconi, Jinglin Zhang
LtCMH firstly adopts auto-encoders to mine the individuality and commonality of different modalities by minimizing the dependency between the individuality of respective modalities and by enhancing the commonality of these modalities.
no code implementations • 22 Nov 2022 • Dezhi Yang, Guoxian Yu, Jun Wang, Zhengtian Wu, Maozu Guo
In this paper, we propose Reinforcement Causal Structure Learning on Order Graph (RCL-OG), which uses an order graph instead of MCMC to model different DAG topological orderings and to reduce the problem size.
no code implementations • 21 Nov 2022 • Junjie Sheng, Lu Wang, Fangkai Yang, Bo Qiao, Hang Dong, Xiangfeng Wang, Bo Jin, Jun Wang, Si Qin, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang
To address these two limitations, this paper formulates oversubscription for cloud as a chance-constrained optimization problem and proposes an effective Chance Constrained Multi-Agent Reinforcement Learning (C2MARL) method to solve it.
no code implementations • 15 Nov 2022 • Runji Lin, Ye Li, Xidong Feng, Zhaowei Zhang, Xian Hong Wu Fung, Haifeng Zhang, Jun Wang, Yali Du, Yaodong Yang
Firstly, we propose prompt tuning for offline RL, where a context vector sequence is concatenated with the input to guide the conditional policy generation.
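The prompt-tuning mechanism described above amounts to prepending a small learnable context to the input sequence — a minimal sketch under that assumption (not the authors' code), where only the context would receive gradient updates while the policy backbone stays frozen:

```python
import numpy as np

# Hypothetical shapes for illustration: a trajectory of 10 tokens,
# a learnable prompt of 4 context vectors, 16-dim embeddings.
seq_len, ctx_len, dim = 10, 4, 16
rng = np.random.default_rng(0)

trajectory = rng.standard_normal((seq_len, dim))  # frozen input tokens
context = rng.standard_normal((ctx_len, dim))     # learnable prompt

# The conditional policy sees [context; trajectory]; during tuning,
# gradients flow only into `context`, steering generation without
# touching the pretrained weights.
model_input = np.concatenate([context, trajectory], axis=0)
print(model_input.shape)  # (14, 16): prompt tokens come first
```

The appeal of this design is parameter efficiency: adapting to a new offline RL task requires optimising only ctx_len × dim values rather than the full model.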
1 code implementation • 2 Nov 2022 • Jun Wang, Abhir Bhalerao, Terry Yin, Simon See, Yulan He
Radiology report generation (RRG) has gained increasing research attention because of its huge potential to mitigate medical resource shortages and aid the process of disease decision making by radiologists.
no code implementations • 21 Oct 2022 • Jun Wang, Weixun Li, Changyu Hou, Xin Tang, Yixuan Qiao, Rui Fang, Pengyong Li, Peng Gao, Guotong Xie
Contrastive learning has emerged as a powerful tool for graph representation learning.
no code implementations • 18 Oct 2022 • Yangheng Zhao, Jun Wang, Xiaolong Li, Yue Hu, Ce Zhang, Yanfeng Wang, Siheng Chen
Instead of learning a single prototype for each class, in this paper, we propose to use an adaptive number of prototypes to dynamically describe the different point patterns within a semantic class.
Ranked #17 on 3D Semantic Segmentation on SemanticKITTI
1 code implementation • 15 Oct 2022 • Ziqing Wang, Zhirong Ye, Yuyang Du, Yi Mao, Yanying Liu, Ziling Wu, Jun Wang
DBSCAN is one of the most widely used density-based clustering algorithms.
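For readers unfamiliar with it, a minimal from-scratch sketch of the standard DBSCAN procedure (core points, cluster expansion, noise labelling) looks like this; production code would use an indexed neighbour search rather than this O(n²) scan:

```python
import math

def dbscan(points, eps, min_pts):
    # A point is a core point if at least min_pts points (itself
    # included) lie within eps; clusters grow by expanding the
    # neighbours of core points; everything unreachable is noise (-1).
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1           # noise (may be claimed later)
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbours(j)
            if len(j_nbrs) >= min_pts:
                queue.extend(j_nbrs)  # only core points keep expanding
        cluster += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=2))  # [0, 0, 0, 1, 1, -1]
```

Two dense groups become clusters 0 and 1, while the isolated point at (50, 50) is labelled noise; the quadratic neighbour search is exactly the cost that accelerated DBSCAN variants target.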
1 code implementation • 14 Oct 2022 • Qianying Liu, Chaitanya Kaul, Jun Wang, Christos Anagnostopoulos, Roderick Murray-Smith, Fani Deligianni
For medical image semantic segmentation (MISS), Vision Transformers have emerged as strong alternatives to convolutional neural networks thanks to their inherent ability to capture long-range correlations.
no code implementations • 11 Oct 2022 • You Guo, Jun Wang, Trevor Cohn
Deep neural networks are vulnerable to adversarial attacks, such as backdoor attacks in which a malicious adversary compromises a model during training such that specific behaviour can be triggered at test time by attaching a specific word or phrase to an input.
1 code implementation • 30 Sep 2022 • Donghan Yu, Sheng Zhang, Patrick Ng, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Yiqun Hu, William Wang, Zhiguo Wang, Bing Xiang
Question answering over knowledge bases (KBs) aims to answer natural language questions with factual information such as entities and relations in KBs.
no code implementations • 28 Sep 2022 • Jun Wang, Patrick Ng, Alexander Hanbo Li, Jiarong Jiang, Zhiguo Wang, Ramesh Nallapati, Bing Xiang, Sudipta Sengupta
When synthesizing a SQL query, there is no explicit semantic information of NLQ available to the parser which leads to undesirable generalization performance.
no code implementations • 22 Sep 2022 • Wuque Cai, Hongze Sun, Rui Liu, Yan Cui, Jun Wang, Yang Xia, Dezhong Yao, Daqing Guo
Spiking neural networks (SNNs) mimic brain computational strategies, and exhibit substantial capabilities in spatiotemporal information processing.
no code implementations • 19 Sep 2022 • Hailin Shi, Hang Du, Yibo Hu, Jun Wang, Dan Zeng, Ting Yao
Such a multi-shot scheme brings an inference burden, and the predefined scales inevitably have a gap from real data.
1 code implementation • 17 Sep 2022 • Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, Yuankai Zhang
Conventional works generally employ a two-phase model in which a generator selects the most important pieces, followed by a predictor that makes predictions based on the selected pieces.
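The two-phase generator–predictor pipeline can be sketched with a toy example (an illustration of the general scheme, not this paper's model): the generator produces a binary mask over tokens, and the predictor only ever sees the masked input, so the selected pieces constitute the rationale.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(token_scores, k):
    # Phase 1: keep a hard top-k mask over tokens; in a trained
    # model these scores are learned, here they are placeholders.
    mask = np.zeros_like(token_scores)
    mask[np.argsort(token_scores)[-k:]] = 1.0
    return mask

def predictor(embeddings, mask, weights):
    # Phase 2: predict from the selected pieces only, so the mask
    # doubles as a human-readable explanation of the prediction.
    pooled = (embeddings * mask[:, None]).sum(axis=0)
    return 1.0 / (1.0 + np.exp(-pooled @ weights))  # sigmoid

tokens = rng.standard_normal((8, 4))  # 8 tokens, 4-dim embeddings
scores = rng.standard_normal(8)       # stand-in generator scores
w = rng.standard_normal(4)            # stand-in predictor weights

mask = generator(scores, k=3)
print(int(mask.sum()))                # exactly 3 tokens survive
print(predictor(tokens, mask, w))     # probability in (0, 1)
```

Because the predictor's loss backpropagates through the mask during joint training, a weak generator can drag the whole game into the degeneration problem described in the entry above.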
no code implementations • 14 Sep 2022 • Jun Wang
Extensive results on two well-known meeting datasets (AMI and ICSI corpora) show the effectiveness of our direct speech-based method to improve the summarization quality with untranscribed data.
no code implementations • 10 Sep 2022 • Alexander I. Cowen-Rivers, Philip John Gorinski, Aivar Sootla, Asif Khan, Liu Furui, Jun Wang, Jan Peters, Haitham Bou Ammar
Optimizing combinatorial structures is core to many real-world problems, such as those encountered in life sciences.
no code implementations • ACL 2022 • Xuemei Tang, Qi Su, Jun Wang
The evolution of language follows the rule of gradual change.
no code implementations • 6 Sep 2022 • Tianfu Qi, Jun Wang, Xiaonan Chen, Wei Huang
In many scenarios, the communication system suffers from both Gaussian white noise and non-Gaussian impulsive noise.
4 code implementations • 5 Sep 2022 • Fei Hu, Honghua Chen, Xuequan Lu, Zhe Zhu, Jun Wang, Weiming Wang, Fu Lee Wang, Mingqiang Wei
We propose a novel stepwise point cloud completion network (SPCNet) for various 3D models with large missing regions.
no code implementations • 2 Sep 2022 • Taher Jafferjee, Juliusz Ziomek, Tianpei Yang, Zipeng Dai, Jianhong Wang, Matthew Taylor, Kun Shao, Jun Wang, David Mguni
Centralised training with decentralised execution (CT-DE) serves as the foundation of many leading multi-agent reinforcement learning (MARL) algorithms.
no code implementations • 2 Sep 2022 • Honghua Chen, Mingqiang Wei, Jun Wang
In this work, we provide a comprehensive review of the advances in mesh denoising, containing both traditional geometric approaches and recent learning-based methods.
1 code implementation • 2 Sep 2022 • Omran Alamayreh, Giovanna Maria Dimitri, Jun Wang, Benedetta Tondi, Mauro Barni
Notably, we found that asking the network to identify the country provides better results than estimating the geo-coordinates and then tracing them back to the country where the picture was taken.
1 code implementation • 30 Aug 2022 • Anyi Huang, Qian Xie, Zhoutao Wang, Dening Lu, Mingqiang Wei, Jun Wang
Second, a multi-scale perception module is designed to embed multi-scale geometric information for each scale feature and regress multi-scale weights to guide a multi-offset denoising displacement.
1 code implementation • 30 Aug 2022 • Wei Zhang, Zhaohong Deng, Kup-Sze Choi, Jun Wang, Shitong Wang
Meanwhile, to make the representation learning more specific to the clustering task, a one-step learning framework is proposed to integrate representation learning and clustering partition as a whole.
no code implementations • 30 Aug 2022 • Xintong Qin, Zhengyu Song, Tianwei Hou, Wenjuan Yu, Jun Wang, Xin Sun
The unmanned aerial vehicle (UAV) enabled mobile edge computing (MEC) has been deemed a promising paradigm to provide ubiquitous communication and computing services for the Internet of Things (IoT).
no code implementations • 30 Aug 2022 • Zhengyu Song, Xintong Qin, Yuanyuan Hao, Tianwei Hou, Jun Wang, Xin Sun
Driven by the visions of Internet of Things (IoT), there is an ever-increasing demand for computation resources of IoT users to support diverse applications.
1 code implementation • 24 Aug 2022 • Kai Liang, Jun Wang, Abhir Bhalerao
Previous works often adopt physical variables such as driving speed, acceleration and so forth for lane change classification.
1 code implementation • 21 Aug 2022 • Ashkan Farhangi, Jiang Bian, Arthur Huang, Haoyi Xiong, Jun Wang, Zhishan Guo
Moreover, the framework employs a dynamic uncertainty optimization algorithm that reduces the uncertainty of forecasts in an online manner.
no code implementations • 10 Aug 2022 • Xuxiang Jiang, Yinhao Xiao, Jun Wang, Wei Zhang
Vulnerability identification is crucial for cyber security in the software-related industry.
no code implementations • 8 Aug 2022 • Xin Liu, Wei Tao, Wei Li, Dazhi Zhan, Jun Wang, Zhisong Pan
Due to its simplicity and efficiency, the first-order gradient method has been extensively employed in training neural networks.
1 code implementation • 4 Aug 2022 • Zhilei Chen, Honghua Chen, Lina Gong, Xuefeng Yan, Jun Wang, Yanwen Guo, Jing Qin, Mingqiang Wei
High-confidence overlap prediction and accurate correspondences are critical for cutting-edge models to align paired point clouds in a partial-to-partial manner.
1 code implementation • 3 Aug 2022 • Jun Wang, Mingfei Gao, Yuqian Hu, Ramprasaath R. Selvaraju, Chetan Ramaiah, Ran Xu, Joseph F. JaJa, Larry S. Davis
To address this deficiency, we develop a new method to generate high-quality and diverse QA pairs by explicitly utilizing the existing rich text available in the scene context of each image.
no code implementations • 2 Aug 2022 • Jakub Grudzien Kuba, Xidong Feng, Shiyao Ding, Hao Dong, Jun Wang, Yaodong Yang
The necessity for cooperation among intelligent machines has popularised cooperative multi-agent reinforcement learning (MARL) in the artificial intelligence (AI) research community.
no code implementations • 1 Aug 2022 • Zhe Zhu, Liangliang Nan, Haoran Xie, Honghua Chen, Mingqiang Wei, Jun Wang, Jing Qin
The first module transfers the intrinsic shape characteristics from single images to guide the geometry generation of the missing regions of point clouds, in which we propose IPAdaIN to embed the global features of both the image and the partial point cloud into completion.
Ranked #2 on Point Cloud Completion on ShapeNet-ViPC
no code implementations • 26 Jul 2022 • Zeren Huang, WenHao Chen, Weinan Zhang, Chuhan Shi, Furui Liu, Hui-Ling Zhen, Mingxuan Yuan, Jianye Hao, Yong Yu, Jun Wang
Deriving a good variable selection strategy in branch-and-bound is essential for the efficiency of modern mixed-integer programming (MIP) solvers.
2 code implementations • 13 Jul 2022 • Yali Du, Chengdong Ma, Yuchen Liu, Runji Lin, Hao Dong, Jun Wang, Yaodong Yang
Reinforcement learning algorithms require a large amount of samples; this often limits their real-world applications on even simple tasks.
1 code implementation • 11 Jul 2022 • Jun Wang, Abhir Bhalerao, Yulan He
Radiology report generation (RRG) aims to describe automatically a radiology image with human-like language and could potentially support the work of radiologists, reducing the burden of manual reporting.
1 code implementation • 7 Jul 2022 • Chengfeng Zhou, Songchang Chen, Chenming Xu, Jun Wang, Feng Liu, Chun Zhang, Juan Ye, Hefeng Huang, Dahong Qian
In this study, we present a novel normalization technique called window normalization (WIN) to improve the model generalization on heterogeneous medical images, which is a simple yet effective alternative to existing normalization methods.
no code implementations • 2 Jul 2022 • Honghua Chen, Zeyong Wei, Yabin Xu, Mingqiang Wei, Jun Wang
Low-overlap regions between paired point clouds make the captured features very low-confidence, leading cutting-edge models to produce poor-quality point cloud registration.
1 code implementation • 9 Jun 2022 • Mingqiang Wei, Zeyong Wei, Haoran Zhou, Fei Hu, Huajian Si, Zhilei Chen, Zhe Zhu, Jingbo Qiu, Xuefeng Yan, Yanwen Guo, Jun Wang, Jing Qin
In this paper, we propose Adaptive Graph Convolution (AGConv) for wide applications of point cloud analysis.
1 code implementation • 6 Jun 2022 • Aivar Sootla, Alexander I. Cowen-Rivers, Jun Wang, Haitham Bou Ammar
We further show that Simmer can stabilize training and improve the performance of safe RL with average constraints.
no code implementations • 31 May 2022 • David Mguni, Aivar Sootla, Juliusz Ziomek, Oliver Slumbers, Zipeng Dai, Kun Shao, Jun Wang
In this paper, we introduce a reinforcement learning (RL) framework named Learnable Impulse Control Reinforcement Algorithm (LICRA) for learning to optimally select both when to act and which actions to take when actions incur costs.
no code implementations • 31 May 2022 • Jun Shi, Yuanming Zhang, Zheng Li, Xiangmin Han, Saisai Ding, Jun Wang, Shihui Ying
In this work, we propose a pseudo-data based self-supervised federated learning (FL) framework, named SSL-FT-BT, to improve both the diagnostic accuracy and generalization of CAD models.
1 code implementation • 30 May 2022 • Muning Wen, Jakub Grudzien Kuba, Runji Lin, Weinan Zhang, Ying Wen, Jun Wang, Yaodong Yang
In this paper, we introduce a novel architecture named Multi-Agent Transformer (MAT) that effectively casts cooperative multi-agent reinforcement learning (MARL) into SM problems wherein the task is to map agents' observation sequence to agents' optimal action sequence.
no code implementations • 30 May 2022 • Oliver Slumbers, David Henry Mguni, Stephen Marcus McAleer, Stefano B. Blumberg, Jun Wang, Yaodong Yang
Although there are equilibrium concepts in game theory that take into account risk aversion, they either assume that agents are risk-neutral with respect to the uncertainty caused by the actions of other agents, or they are not guaranteed to exist.
no code implementations • 30 May 2022 • Changmin Yu, David Mguni, Dong Li, Aivar Sootla, Jun Wang, Neil Burgess
Efficient reinforcement learning (RL) involves a trade-off between "exploitative" actions that maximise expected reward and "explorative" ones that sample unvisited states.