1 code implementation • 6 May 2024 • Junxiang Wang, Liang Zhao
We present GraphSL, a new library for studying the graph source localization problem.
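Source localization is the inverse of graph diffusion: given which nodes ended up affected, infer which node(s) started the spread. Below is a minimal sketch of that setup (a generic illustration, not GraphSL's actual API): it simulates an SI cascade with networkx and then estimates the source with the classic Jordan-center heuristic.

```python
# Minimal sketch of graph source localization (hypothetical illustration,
# not the GraphSL API): simulate an SI cascade, then estimate the source
# as the Jordan center of the infected subgraph.
import random
import networkx as nx

def simulate_si(G, source, beta=0.3, steps=5, seed=0):
    """Susceptible-Infected diffusion: each infected node infects each
    susceptible neighbor with probability beta per step."""
    rng = random.Random(seed)
    infected = {source}
    for _ in range(steps):
        new = {v for u in infected for v in G.neighbors(u)
               if v not in infected and rng.random() < beta}
        infected |= new
    return infected

G = nx.karate_club_graph()
infected = simulate_si(G, source=0)

# Jordan-center heuristic: the true source tends to be central among the
# infected nodes, so pick the infected node with minimum eccentricity in
# the infected subgraph (which is connected by construction of SI).
H = G.subgraph(infected)
ecc = nx.eccentricity(H)
estimate = min(ecc, key=ecc.get)
print(f"true source: 0, estimated source: {estimate}")
```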
no code implementations • 19 Dec 2023 • Junxiang Wang, Guangji Bai, Wei Cheng, Zhengzhang Chen, Liang Zhao, Haifeng Chen
In order to tackle these challenges simultaneously, in this paper, we introduce PrOmpt-based domaiN Discrimination (POND), the first framework to utilize prompts for time series domain adaptation.
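The mechanism POND builds on is prompt tuning: small sets of learnable vectors are prepended to the input of a frozen backbone so that only the prompts adapt per domain. Here is a minimal, generic PyTorch sketch of that mechanism; the module names, shapes, and the toy transformer backbone are illustrative assumptions, not POND's actual implementation.

```python
# Generic prompt-tuning sketch (illustrative assumptions; not POND's code):
# learnable per-domain prompt vectors are prepended to the input sequence
# of a frozen encoder, so only the prompts are trained for adaptation.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, encoder, d_model, n_domains, prompt_len=8):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # freeze the backbone
            p.requires_grad_(False)
        # one learnable prompt per domain: (n_domains, prompt_len, d_model)
        self.prompts = nn.Parameter(torch.randn(n_domains, prompt_len, d_model) * 0.02)

    def forward(self, x, domain_id):
        # x: (batch, seq_len, d_model) time-series embeddings
        prompt = self.prompts[domain_id].expand(x.size(0), -1, -1)
        return self.encoder(torch.cat([prompt, x], dim=1))

# usage with a toy transformer encoder
d_model = 32
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
model = PromptedEncoder(backbone, d_model, n_domains=3)
out = model(torch.randn(4, 20, d_model), domain_id=1)
print(out.shape)  # torch.Size([4, 28, 32]) — prompt_len + seq_len
```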
1 code implementation • 17 Dec 2023 • Zheng Zhang, Sirui Li, Jingcheng Zhou, Junxiang Wang, Abhinav Angirekula, Allen Zhang, Liang Zhao
Moreover, existing spatial network representation learning methods can only handle networks embedded in Euclidean space, and cannot fully exploit the rich geometric information carried by irregular, non-uniform non-Euclidean spaces.
no code implementations • 30 May 2023 • Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao
In this article, we present a comprehensive survey on domain specification techniques for large language models, an emerging direction critical for large language model applications.
1 code implementation • 1 May 2023 • Chen Ling, Junji Jiang, Junxiang Wang, My Thai, Lukas Xue, James Song, Meikang Qiu, Liang Zhao
Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.
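IM is NP-hard, but the expected spread under common cascade models is monotone and submodular, so greedy seed selection with Monte Carlo spread estimation gives a (1 − 1/e)-approximation. A minimal sketch of that classic baseline (background only, not the method proposed in the paper above):

```python
# Classic greedy influence maximization with Monte Carlo estimation of
# spread under the Independent Cascade model (textbook baseline sketch,
# not the paper's method).
import random
import networkx as nx

def ic_spread(G, seeds, p=0.1, trials=200, seed=0):
    """Average number of activated nodes under Independent Cascade."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_im(G, k, p=0.1):
    """Greedily add the seed with the largest marginal spread gain."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in G if v not in seeds),
                   key=lambda v: ic_spread(G, seeds | {v}, p))
        seeds.add(best)
    return seeds

G = nx.karate_club_graph()
print(greedy_im(G, k=2))
```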
1 code implementation • 19 Nov 2022 • Chen Ling, Tanmoy Chowdhury, Junji Jiang, Junxiang Wang, Xuchao Zhang, Haifeng Chen, Liang Zhao
As the best-known computational model of analogical reasoning, Structure-Mapping Theory (SMT) abstracts both the target and base subjects into relational graphs and casts the cognitive process of analogical reasoning as finding a subgraph of the target graph (i.e., a correspondence) that aligns with the base graph.
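In graph terms, finding a correspondence amounts to locating a subgraph of the target graph isomorphic to the base graph, which networkx can illustrate directly. A tiny sketch (plain subgraph isomorphism; the paper itself learns correspondences rather than enumerating them):

```python
# A correspondence as subgraph isomorphism (illustration only; the paper
# learns correspondences rather than enumerating them exhaustively).
import networkx as nx
from networkx.algorithms import isomorphism

# base: a tiny relational pattern a -> b -> c
base = nx.DiGraph([("a", "b"), ("b", "c")])
# target: a larger graph that contains the pattern
target = nx.DiGraph([(1, 2), (2, 3), (3, 4), (1, 4)])

matcher = isomorphism.DiGraphMatcher(target, base)
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)  # e.g. {1: 'a', 2: 'b', 3: 'c'} — target node -> base node
```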
1 code implementation • 24 Jun 2022 • Chen Ling, Junji Jiang, Junxiang Wang, Liang Zhao
Unlike most traditional source localization methods, this paper takes a probabilistic approach to account for the uncertainty over different candidate sources.
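One simple way to make that uncertainty explicit is Bayesian: score each candidate source by a simulated pseudo-likelihood of producing the observed infections, yielding a posterior over sources rather than a single point estimate. A generic sketch of the idea (not the paper's model; the Jaccard pseudo-likelihood is an assumption for illustration):

```python
# Probabilistic source scoring sketch (generic illustration, not the
# paper's model): approximate a pseudo-likelihood of each candidate by
# simulation and normalize into a posterior over candidate sources.
import random
import networkx as nx

def si_once(G, source, beta=0.3, steps=4, rng=None):
    """One SI cascade from `source`; returns the infected set."""
    infected = {source}
    for _ in range(steps):
        infected |= {v for u in infected for v in G.neighbors(u)
                     if v not in infected and rng.random() < beta}
    return infected

def jaccard(a, b):
    return len(a & b) / len(a | b)

def source_posterior(G, observed, candidates, trials=200, seed=0):
    """Average Jaccard overlap with the observation, normalized."""
    rng = random.Random(seed)
    scores = {s: sum(jaccard(si_once(G, s, rng=rng), observed)
                     for _ in range(trials)) / trials
              for s in candidates}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

G = nx.karate_club_graph()
observed = si_once(G, 0, rng=random.Random(42))
posterior = source_posterior(G, observed, candidates=sorted(observed))
print("most probable source:", max(posterior, key=posterior.get))
```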
1 code implementation • 18 Jun 2022 • Junxiang Wang, Junji Jiang, Liang Zhao
This paper aims to establish a generic framework of invertible graph diffusion models for source localization on graphs, namely Invertible Validity-aware Graph Diffusion (IVGD), to handle three major challenges: 1) the difficulty of leveraging knowledge in graph diffusion models to model their inverse processes end-to-end, 2) the difficulty of ensuring the validity of the inferred sources, and 3) efficiency and scalability in source inference.
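A key ingredient for such invertible models is that a residual map y = x + f(x) can be inverted by fixed-point iteration whenever f is contractive (Lipschitz constant below 1). A minimal numpy sketch of that generic invertible-residual trick (shown for intuition; IVGD's actual layers differ):

```python
# Inverting a residual map y = x + f(x) by fixed-point iteration
# (generic invertible-residual trick; not IVGD's actual architecture).
# If f is a contraction, x_{t+1} = y - f(x_t) converges to the unique
# x with y = x + f(x) by the Banach fixed-point theorem.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 5))
W *= 0.5 / np.linalg.norm(W, 2)           # rescale so ||W||_2 = 0.5 < 1

def f(x):                                 # a contractive map: tanh(Wx)
    return np.tanh(W @ x)

x_true = rng.standard_normal(5)
y = x_true + f(x_true)                    # forward pass

x = y.copy()                              # invert by fixed-point iteration
for _ in range(50):
    x = y - f(x)

# error contracts like 0.5**t, so after 50 steps this is ~machine precision
print(np.max(np.abs(x - x_true)))
```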
no code implementations • 22 May 2022 • Hongyi Li, Junxiang Wang, Yongchao Wang
Massive Multiple-Input Multiple-Output (MIMO) detection is an important problem in modern wireless communication systems.
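For context, the textbook baseline for this problem is the linear MMSE detector: given y = Hx + n, compute (HᵀH + σ²I)⁻¹Hᵀy and quantize to the constellation. A quick numpy sketch of that baseline (background only; the paper proposes a more advanced detector):

```python
# Linear MMSE detection baseline for y = Hx + n (textbook method, shown
# for context; not the detector proposed in the paper).
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, sigma = 4, 8, 0.1
constellation = np.array([-1.0, 1.0])           # real BPSK for simplicity

H = rng.standard_normal((n_rx, n_tx))           # channel matrix
x = rng.choice(constellation, n_tx)             # transmitted symbols
y = H @ x + sigma * rng.standard_normal(n_rx)   # received signal

# MMSE estimate: (H^T H + sigma^2 I)^{-1} H^T y, then quantize to the
# nearest constellation point per antenna
x_mmse = np.linalg.solve(H.T @ H + sigma**2 * np.eye(n_tx), H.T @ y)
x_hat = constellation[np.argmin(np.abs(x_mmse[:, None] - constellation), axis=1)]

print("transmitted:", x)
print("detected:   ", x_hat)
```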
no code implementations • 23 Dec 2021 • Junxiang Wang, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao
During the past several years, a surge of multilingual Pre-trained Language Models (PLMs) has been proposed, achieving state-of-the-art performance on many cross-lingual downstream tasks.
1 code implementation • 22 Dec 2021 • Junxiang Wang, Hongyi Li, Liang Zhao
As a well-known optimization framework, the Alternating Direction Method of Multipliers (ADMM) has achieved tremendous success in many classification and regression applications.
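As a concrete reminder of how ADMM works in such applications, here is the standard ADMM solver for the lasso, min ½‖Ax − b‖² + λ‖x‖₁, split as f(x) + g(z) with the constraint x = z (the textbook algorithm, not the paper's specific variant):

```python
# Textbook ADMM for the lasso: min 0.5*||Ax - b||^2 + lam*||x||_1, split
# as x = z. Shown as background, not the paper's specific variant.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # factor reused by every x-update
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: a ridge-regression solve
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        # z-update: soft-thresholding (proximal operator of the l1 norm)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        # dual update on the scaled multiplier
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]              # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=0.5), 2))
```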
1 code implementation • 20 May 2021 • Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng, Liang Zhao
The pdADMM-G and pdADMM-G-Q algorithms are proven to converge to a (quantized) stationary point at a sublinear rate of $o(1/k)$, where $k$ is the number of iterations.
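In nonconvex settings, a rate like this is typically stated for a stationarity residual, e.g., the running minimum of a gradient norm of the augmented Lagrangian. The generic form below is illustrative notation only, not necessarily the paper's exact criterion:

```latex
% Generic form of a sublinear o(1/k) stationarity guarantee
% (illustrative notation; see the paper for the exact residual used)
\min_{1 \le i \le k} \big\lVert \nabla L_\rho(\mathbf{p}_i) \big\rVert^2 = o\!\left(\tfrac{1}{k}\right)
```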
no code implementations • 22 Feb 2021 • Johnny Torres, Guangji Bai, Junxiang Wang, Liang Zhao, Carmen Vaca, Cristina Abad
Multi-task learning is a framework in which different learning tasks share knowledge with one another to improve their generalization performance.
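The most common instantiation of this idea is hard parameter sharing: a shared trunk learns representations consumed by per-task heads. A minimal PyTorch sketch of that generic pattern (not the architecture of the paper above):

```python
# Hard parameter sharing, the simplest multi-task learning pattern
# (generic sketch, not the paper's architecture): a shared trunk feeds
# one small head per task, and the task losses are summed.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim, hidden, task_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_dims)

    def forward(self, x):
        h = self.trunk(x)                  # representation shared by all tasks
        return [head(h) for head in self.heads]

model = MultiTaskNet(in_dim=16, hidden=32, task_dims=[1, 3])
x = torch.randn(8, 16)
out_reg, out_cls = model(x)                # one regression head, one classifier
loss = nn.functional.mse_loss(out_reg, torch.randn(8, 1)) \
     + nn.functional.cross_entropy(out_cls, torch.randint(0, 3, (8,)))
loss.backward()                            # gradients from both tasks reach the trunk
```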
1 code implementation • 1 Nov 2020 • Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao
In this paper, we propose a novel parallel deep learning ADMM framework (pdADMM) to achieve layer parallelism: parameters in each layer of neural networks can be updated independently in parallel.
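The enabling idea behind such layer parallelism is variable splitting: lift each layer's output into an auxiliary variable so that, with neighboring variables held fixed, every layer's parameter update becomes an independent local problem. The toy numpy sketch below illustrates that decoupling for linear layers with plain quadratic penalties; it is a conceptual illustration only, and pdADMM's actual updates, penalties, and nonlinearities differ.

```python
# Conceptual sketch of layer-parallel training via variable splitting
# (illustration only; pdADMM's actual updates differ). With auxiliary
# outputs a_l fixed, each W_l solves an independent least-squares
# problem, so all layer updates could run in parallel.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))            # inputs  (a_0 = X)
Y = rng.standard_normal((100, 2))            # targets (a_2 ~ Y)
a = [X, rng.standard_normal((100, 16)), Y]   # auxiliary per-layer outputs
dims = [8, 16, 2]
W = [rng.standard_normal((dims[i], dims[i + 1])) for i in range(2)]

for _ in range(20):
    # W-updates are decoupled: layer l needs only a[l] and a[l+1],
    # so these solves could run on different devices in parallel.
    for l in range(2):
        W[l], *_ = np.linalg.lstsq(a[l], a[l + 1], rcond=None)
    # a-update for the hidden variable balances both neighboring layers:
    # argmin_a ||a W1 - Y||^2 + ||a - X W0||^2, a simple quadratic with
    # closed form a = (Y W1^T + X W0)(W1 W1^T + I)^{-1}
    a[1] = np.linalg.solve(W[1] @ W[1].T + np.eye(16),
                           (Y @ W[1].T + X @ W[0]).T).T

print(np.linalg.norm(X @ W[0] @ W[1] - Y) / np.linalg.norm(Y))  # relative fit
```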
1 code implementation • 9 Sep 2020 • Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao
In this paper, we analyze the cause and propose a reformulation called the Tunable Subnetwork Splitting Method (TSSM), which tunes the decomposition granularity of deep neural networks to achieve a compelling trade-off between parallelism and accuracy.
no code implementations • 25 Sep 2019 • Junxiang Wang, Fuxun Yu, Xiang Chen, Liang Zhao
To overcome these drawbacks, alternating minimization-based methods for deep neural network optimization have recently attracted rapidly growing attention.
1 code implementation • 31 May 2019 • Junxiang Wang, Fuxun Yu, Xiang Chen, Liang Zhao
However, as this is an emerging domain, several challenges remain, including 1) the lack of global convergence guarantees, 2) slow convergence towards solutions, and 3) cubic time complexity with respect to the feature dimension.
no code implementations • 9 May 2017 • Junxiang Wang, Liang Zhao
The classic Alternating Direction Method of Multipliers (ADMM) is a popular framework to solve linear-equality constrained problems.
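For reference, for a problem $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, the classic scaled-form ADMM iterations are:

```latex
% Classic scaled-form ADMM for  min f(x) + g(z)  s.t.  Ax + Bz = c
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_x \; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^k - c + u^k \rVert_2^2,\\
z^{k+1} &= \operatorname*{arg\,min}_z \; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^k \rVert_2^2,\\
u^{k+1} &= u^k + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```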
Optimization and Control • Social and Information Networks