1 code implementation • 14 Mar 2024 • Md Atik Ahamed, Qiang Cheng
Long-term time-series forecasting remains challenging due to the difficulty in capturing long-term dependencies, achieving linear scalability, and maintaining computational efficiency.
no code implementations • 11 Feb 2024 • Xin Tong, Bo Jin, Zhi Lin, Binjun Wang, Ting Yu, Qiang Cheng
Large Language Models (LLMs) have demonstrated significant potential and effectiveness across multiple application domains.
no code implementations • 16 Jan 2024 • Md Atik Ahamed, Qiang Cheng
Tabular data remains ubiquitous across domains despite the growing use of images and text in machine learning.
Ranked #1 on Classification on Adult
no code implementations • 6 May 2023 • Hua Lan, Jinjie Hu, Zengfu Wang, Qiang Cheng
Motivated by the maneuvering target tracking with sensors such as radar and sonar, this paper considers the joint and recursive estimation of the dynamic state and the time-varying process noise covariance in nonlinear state space models.
no code implementations • 10 Jan 2023 • Shengyu Zhu, Zehua Yu, Qinghua Guo, Jinshan Ding, Qiang Cheng, Tie Jun Cui
Achieving integrated sensing and communication (ISAC) via uplink transmission is challenging due to the unknown waveform and the coupling of communication and sensing echoes.
no code implementations • 30 Sep 2022 • Xinxing Wu, Chong Peng, Richard Charnigo, Qiang Cheng
Interpreting critical variables involved in complex biological processes related to survival time can help understand prediction from survival models, evaluate treatment efficacy, and develop new therapies for patients.
1 code implementation • 25 Aug 2022 • Xinxing Wu, Chong Peng, Gregory Jicha, Donna Wilcock, Qiang Cheng
Then, we apply it to study oscillation patterns in untimed genome-wide gene expression from 19 human brain regions of controls and AD patients.
no code implementations • 25 Aug 2022 • Xinxing Wu, Chong Peng, Peter T. Nelson, Qiang Cheng
Alzheimer's disease (AD), as a progressive brain disease, affects cognition, memory, and behavior.
no code implementations • 22 Apr 2022 • Chong Peng, Yiqun Zhang, Yongyong Chen, Zhao Kang, Chenglizhao Chen, Qiang Cheng
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
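As a minimal illustration of NMF's parts-based factorization (the classic Lee-Seung multiplicative updates, not this paper's specific variant), assuming only numpy:

```python
import numpy as np

def nmf(X, rank, n_iter=200, eps=1e-10, seed=0):
    """Basic NMF via multiplicative updates: X ≈ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity by construction.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy nonnegative data with exact rank-2 structure.
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(X, rank=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because every update multiplies by a nonnegative ratio, the factors stay nonnegative throughout, which is what yields the additive, parts-based representations the abstract refers to.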
no code implementations • 8 Jan 2022 • Chong Peng, Yang Liu, Yongyong Chen, Xinxin Wu, Andrew Cheng, Zhao Kang, Chenglizhao Chen, Qiang Cheng
In this paper, we propose a novel nonconvex approach to robust principal component analysis for HSI denoising, which focuses on simultaneously developing more accurate approximations to both rank and column-wise sparsity for the low-rank and sparse components, respectively.
no code implementations • 4 Jun 2021 • Xinxing Wu, Qiang Cheng
Feature selection identifies subsets of informative features and reduces dimensions in the original feature space, helping provide insights into data generation or a variety of domain problems.
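A generic unsupervised filter-style sketch of feature selection (a simple variance-plus-redundancy heuristic for illustration only, not the method proposed in this paper):

```python
import numpy as np

def select_features(X, k, corr_thresh=0.95):
    """Rank features by variance, then greedily skip any feature that is
    highly correlated with one already selected (redundancy removal)."""
    order = np.argsort(X.var(axis=0))[::-1]      # high variance first
    C = np.abs(np.corrcoef(X, rowvar=False))     # feature-feature correlation
    selected = []
    for j in order:
        if all(C[j, s] < corr_thresh for s in selected):
            selected.append(j)
        if len(selected) == k:
            break
    return selected

rng = np.random.default_rng(0)
informative = rng.normal(size=(100, 3)) * [3.0, 2.0, 1.5]           # high variance
duplicate = informative[:, :1] + 1e-3 * rng.normal(size=(100, 1))   # redundant copy of feature 0
noise = 0.1 * rng.normal(size=(100, 2))                             # low variance
X = np.hstack([informative, duplicate, noise])
picked = select_features(X, k=3)
```

On this toy data the selector keeps the three informative columns while dropping the near-duplicate and the low-variance noise, illustrating the dimension-reduction goal described above.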
no code implementations • 25 May 2021 • Yang Liu, Qian Zhang, Yongyong Chen, Qiang Cheng, Chong Peng
It is challenging to remove heavy and mixed types of noise from hyperspectral images (HSIs).
1 code implementation • 21 Mar 2021 • Xinxing Wu, Qiang Cheng
Graph neural networks have been used for a variety of learning tasks, such as link prediction, node classification, and node clustering.
no code implementations • 5 Mar 2021 • Wankai Tang, Xiangyu Chen, Ming Zheng Chen, Jun Yan Dai, Yu Han, Shi Jin, Qiang Cheng, Geoffrey Ye Li, Tie Jun Cui
Channel reciprocity greatly facilitates downlink precoding in time-division duplexing (TDD) multiple-input multiple-output (MIMO) communications without the need for channel state information (CSI) feedback.
Information Theory
no code implementations • 21 Jan 2021 • Wankai Tang, Xiangyu Chen, Ming Zheng Chen, Jun Yan Dai, Yu Han, Marco Di Renzo, Shi Jin, Qiang Cheng, Tie Jun Cui
The refined model gives more accurate estimates of the path loss of RISs comprised of unit cells with a deep sub-wavelength size.
1 code implementation • CVPR 2021 • Vasily Zadorozhnyy, Qiang Cheng, Qiang Ye
Generative adversarial network (GAN) has become one of the most important neural network models for classical unsupervised machine learning.
Ranked #4 on Conditional Image Generation on CIFAR-100
Conditional Image Generation • Generative Adversarial Network
no code implementations • 3 Nov 2020 • Chong Peng, Qian Zhang, Zhao Kang, Chenglizhao Chen, Qiang Cheng
It directly uses 2D data as inputs such that the learning of representations benefits from inherent structures and relationships of the data.
1 code implementation • NeurIPS 2021 • Xinxing Wu, Qiang Cheng
Feature selection, as a vital dimension reduction technique, reduces data dimension by identifying an essential subset of input features, which can facilitate interpretable insights into learning and inference processes.
1 code implementation • 19 Oct 2020 • Xinxing Wu, Qiang Cheng
In this paper, we propose an innovative framework for unsupervised feature selection, called fractal autoencoders (FAE).
no code implementations • 31 Aug 2020 • Zhao Kang, Chong Peng, Qiang Cheng, Xinwang Liu, Xi Peng, Zenglin Xu, Ling Tian
Furthermore, most existing graph-based methods conduct clustering and semi-supervised classification on the graph learned from the original data matrix, which lacks an explicit cluster structure; thus, they may not achieve optimal performance.
no code implementations • 19 May 2020 • Chong Peng, Zhilu Zhang, Zhao Kang, Chenglizhao Chen, Qiang Cheng
In particular, projection matrices are sought while building new data representations, such that spatial information is retained and the projections are guided by the clustering objective, which helps construct optimal projection directions.
no code implementations • 20 Dec 2019 • Wankai Tang, Jun Yan Dai, Ming Zheng Chen, Kai-Kit Wong, Xiao Li, Xinsheng Zhao, Shi Jin, Qiang Cheng, Tie Jun Cui
Reconfigurable intelligent surface (RIS) is a new paradigm that has great potential to achieve cost-effective, energy-efficient information modulation for wireless transmission, owing to its ability to change the reflection coefficients of the unit cells of a programmable metasurface.
no code implementations • 13 Nov 2019 • Wankai Tang, Ming Zheng Chen, Xiangyu Chen, Jun Yan Dai, Yu Han, Marco Di Renzo, Yong Zeng, Shi Jin, Qiang Cheng, Tie Jun Cui
The measurement results match well with the modeling results, thus validating the proposed free-space path loss models for RIS, which may pave the way for further theoretical studies and practical applications in this field.
no code implementations • 9 Jul 2019 • Chong Peng, Zhao Kang, Chenglizhao Chen, Qiang Cheng
Existing nonnegative matrix factorization methods focus on learning global structure of the data to construct basis and coefficient matrices, which ignores the local structure that commonly exists among data.
no code implementations • 5 Jun 2019 • Xinghua Yao, Qiang Cheng, Guo-Qiang Zhang
In order to capture essential seizure features, this paper integrates an emerging deep learning model, the independently recurrent neural network (IndRNN), with a dense structure and an attention mechanism to exploit temporal and spatial discriminating features and overcome seizure variabilities.
no code implementations • CVPR 2019 • Chong Peng, Chenglizhao Chen, Zhao Kang, Jianbo Li, Qiang Cheng
This drawback has limited the application of RPCA in solving real world problems.
no code implementations • 16 Apr 2019 • Chong Peng, Qiang Cheng
As a special case we focus on a quadratic model that admits a closed-form analytical solution.
no code implementations • 22 Mar 2019 • Xinghua Yao, Qiang Cheng, Guo-Qiang Zhang
In current clinical practices, electroencephalograms (EEG) are reviewed and analyzed by trained neurologists to provide supports for therapeutic decisions.
no code implementations • 28 Jan 2019 • Liyu Gong, Qiang Cheng
Moreover, we derive an intrinsic loss for the UTDAT Lie group, which can be computed as an L2 loss in the tangent space.
no code implementations • 7 Sep 2018 • Liyu Gong, Qiang Cheng
The proposed framework can consolidate current graph neural network models, e.g., graph convolutional networks (GCN) and graph attention networks (GAT).
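For reference, a minimal numpy sketch of the standard GCN propagation rule (Kipf-Welling style, one of the models the framework consolidates; this is not the paper's unified formulation):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolutional layer: relu(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degrees of A_hat (all >= 1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-node path graph: edges (0-1), (1-2), (2-3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # node features
W = rng.normal(size=(3, 2))   # layer weights
out = gcn_layer(A, H, W)
```

Each node's output mixes its own features with its neighbors' via the symmetrically normalized adjacency, which is the propagation step GAT replaces with learned attention weights.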
1 code implementation • 12 Nov 2017 • Zhao Kang, Chong Peng, Qiang Cheng, Zenglin Xu
Second, the discrete solution may deviate from the spectral solution, since the k-means method is well known to be sensitive to the initialization of cluster centers.
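The initialization sensitivity mentioned above is easy to see in a plain Lloyd's-algorithm implementation (a generic sketch, unrelated to the paper's proposed remedy): restarting from different random centers can land in different local optima, which is why spectral pipelines that round with k-means often use many restarts.

```python
import numpy as np

def kmeans(X, k, seed, n_iter=50):
    """Plain Lloyd's algorithm; returns labels and the final objective."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random-point init
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    inertia = d[np.arange(len(X)), labels].sum()
    return labels, inertia

# Three well-separated clusters; run from several random initializations.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in [(0, 0), (5, 0), (0, 5)]])
inertias = {round(kmeans(X, k=3, seed=s)[1], 6) for s in range(10)}
```

When two initial centers fall in the same true cluster, Lloyd's algorithm can converge with one cluster split and two merged, yielding a much larger objective than the best restart.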
no code implementations • CVPR 2017 • Chong Peng, Zhao Kang, Qiang Cheng
Spectral clustering based subspace clustering methods have emerged recently.
no code implementations • 1 May 2017 • Zhao Kang, Chong Peng, Qiang Cheng
Thus the learned similarity matrix is often not suitable, let alone optimal, for the subsequent clustering.
1 code implementation • 27 Sep 2016 • Zhao Kang, Chong Peng, Ming Yang, Qiang Cheng
To alleviate this problem, this paper proposes a simple recommendation algorithm that fully exploits the similarity information among users and items and intrinsic structural information of the user-item matrix.
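A bare-bones sketch of similarity-based recommendation (generic item-based collaborative filtering with cosine similarity, for intuition only; the paper's algorithm additionally exploits the structural information of the user-item matrix):

```python
import numpy as np

def item_based_scores(R):
    """Item-based CF: score items for each user via cosine similarity
    between the item columns of the user-item matrix R."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    S = (R.T @ R) / (norms.T @ norms + 1e-12)  # item-item cosine similarity
    np.fill_diagonal(S, 0.0)                   # ignore self-similarity
    return R @ S                               # predicted affinity scores

# Toy ratings: users 0-1 like items 0-1; users 2-3 like items 2-3.
R = np.array([[5, 4, 0, 0],
              [4, 5, 0, 0],
              [0, 0, 5, 4],
              [0, 0, 4, 5]], dtype=float)
scores = item_based_scores(R)
```

For user 0, item 1 scores high (it co-occurs with item 0 across users) while items 2-3 score zero, so a top-N list would rank items from the user's own taste block first.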
1 code implementation • 25 Feb 2016 • Zhao Kang, Qiang Cheng
Empirical results demonstrate that it can provide a better approximation to the original problems than convex relaxation.
no code implementations • 19 Jan 2016 • Zhao Kang, Chong Peng, Qiang Cheng
Top-N recommender systems have been investigated widely both in industry and academia.
1 code implementation • 17 Nov 2015 • Zhao Kang, Chong Peng, Qiang Cheng
This approximation to the matrix rank is tighter than the nuclear norm.
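A small numerical check of why such a surrogate is tighter: a common nonconvex rank surrogate of the form sum_i s_i/(s_i + gamma) (used here for illustration; the paper's exact functional may differ) tracks the true rank far more closely than the nuclear norm, which simply sums the singular values.

```python
import numpy as np

def nuclear_norm(s):
    """Nuclear norm = sum of singular values (convex envelope of rank)."""
    return s.sum()

def nonconvex_rank_approx(s, gamma=0.01):
    """Nonconvex surrogate sum_i s_i/(s_i + gamma): each term lies in
    [0, 1) and approaches 1 for any nonzero singular value."""
    return (s / (s + gamma)).sum()

# Rank-2 matrix with singular values of very different scales.
rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(8, 6)))[0]   # orthonormal columns
V = np.linalg.qr(rng.normal(size=(6, 6)))[0]
s_true = np.array([10.0, 0.5, 0.0, 0.0, 0.0, 0.0])
M = U @ np.diag(s_true) @ V.T
s = np.linalg.svd(M, compute_uv=False)
rank = np.linalg.matrix_rank(M)
```

Here the nuclear norm evaluates to about 10.5 while the surrogate stays near 2, matching the true rank: the surrogate penalizes large and small nonzero singular values almost equally, as the rank function does.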
1 code implementation • 30 Oct 2015 • Zhao Kang, Chong Peng, Qiang Cheng
For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method.
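To make the ALM scaffolding concrete, here is a sketch of the classic inexact-ALM solver for convex RPCA (D = L + S); the paper replaces the nuclear-norm step with its nonconvex surrogate, but the multiplier updates have the same shape. All parameter choices below are standard defaults, not the paper's settings.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca_ialm(D, lam=None, n_iter=100, rho=1.2):
    """Convex RPCA via inexact augmented Lagrange multipliers."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    Y = np.zeros_like(D)   # Lagrange multiplier for the constraint D = L + S
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)        # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)     # sparse update
        Y += mu * (D - L - S)                    # multiplier update
        mu *= rho                                # tighten the penalty
    return L, S

# Synthetic test: low-rank plus sparse corruption.
rng = np.random.default_rng(0)
L0 = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 40))
S0 = np.zeros((40, 40))
idx = rng.random((40, 40)) < 0.05
S0[idx] = rng.normal(scale=10, size=idx.sum())
L, S = rpca_ialm(L0 + S0)
rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
```

Alternating two cheap proximal steps with a gradually increasing penalty is what makes ALM practical for these nonsmooth objectives; swapping the surrogate only changes the thresholding rule inside `svt`.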
1 code implementation • 18 Aug 2015 • Zhao Kang, Chong Peng, Qiang Cheng
However, for many real-world applications, nuclear norm approximation to the rank function can only produce a result far from the optimum.
no code implementations • NeurIPS 2013 • Qiang Cheng, Qiang Liu, Feng Chen, Alexander T. Ihler
The KL divergence is optimized using the belief propagation algorithm, with complexity exponential in only the cluster size of the graph.
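On a tree-structured model, belief propagation is exact and its cost is local; a tiny chain example makes this visible (a generic BP illustration, not the paper's clustered variational construction):

```python
import numpy as np

# Pairwise MRF on a chain x0 - x1 - x2, binary states:
# p(x) ∝ psi01[x0, x1] * psi12[x1, x2]
rng = np.random.default_rng(0)
psi01 = rng.random((2, 2)) + 0.1
psi12 = rng.random((2, 2)) + 0.1

# Belief propagation: pass messages inward to x1, then normalize the belief.
m0_to_1 = psi01.sum(axis=0)          # message from x0 to x1 (sum over x0)
m2_to_1 = psi12.sum(axis=1)          # message from x2 to x1 (sum over x2)
belief1 = m0_to_1 * m2_to_1
belief1 /= belief1.sum()

# Brute force: enumerate all 2^3 joint states and marginalize over x0, x2.
joint = psi01[:, :, None] * psi12[None, :, :]
marginal1 = joint.sum(axis=(0, 2))
marginal1 /= marginal1.sum()
```

Each message only sums over one neighboring variable, so the cost grows with the local clique (cluster) size rather than the full joint state space, which is the complexity property the abstract highlights.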
no code implementations • NeurIPS 2010 • Hongbo Zhou, Qiang Cheng
In this paper, we propose a robust minimax framework to interpret the relationship between data and regularization terms for a large class of loss functions.