no code implementations • 6 Feb 2024 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Akiko Takeda
We provide convergence and complexity analysis for the proposed hypergradient descent algorithm on manifolds.
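For reference (a standard Euclidean identity, not taken from the paper): in the bilevel problem $\min_x F(x) := f(x, y^*(x))$ with $y^*(x) = \arg\min_y g(x, y)$, the hypergradient that such methods descend is

```latex
\nabla F(x) = \nabla_x f\big(x, y^*(x)\big)
  - \nabla^2_{xy} g\big(x, y^*(x)\big)
    \left[\nabla^2_{yy} g\big(x, y^*(x)\big)\right]^{-1}
    \nabla_y f\big(x, y^*(x)\big)
```

In the manifold setting, these derivatives are replaced by their Riemannian counterparts.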
no code implementations • 26 Jan 2024 • Dai Shi, Andi Han, Lequan Lin, Yi Guo, Zhiyong Wang, Junbin Gao
Physics-informed Graph Neural Networks have achieved remarkable performance in learning from graph-structured data by mitigating common GNN challenges such as over-smoothing, over-squashing, and heterophily adaptation.
no code implementations • 16 Jan 2024 • Lequan Lin, Dai Shi, Andi Han, Junbin Gao
Our method generates the Fourier representation of future time series, transforming the learning process into the spectral domain enriched with spatial information.
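As an illustrative sketch of the general idea (function names are hypothetical, not the paper's code): forecasting targets can be mapped to Fourier coefficients via an FFT, regressed in the spectral domain, and inverted back to the time domain.

```python
import numpy as np

def to_spectral_target(y_future):
    """Convert a future window into its real Fourier coefficients so a
    model can regress spectral targets instead of raw values.
    Sketch under assumptions; not the paper's architecture."""
    coeffs = np.fft.rfft(y_future, axis=-1)
    return np.concatenate([coeffs.real, coeffs.imag], axis=-1)

def from_spectral_prediction(pred, horizon):
    """Invert a spectral prediction back to the time domain."""
    k = horizon // 2 + 1                      # number of rfft coefficients
    coeffs = pred[..., :k] + 1j * pred[..., k:]
    return np.fft.irfft(coeffs, n=horizon, axis=-1)

y = np.sin(np.linspace(0, 4 * np.pi, 24))     # toy 24-step horizon
z = to_spectral_target(y)
assert np.allclose(from_spectral_prediction(z, 24), y)
```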
no code implementations • 13 Nov 2023 • Dai Shi, Andi Han, Lequan Lin, Yi Guo, Junbin Gao
Graph-based message-passing neural networks (MPNNs) have achieved remarkable success in both node-level and graph-level learning tasks.
no code implementations • 16 Oct 2023 • Andi Han, Dai Shi, Lequan Lin, Junbin Gao
Such a scheme has been found to be intrinsically linked to a physical process known as heat diffusion, where feature propagation in GNNs naturally corresponds to the evolution of heat density.
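A minimal sketch of the heat-diffusion view (illustrative only, not the paper's model): feature propagation as explicit Euler steps on the graph heat equation $\dot{X} = -LX$.

```python
import numpy as np

def heat_diffusion_propagate(X, A, tau=0.1, steps=5):
    """Propagate node features X by discretizing the graph heat
    equation dX/dt = -L X with explicit Euler steps.
    A is a symmetric adjacency matrix; L the unnormalized Laplacian."""
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    for _ in range(steps):
        X = X - tau * (L @ X)                # X_{t+1} = X_t - tau * L X_t
    return X

# toy example: 3-node path graph, 2-dim features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1., 0.], [0., 0.], [0., 1.]])
print(heat_diffusion_propagate(X, A))
```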
no code implementations • 6 Sep 2023 • Zhiqi Shao, Dai Shi, Andi Han, Yi Guo, Qibin Zhao, Junbin Gao
To explore more flexible filtering conditions, we further generalize MHKG into a model termed G-MHKG and thoroughly show the role of each element in controlling over-smoothing, over-squashing, and expressive power.
1 code implementation • 27 Oct 2022 • Zhiqi Shao, Andi Han, Dai Shi, Andrey Vasnev, Junbin Gao
This paper introduces a novel Framelet Graph approach based on the p-Laplacian GNN.
1 code implementation • 10 Oct 2022 • Saiteja Utpala, Andi Han, Pratik Jawanpuria, Bamdev Mishra
We present Rieoptax, an open-source Python library for Riemannian optimization in JAX.
no code implementations • 8 Oct 2022 • Andi Han, Dai Shi, Zhiqi Shao, Junbin Gao
In this work, we provide a theoretical understanding of framelet-based graph neural networks from the perspective of energy gradient flow.
no code implementations • 13 Aug 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
In this paper, we propose a simple acceleration scheme for Riemannian gradient methods by extrapolating iterates on manifolds.
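To illustrate the generic idea of extrapolating iterates on a manifold (here the unit sphere; a sketch, not the paper's scheme), one can move past the current iterate along the geodesic through the two most recent iterates.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere at x for tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def sphere_log(x, y):
    """Logarithm map on the unit sphere: tangent vector at x toward y."""
    p = y - np.dot(x, y) * x                 # project y onto tangent space at x
    norm_p = np.linalg.norm(p)
    if norm_p < 1e-12:
        return np.zeros_like(x)
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0)) * (p / norm_p)

def extrapolate(x_prev, x_curr, t=1.5):
    """Move past x_curr along the geodesic from x_prev to x_curr;
    t > 1 extrapolates beyond the current iterate."""
    return sphere_exp(x_prev, t * sphere_log(x_prev, x_curr))

x_prev = np.array([1.0, 0.0, 0.0])
x_curr = np.array([0.0, 1.0, 0.0])
print(extrapolate(x_prev, x_curr))           # a point past x_curr on the geodesic
```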
1 code implementation • 19 May 2022 • Chunya Zou, Andi Han, Lequan Lin, Junbin Gao
In this paper, we propose a simple yet effective graph neural network for directed graphs (digraphs), named SVD-GCN, based on the classical Singular Value Decomposition (SVD).
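A rough sketch of why the SVD is natural here: a digraph's adjacency matrix is asymmetric, so it may lack an orthogonal eigendecomposition, but its SVD always exists and supplies orthonormal bases for spectral-style filtering (illustrative only, not the SVD-GCN layer itself).

```python
import numpy as np

def svd_graph_filter(A, X, k=8, filt=lambda s: s / (1.0 + s)):
    """Spectral-style filtering of node features X on a directed graph
    using the SVD of the (asymmetric) adjacency matrix A."""
    U, S, Vt = np.linalg.svd(A)
    k = min(k, len(S))
    # filter the top-k singular values, then propagate features
    return U[:, :k] @ np.diag(filt(S[:k])) @ Vt[:k] @ X
```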
no code implementations • 19 May 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
We introduce a framework of differentially private Riemannian optimization by adding noise to the Riemannian gradient on the tangent space.
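A minimal sketch of the mechanism on the unit sphere (privacy calibration of the noise scale to a target $(\epsilon, \delta)$ is omitted): project the gradient to the tangent space, add tangent-space Gaussian noise, and retract.

```python
import numpy as np

def dp_riemannian_step(x, egrad, lr=0.1, sigma=0.5, rng=None):
    """One noisy Riemannian gradient step on the unit sphere: project
    the Euclidean gradient to the tangent space at x, add Gaussian
    noise sampled in that tangent space, then retract.
    Sketch only, not the paper's algorithm."""
    rng = rng or np.random.default_rng(0)
    proj = lambda v: v - np.dot(x, v) * x    # tangent projection at x
    rgrad = proj(egrad)                      # Riemannian gradient
    noise = proj(rng.normal(scale=sigma, size=x.shape))
    x_new = x - lr * (rgrad + noise)
    return x_new / np.linalg.norm(x_new)     # retraction to the sphere
```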
no code implementations • 25 Apr 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Pawan Kumar, Junbin Gao
In this paper, we study min-max optimization problems on Riemannian manifolds.
1 code implementation • 30 Jan 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
In this work, we study the optimal transport (OT) problem between symmetric positive definite (SPD) matrix-valued measures.
1 code implementation • 20 Oct 2021 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
Learning with symmetric positive definite (SPD) matrices has many applications in machine learning.
no code implementations • 3 Jun 2021 • Dai Shi, Andi Han, Yi Guo, Junbin Gao
In this work, we investigate the validity of the learning results of some widely used dimensionality reduction (DR) and manifold learning (ManL) methods through the chart mapping function of a manifold.
1 code implementation • NeurIPS 2021 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
We build on this to show that the Bures-Wasserstein (BW) metric is a more suitable and robust choice for several Riemannian optimization problems over ill-conditioned SPD matrices.
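For reference, the Bures-Wasserstein distance between SPD matrices $X$ and $Y$ is

```latex
d_{\mathrm{BW}}(X, Y)^2 = \operatorname{tr}(X) + \operatorname{tr}(Y)
  - 2\operatorname{tr}\!\Big(\big(X^{1/2} Y X^{1/2}\big)^{1/2}\Big).
```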
no code implementations • 23 Oct 2020 • Andi Han, Junbin Gao
In this paper, we propose a variant of the Riemannian stochastic recursive gradient method that can achieve a second-order convergence guarantee and escape saddle points using simple perturbations.
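The generic perturbation idea can be sketched as follows (unit sphere shown; this is not the paper's algorithm): when the Riemannian gradient is small, a candidate saddle point, inject a random tangent-space perturbation before continuing descent.

```python
import numpy as np

def perturb_if_stuck(x, rgrad, radius=1e-2, tol=1e-3, rng=None):
    """If the Riemannian gradient at x is small, add a random
    perturbation sampled in the tangent space at x, then retract."""
    rng = rng or np.random.default_rng(0)
    if np.linalg.norm(rgrad) <= tol:
        xi = rng.normal(size=x.shape)
        xi -= np.dot(x, xi) * x              # project to tangent space
        xi *= radius / max(np.linalg.norm(xi), 1e-12)
        x = (x + xi) / np.linalg.norm(x + xi)  # retract to the sphere
    return x
```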
no code implementations • 11 Aug 2020 • Andi Han, Junbin Gao
We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves a near-optimal complexity of $\tilde{\mathcal{O}}(\epsilon^{-3})$ to find an $\epsilon$-approximate solution with one sample.
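The recursive momentum estimator at the core of such methods (shown here in Euclidean form as a sketch; a Riemannian version would additionally transport past quantities to the current tangent space) is:

```python
import numpy as np

def storm_estimator(grad_curr, grad_prev, d_prev, a=0.1):
    """Recursive momentum (STORM-style) gradient estimator:
        d_t = g_t(x_t) + (1 - a) * (d_{t-1} - g_t(x_{t-1})),
    where both stochastic gradients use the same sample."""
    return grad_curr + (1.0 - a) * (d_prev - grad_prev)

# toy usage with 3-dim gradients
g_t, g_prev, d_prev = np.ones(3), 0.5 * np.ones(3), np.zeros(3)
print(storm_estimator(g_t, g_prev, d_prev))
```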
no code implementations • 3 Jul 2020 • Andi Han, Junbin Gao
Variance reduction techniques are popular for accelerating gradient descent and stochastic gradient descent on optimization problems defined on both Euclidean spaces and Riemannian manifolds.