no code implementations • 26 Feb 2024 • Weilin Cong, Jian Kang, Hanghang Tong, Mehrdad Mahdavi
Temporal Graph Learning (TGL) has become a prevalent technique across diverse real-world applications, especially in domains where data can be represented as a graph that evolves over time.
1 code implementation • 7 Jun 2023 • Xiao Lin, Jian Kang, Weilin Cong, Hanghang Tong
Fairness in graph neural networks has been actively studied recently.
no code implementations • 22 Feb 2023 • Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, Mehrdad Mahdavi
Recurrent neural network (RNN) and self-attention mechanism (SAM) are the de facto methods to extract spatial-temporal information for temporal graph learning.
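The self-attention variant is easy to make concrete. Below is a minimal sketch, assuming PyTorch, in which a target node attends over its time-stamped neighbor messages; the module name, toy shapes, and additive time encoding are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (assumptions: PyTorch; toy shapes; a generic illustration
# of self-attention over temporal neighbors, not the paper's exact model).
import torch
import torch.nn as nn

class TemporalNeighborAttention(nn.Module):
    """Aggregate a node's time-stamped neighbor messages with self-attention."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, node_feat, neigh_feats, time_enc):
        # node_feat:   (B, 1, D)  query = the target node at prediction time
        # neigh_feats: (B, L, D)  keys/values = L temporal neighbor messages
        # time_enc:    (B, L, D)  encoding of each interaction's timestamp
        kv = neigh_feats + time_enc           # inject temporal order
        out, _ = self.attn(node_feat, kv, kv)
        return out.squeeze(1)                 # (B, D) spatial-temporal summary

B, L, D = 8, 16, 32
layer = TemporalNeighborAttention(D)
h = layer(torch.randn(B, 1, D), torch.randn(B, L, D), torch.randn(B, L, D))
print(h.shape)  # torch.Size([8, 32])
```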
no code implementations • 17 Feb 2023 • Weilin Cong, Mehrdad Mahdavi
As privacy protection receives growing attention, unlearning the effect of a specific node from a pre-trained graph learning model has become increasingly important.
no code implementations • 19 Nov 2021 • Weilin Cong, Yanhong Wu, Yuandong Tian, Mengting Gu, Yinglong Xia, Chun-cheng Jason Chen, Mehrdad Mahdavi
To achieve efficient and scalable training, we propose temporal-union graph structure and its associated subgraph-based node sampling strategy.
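As a rough illustration, here is a minimal sketch in plain Python. It assumes a "temporal-union graph" means the union of all edges falling inside a time window (with timestamps retained) and pairs it with a generic fanout-limited subgraph sampler; the helper names and the sampling rule are assumptions, not the paper's exact design.

```python
# Minimal sketch (assumptions: the temporal-union graph is the union of
# edges in a time window; the sampler is a generic subgraph-based node
# sampler, not the paper's strategy).
import random
from collections import defaultdict

def temporal_union(edges, t_start, t_end):
    """edges: iterable of (u, v, t). Returns adjacency of the union graph."""
    adj = defaultdict(list)
    for u, v, t in edges:
        if t_start <= t <= t_end:
            adj[u].append((v, t))
            adj[v].append((u, t))
    return adj

def sample_subgraph(adj, seed_nodes, fanout=3, seed=0):
    """Induce a training subgraph from seed nodes plus sampled neighbors."""
    rng = random.Random(seed)
    nodes = set(seed_nodes)
    for u in seed_nodes:
        neigh = adj.get(u, [])
        for v, _ in rng.sample(neigh, min(fanout, len(neigh))):
            nodes.add(v)
    # Keep each undirected edge once (u < v) within the induced node set.
    sub_edges = [(u, v, t) for u in nodes
                 for v, t in adj.get(u, []) if v in nodes and u < v]
    return nodes, sub_edges

edges = [(0, 1, 1.0), (1, 2, 2.5), (0, 2, 3.0), (2, 3, 4.2)]
adj = temporal_union(edges, 0.0, 3.5)
print(sample_subgraph(adj, [0], fanout=2))
```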
no code implementations • ICLR 2022 • Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Mahmut T. Kandemir, Anand Sivasubramaniam
To address this performance degradation, we propose applying Global Server Corrections on the server to refine the locally learned models.
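A hedged sketch of the overall shape of such a scheme follows, assuming a FedAvg-style aggregation and a generic server-side refinement step on server-held data; the actual Global Server Corrections procedure may differ.

```python
# Minimal sketch (NumPy; FedAvg plus a generic server-side correction,
# an assumption rather than the paper's algorithm).
import numpy as np

def fedavg(local_weights, sizes):
    """Weighted average of locally learned models."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

def server_correction(global_w, server_grad_fn, lr=0.1, steps=5):
    """Refine the aggregated model with a few gradient steps on the server."""
    w = global_w.copy()
    for _ in range(steps):
        w -= lr * server_grad_fn(w)
    return w

# Toy example: two clients whose local solutions disagree.
local_models = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
w = fedavg(local_models, sizes=[100, 300])
w = server_correction(w, server_grad_fn=lambda w: 2 * (w - 0.5))
print(w)
```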
1 code implementation • NeurIPS 2021 • Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
Graph Convolutional Networks (GCNs) are known to suffer from performance degradation as the number of layers increases, which is usually attributed to over-smoothing.
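The effect itself is easy to reproduce. The NumPy sketch below applies the symmetrically normalized propagation matrix repeatedly and shows the spread of node representations collapsing; this is a generic demonstration of over-smoothing, not the paper's analysis.

```python
# Minimal sketch (NumPy): repeated propagation with the normalized adjacency
# drives node representations toward a low-rank, degree-determined limit,
# shrinking their spread -- the over-smoothing effect referenced above.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                          # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
P = D_inv_sqrt @ A_hat @ D_inv_sqrt            # normalized propagation matrix

X = np.random.RandomState(0).randn(4, 8)       # initial node features
for k in [1, 2, 8, 32]:
    H = np.linalg.matrix_power(P, k) @ X
    spread = np.linalg.norm(H - H.mean(0), axis=1).mean()
    print(f"{k:>2} propagation steps, mean distance to centroid: {spread:.4f}")
```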
1 code implementation • 3 Mar 2021 • Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
In this paper, we describe and analyze a general doubly variance reduction schema that can accelerate any sampling method under the memory budget.
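For intuition, the sketch below shows one half of such a schema in isolation: SVRG-style reduction of the stochastic gradient variance on a toy least-squares problem. The doubly variance-reduced estimator in the paper additionally corrects the embedding approximation error, which this sketch omits.

```python
# Minimal sketch (NumPy) of SVRG-style stochastic gradient variance
# reduction; an illustration of variance reduction in general, not the
# paper's doubly variance-reduced estimator.
import numpy as np

rng = np.random.RandomState(0)
A, b = rng.randn(200, 5), rng.randn(200)      # toy least-squares problem

def grad_i(w, i):                             # per-sample gradient
    return 2 * A[i] * (A[i] @ w - b[i])

def full_grad(w):
    return (2 * A.T @ (A @ w - b)) / len(b)

w, lr = np.zeros(5), 0.01
for epoch in range(20):
    w_snap, g_snap = w.copy(), full_grad(w)   # snapshot point + full gradient
    for _ in range(len(b)):
        i = rng.randint(len(b))
        # Control variate: unbiased, lower-variance gradient estimate.
        g = grad_i(w, i) - grad_i(w_snap, i) + g_snap
        w -= lr * g
print(np.linalg.norm(full_grad(w)))           # gradient norm near zero
```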
no code implementations • NeurIPS 2020 • Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Anand Sivasubramaniam, Mahmut Kandemir
Sampling-based methods promise scalability improvements when paired with stochastic gradient descent in training Graph Convolutional Networks (GCNs).
no code implementations • 24 Jun 2020 • Weilin Cong, Rana Forsati, Mahmut Kandemir, Mehrdad Mahdavi
In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method decomposes into embedding approximation variance in the forward stage and stochastic gradient variance in the backward stage, and that mitigating both is necessary to obtain a faster convergence rate.
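Schematically, with g denoting the mini-batch gradient computed with exact embeddings and g̃ the one computed with sampled, approximate embeddings, a two-term bound of this shape follows from Young's inequality; the constants and norms below are illustrative, not the paper's exact bound.

```latex
% Composite empirical risk: a per-node loss f applied to an embedding
% h_i(\theta) that is itself computed from (sampled) neighbors.
\mathcal{L}(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} f\!\big(h_i(\theta)\big)

% Let g be the mini-batch gradient with exact embeddings and \tilde g the
% one with approximate embeddings. Young's inequality,
% \|a + c\|^2 \le 2\|a\|^2 + 2\|c\|^2, yields the two-term bound:
\mathbb{E}\big\|\tilde g - \nabla\mathcal{L}(\theta)\big\|^2
\;\le\;
\underbrace{2\,\mathbb{E}\big\|\tilde g - g\big\|^2}_{\text{embedding approximation variance (forward)}}
\;+\;
\underbrace{2\,\mathbb{E}\big\|g - \nabla\mathcal{L}(\theta)\big\|^2}_{\text{stochastic gradient variance (backward)}}
```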
no code implementations • 20 Nov 2018 • Weilin Cong, William Wang, Wang-Chien Lee
A scene graph, a graph representation of an image that captures object instances and their relationships, offers a comprehensive understanding of the image.
no code implementations • 28 Jul 2017 • Weilin Cong, Sanyuan Zhao, Hui Tian, Jianbing Shen
Real-world face detection and alignment demand an advanced discriminative model that addresses challenges posed by pose, lighting, and expression variation.