Search Results for author: Jiong Zhu

Found 7 papers, 5 papers with code

Graph Coarsening via Convolution Matching for Scalable Graph Neural Network Training

1 code implementation • 24 Dec 2023 • Charles Dickens, Eddie Huang, Aishwarya Reganti, Jiong Zhu, Karthik Subbian, Danai Koutra

Notably, CONVMATCH achieves up to 95% of the prediction performance of GNNs on node classification while trained on graphs summarized down to 1% the size of the original graph.
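For intuition about what training on a summarized graph involves, below is a minimal, generic coarsening sketch: nodes are merged into super-nodes according to a given cluster assignment, merged features are averaged, and parallel edges are collapsed into weights. This only illustrates the setting; the CONVMATCH convolution-matching objective itself is not reproduced, and all names here are illustrative.

    import numpy as np

    def coarsen(edges, features, cluster_of):
        """edges: list of (u, v) pairs; features: (n, d) array; cluster_of:
        length-n int array mapping each node to a super-node id (assumed to
        cover every super-node id at least once)."""
        k = int(cluster_of.max()) + 1
        # Average the features of the nodes merged into each super-node.
        sums = np.zeros((k, features.shape[1]))
        np.add.at(sums, cluster_of, features)
        counts = np.bincount(cluster_of, minlength=k)
        coarse_feats = sums / counts[:, None]
        # Collapse parallel edges between super-nodes into integer weights.
        coarse_edges = {}
        for u, v in edges:
            cu, cv = int(cluster_of[u]), int(cluster_of[v])
            if cu != cv:
                key = (min(cu, cv), max(cu, cv))
                coarse_edges[key] = coarse_edges.get(key, 0) + 1
        return coarse_edges, coarse_feats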

Link Prediction, Node Classification

On Performance Discrepancies Across Local Homophily Levels in Graph Neural Networks

no code implementations • 8 Jun 2023 • Donald Loveland, Jiong Zhu, Mark Heimann, Benjamin Fish, Michael T. Schaub, Danai Koutra

We ground the practical implications of this work through granular analysis on five real-world datasets with varying global homophily levels, demonstrating that (a) GNNs can fail to generalize to test nodes that deviate from the global homophily of a graph, and (b) high local homophily does not necessarily confer high performance for a node.
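As a reference point for what "local homophily" measures, here is a minimal sketch under one common definition, the fraction of a node's neighbors that share its label; the paper's exact formulation may differ, and the names are illustrative.

    from collections import defaultdict

    def local_homophily(edges, labels):
        """Per-node fraction of neighbors sharing the node's label."""
        neighbors = defaultdict(list)
        for u, v in edges:
            neighbors[u].append(v)
            neighbors[v].append(u)
        return {
            v: sum(labels[u] == labels[v] for u in nbrs) / len(nbrs)
            for v, nbrs in neighbors.items()
        }

    # Node 0 agrees with 1 of its 2 neighbors -> local homophily 0.5
    print(local_homophily([(0, 1), (0, 2), (1, 2)], {0: "a", 1: "a", 2: "b"}))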

Node Classification

Simplifying Distributed Neural Network Training on Massive Graphs: Randomized Partitions Improve Model Aggregation

1 code implementation • 17 May 2023 • Jiong Zhu, Aishwarya Reganti, Edward Huang, Charles Dickens, Nikhil Rao, Karthik Subbian, Danai Koutra

Backed by our theoretical analysis, instead of maximizing the recovery of cross-instance node dependencies -- which has been considered the key to closing the performance gap between model aggregation and centralized training -- our framework leverages randomized assignment of nodes or super-nodes (i.e., collections of original nodes) to partition the training graph, improving data uniformity and minimizing the discrepancy of gradient and loss function across instances.
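A minimal sketch of the randomized-assignment idea, assuming nodes (or super-nodes) are placed uniformly at random across training instances; the authors' full framework, including how models are aggregated, is not reproduced here, and the names are illustrative.

    import random

    def random_partition(node_ids, num_parts, seed=0):
        """Assign each node (or super-node) id to one of num_parts training
        instances uniformly at random, so each partition is an approximately
        uniform sample of the graph rather than a locality-preserving cut."""
        rng = random.Random(seed)
        parts = [[] for _ in range(num_parts)]
        for v in node_ids:
            parts[rng.randrange(num_parts)].append(v)
        return parts

    # Each instance would then train on the subgraph induced by its
    # partition, and the per-instance models would be aggregated afterwards.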

On Graph Neural Network Fairness in the Presence of Heterophilous Neighborhoods

no code implementations • 10 Jul 2022 • Donald Loveland, Jiong Zhu, Mark Heimann, Ben Fish, Michael T. Schaub, Danai Koutra

We study the task of node classification for graph neural networks (GNNs) and establish a connection between group fairness, as measured by statistical parity and equal opportunity, and local assortativity, i.e., the tendency of linked nodes to have similar attributes.
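For context, statistical parity can be summarized as the gap in positive-prediction rates between protected groups; below is a minimal sketch of that metric with illustrative names (equal opportunity would additionally condition on the true label).

    def statistical_parity_diff(preds, groups):
        """preds: binary predictions; groups: parallel protected-group ids."""
        def rate(g):
            members = [p for p, grp in zip(preds, groups) if grp == g]
            return sum(members) / len(members)
        return abs(rate(0) - rate(1))

    # Group 0 gets positives at rate 0.5, group 1 at rate 1.0 -> disparity 0.5
    print(statistical_parity_diff([1, 0, 1, 1], [0, 0, 1, 1]))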

Attribute, Fairness

How does Heterophily Impact the Robustness of Graph Neural Networks? Theoretical Connections and Practical Implications

1 code implementation • 14 Jun 2021 • Jiong Zhu, Junchen Jin, Donald Loveland, Michael T. Schaub, Danai Koutra

We bridge two research directions on graph neural networks (GNNs), by formalizing the relation between heterophily of node labels (i.e., connected nodes tend to have dissimilar labels) and the robustness of GNNs to adversarial attacks.
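A common way to quantify this is the edge homophily ratio h, the fraction of edges whose endpoints share a label; low h indicates heterophily. A minimal sketch, with illustrative names:

    def edge_homophily(edges, labels):
        """Fraction of edges whose two endpoints share a label."""
        return sum(labels[u] == labels[v] for u, v in edges) / len(edges)

    # A perfectly heterophilous toy path graph: no edge links same labels, h = 0.0
    print(edge_homophily([(0, 1), (1, 2), (2, 3)],
                         {0: "a", 1: "b", 2: "a", 3: "b"}))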

Graph Neural Networks with Heterophily

1 code implementation • 28 Sep 2020 • Jiong Zhu, Ryan A. Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K. Ahmed, Danai Koutra

Graph Neural Networks (GNNs) have proven to be useful for many different practical applications.

Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs

4 code implementations • NeurIPS 2020 • Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, Danai Koutra

We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features.
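One of the effective designs this paper identifies is keeping the ego-embedding separate from the aggregated neighbor-embedding, combining them by concatenation rather than mixing them together. A minimal NumPy sketch of one such layer, assuming mean aggregation; shapes and names are illustrative, not the paper's code.

    import numpy as np

    def ego_neighbor_layer(H, A):
        """H: (n, d) node embeddings; A: (n, n) adjacency without self-loops."""
        deg = A.sum(axis=1, keepdims=True).clip(min=1)
        H_neigh = (A @ H) / deg  # mean of neighbor embeddings only
        # Concatenate ego and neighbor views so they stay distinguishable,
        # which helps when neighbors tend to have different labels.
        return np.concatenate([H, H_neigh], axis=1)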

Node Classification on Non-Homophilic (Heterophilic) Graphs
