1 code implementation • 13 Sep 2023 • Gaotang Li, Jiarui Liu, Wei Hu
Neural networks produced by standard training are known to suffer from poor accuracy on rare subgroups despite achieving high accuracy on average, due to the correlations between certain spurious features and labels.
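The gap between average and worst-group accuracy that this abstract describes can be made concrete with a short evaluation sketch. The sketch below is illustrative only and not from the paper; `model`, `inputs`, `labels`, and `group_ids` are hypothetical placeholders.

```python
# Hypothetical sketch: contrast average accuracy with worst-group accuracy.
# `model`, `inputs`, `labels`, and `group_ids` are assumed placeholders.
import torch

@torch.no_grad()
def group_accuracies(model, inputs, labels, group_ids):
    """Return overall accuracy, per-group accuracy, and worst-group accuracy."""
    preds = model(inputs).argmax(dim=-1)
    correct = (preds == labels).float()
    avg_acc = correct.mean().item()
    per_group = {}
    for g in group_ids.unique().tolist():
        mask = group_ids == g
        per_group[g] = correct[mask].mean().item()
    worst_acc = min(per_group.values())
    return avg_acc, per_group, worst_acc
```

A model can score well on `avg_acc` while `worst_acc` is much lower whenever a rare subgroup lacks the spurious feature the model has latched onto.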
1 code implementation • 26 Jun 2023 • Gaotang Li, Marlena Duda, Xiang Zhang, Danai Koutra, Yujun Yan
Based on these insights, we propose a new model, Interpretable Graph Sparsification (IGS), which enhances graph classification performance by up to 5.1% with 55.0% fewer edges.
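The paper's exact sparsification procedure is not reproduced here; the sketch below only illustrates the general idea of scoring edges and keeping the highest-scoring ones, with all module and parameter names invented for illustration rather than taken from IGS.

```python
# Hypothetical sketch of graph sparsification via learned edge scores
# (illustrative only; not the IGS architecture from the paper).
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Scores each edge from the concatenated embeddings of its endpoints."""
    def __init__(self, node_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, edge_index):
        # x: [num_nodes, node_dim]; edge_index: [2, num_edges]
        src, dst = edge_index
        edge_feat = torch.cat([x[src], x[dst]], dim=-1)
        return self.mlp(edge_feat).squeeze(-1)  # one score per edge

def sparsify(x, edge_index, scorer, keep_ratio: float = 0.45):
    """Keep only the top-scoring fraction of edges (e.g., drop roughly 55%)."""
    scores = scorer(x, edge_index)
    k = max(1, int(keep_ratio * scores.numel()))
    topk = scores.topk(k).indices
    return edge_index[:, topk]
```

The retained edges also serve as a form of interpretation, since classification is performed on the sparsified graph.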
no code implementations • 24 May 2023 • Gaotang Li, Danai Koutra, Yujun Yan
Our empirical results reveal that our proposed size-insensitive attention strategy substantially enhances graph classification performance on large test graphs, which are 2-10 times larger than the training graphs, improving F1 scores by up to 8%.
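As a rough illustration of why an attention-based readout can be less sensitive to graph size than sum pooling, consider the sketch below; it is a generic attention-pooling layer written for this listing, not the paper's actual strategy, and the class and parameter names are assumptions.

```python
# Hypothetical sketch of an attention readout whose output scale does not
# grow with the number of nodes (illustrative; not the paper's exact method).
import torch
import torch.nn as nn

class AttentionReadout(nn.Module):
    def __init__(self, node_dim: int):
        super().__init__()
        self.score = nn.Linear(node_dim, 1)

    def forward(self, x):
        # x: [num_nodes, node_dim] for a single graph.
        # Softmax over nodes makes the weights sum to 1, so the pooled
        # embedding stays on a comparable scale for small and large graphs,
        # unlike plain sum pooling, whose magnitude grows with graph size.
        weights = torch.softmax(self.score(x), dim=0)  # [num_nodes, 1]
        return (weights * x).sum(dim=0)                # [node_dim]
```

This size-invariant scaling is one plausible reason such a readout transfers better to test graphs several times larger than those seen in training.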