no code implementations • 11 Oct 2023 • Yun Zhu, Yaoke Wang, Haizhou Shi, Zhenshuo Zhang, Dian Jiao, Siliang Tang
These pre-trained models can be applied to various downstream Web applications, saving training time and improving downstream (target) performance.
1 code implementation • 24 Jul 2023 • Yun Zhu, Haizhou Shi, Zhenshuo Zhang, Siliang Tang
In this work, we investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
no code implementations • 9 Mar 2023 • Zhenshuo Zhang, Yun Zhu, Haizhou Shi, Siliang Tang
Despite significant recent progress, large-scale graph representation learning remains expensive to train and deploy for two main reasons: (i) the repeated computation of multi-hop message passing and non-linearities in graph neural networks (GNNs); (ii) the computational cost of complex pairwise contrastive learning losses.
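The pairwise-loss cost the abstract refers to can be seen in a minimal sketch of an InfoNCE-style contrastive objective over two views of N node embeddings (the function name and shapes here are illustrative, not the paper's actual implementation): the loss requires an N x N similarity matrix, so its cost grows quadratically with the number of nodes.

```python
import numpy as np

def pairwise_contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss over two views of N node embeddings.

    Building the full (N, N) similarity matrix is what makes this
    objective quadratic in the number of nodes.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (N, N) pairwise similarities
    # For node i, (z1[i], z2[i]) is the positive pair; every other
    # column in row i serves as a negative.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))  # a slightly perturbed second view
loss = pairwise_contrastive_loss(z1, z2)
print(float(loss))
```

Because well-aligned views make each positive pair dominate its row of the similarity matrix, the loss here comes out well below the random-embedding baseline of log N.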