2 Jul 2023 • Kaituo Feng, Yikun Miao, Changsheng Li, Ye Yuan, Guoren Wang
Knowledge distillation (KD) has been shown to be effective in boosting the performance of graph neural networks (GNNs); the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN.
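As a point of reference, the typical distillation objective mentioned above is usually a temperature-softened KL divergence between teacher and student outputs (in the style of Hinton et al.). The sketch below is a minimal, generic illustration of that objective, not the method proposed in this paper; all names, the temperature value, and the toy logits are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # numerically stable, temperature-scaled softmax over the last axis
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # averaged over nodes and rescaled by T^2 as is conventional.
    # This is the generic KD objective, not this paper's specific loss.
    p = softmax(teacher_logits, T)   # soft targets from the deeper teacher GNN
    q = softmax(student_logits, T)   # predictions of the shallower student GNN
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

# toy per-node logits (illustrative values only)
teacher = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
student = np.array([[1.0, 0.2, -0.5], [0.0, 1.0, 0.4]])
loss = kd_loss(student, teacher)
```

The loss is zero when the student matches the teacher exactly and grows as their softened output distributions diverge, which is what drives the student to mimic the teacher during training.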