Online Learning of Graph Neural Networks: When Can Data Be Permanently Deleted?

1 Jan 2021  ·  Lukas Paul Achatius Galke, Benedikt Franke, Tobias Zielke, Ansgar Scherp

Online learning of graph neural networks (GNNs) faces the challenges of distribution shift and ever-growing, ever-changing training data as temporal graphs evolve over time. This makes it inefficient to train over the complete graph whenever new data arrives. Deleting old data at some point may be preferable, both to maintain good performance and to account for distribution shift. We systematically analyze these issues by incrementally training and evaluating GNNs in a sliding window over temporal graphs. We experiment with three representative GNN architectures and two scalable GNN techniques on three new datasets. In our experiments, the GNNs face the challenge that new vertices, edges, and even classes appear and disappear over time. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% accuracy compared to training over the full graph. In most cases, i.e., in 15 out of 18 experiments, we even observe that a temporal window of size 1 is sufficient to retain at least 90% accuracy.
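The experimental protocol described above, incrementally training and evaluating a GNN in a sliding window over graph snapshots, can be illustrated with a short sketch. The code below is not the authors' implementation: the ToyGCN model, the merge helper, the snapshot format (feature matrix, dense adjacency, label vector), and the window/epochs/lr parameters are all illustrative assumptions standing in for the GNN architectures and scalable techniques evaluated in the paper.

```python
# Minimal sketch of sliding-window incremental training on temporal graph snapshots.
# Assumes each snapshot is a (x, adj, y) tuple of node features, dense adjacency,
# and integer class labels (torch.long). All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyGCN(nn.Module):
    """Two-layer GCN operating on a dense, symmetrically normalized adjacency."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)


def normalize_adj(adj):
    """Compute D^{-1/2} (A + I) D^{-1/2} for a dense adjacency matrix A."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


def merge(snaps):
    """Merge the snapshots of a window into one graph (block-diagonal
    simplification; a real window would also keep edges between snapshots)."""
    xs, adjs, ys = zip(*snaps)
    return torch.cat(xs), torch.block_diag(*adjs), torch.cat(ys)


def train_eval_sliding_window(snapshots, window=1, epochs=50, lr=0.01):
    """At each step t, train only on the last `window` snapshots (older data is
    permanently deleted) and evaluate on the unseen snapshot t + 1."""
    accuracies = []
    for t in range(window - 1, len(snapshots) - 1):
        x, adj, y = merge(snapshots[t - window + 1 : t + 1])
        x_next, adj_next, y_next = snapshots[t + 1]

        # Retrain from scratch on the current window only.
        model = ToyGCN(x.size(1), 32, int(y.max()) + 1)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        adj_norm = normalize_adj(adj)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(model(x, adj_norm), y)
            loss.backward()
            opt.step()

        # Evaluate on the next snapshot, where new vertices, edges,
        # and even classes unseen in the window may appear.
        with torch.no_grad():
            pred = model(x_next, normalize_adj(adj_next)).argmax(dim=1)
            accuracies.append((pred == y_next).float().mean().item())
    return accuracies
```

For example, `train_eval_sliding_window(snapshots, window=1)` corresponds to the setting where only the most recent snapshot is retained, i.e., all older data has been deleted before each retraining step.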
