ReGAE: Graph autoencoder based on recursive neural networks

28 Jan 2022 · Adam Małkowski, Jakub Grzechociński, Paweł Wawrzyński

Invertible transformation of large graphs into fixed-dimensional vectors (embeddings) remains a challenge. Overcoming it would reduce any operation on graphs to an operation in a vector space. However, most existing methods are limited to graphs with tens of vertices. In this paper we address the above challenge with a pair of recursive neural networks: an encoder and a decoder. The encoder network transforms embeddings of subgraphs into embeddings of larger subgraphs, and eventually into the embedding of the input graph. The decoder does the opposite. The dimension of the embeddings is constant regardless of the size of the (sub)graphs. Simulation experiments presented in this paper confirm that our proposed graph autoencoder, ReGAE, can handle even graphs with thousands of vertices.
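The abstract describes the recursive encode/decode scheme only at a high level. The minimal PyTorch sketch below illustrates the general idea of growing a fixed-dimensional subgraph embedding one step at a time and reversing the process; it is not the authors' implementation, and the module names, dimensions, and merge rule (`EncoderStep`, `DecoderStep`, `EMB_DIM`, `VERTEX_DIM`) are assumptions introduced purely for illustration.

```python
# Illustrative sketch only (NOT the ReGAE implementation from the paper).
# Idea: the encoder step merges a subgraph embedding with information about a
# newly attached vertex, producing an embedding of the enlarged subgraph with
# the SAME fixed dimension; the decoder step reverses this. All names and
# dimensions below are assumptions for illustration.

import torch
import torch.nn as nn

EMB_DIM = 64      # fixed embedding dimension, independent of graph size (assumed)
VERTEX_DIM = 8    # per-vertex feature dimension (assumed)


class EncoderStep(nn.Module):
    """(subgraph embedding, new vertex features) -> enlarged-subgraph embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM + VERTEX_DIM, 2 * EMB_DIM),
            nn.ReLU(),
            nn.Linear(2 * EMB_DIM, EMB_DIM),
        )

    def forward(self, subgraph_emb, vertex_feat):
        return self.net(torch.cat([subgraph_emb, vertex_feat], dim=-1))


class DecoderStep(nn.Module):
    """Enlarged-subgraph embedding -> (smaller-subgraph embedding, vertex features)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM, 2 * EMB_DIM),
            nn.ReLU(),
            nn.Linear(2 * EMB_DIM, EMB_DIM + VERTEX_DIM),
        )

    def forward(self, subgraph_emb):
        out = self.net(subgraph_emb)
        return out[..., :EMB_DIM], out[..., EMB_DIM:]


if __name__ == "__main__":
    enc, dec = EncoderStep(), DecoderStep()
    emb = torch.zeros(1, EMB_DIM)               # embedding of the empty subgraph
    vertices = torch.randn(100, 1, VERTEX_DIM)  # toy features of a 100-vertex graph
    for v in vertices:                          # grow the embedding vertex by vertex
        emb = enc(emb, v)
    print(emb.shape)                            # stays (1, 64) regardless of graph size
    smaller_emb, recovered_v = dec(emb)         # one reverse (decoding) step
```

The point of the sketch is the constant embedding dimension: however many recursive steps are applied, the vector passed forward never grows, which is what allows arbitrarily large graphs to map into a fixed-size vector space.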
