Exposition and Interpretation of the Topology of Neural Networks

8 Oct 2018  ·  Rickard Brüel Gabrielsson, Gunnar Carlsson

Convolutional neural networks (CNNs) are powerful and widely used tools, but their interpretability is far from ideal. One shortcoming is the difficulty of deducing a network's ability to generalize to unseen data. We use topological data analysis to show that the information encoded in the weights of a CNN can be organized in terms of a topological data model, and we demonstrate how such information can be interpreted and utilized. We show that the weights of convolutional layers at depths from 1 through 13 learn simple global structures, and we track how these simple structures change over the course of training. In particular, we define and analyze the spaces of spatial filters of convolutional layers and show that a simple circle of rotating edges recurs across all networks, depths, and stages of training, alongside a less frequently occurring, unanticipated complex circle that combines lines, edges, and non-linear patterns. We also demonstrate that topological structure correlates with a network's ability to generalize to unseen data and that topological information can be used to improve a network's performance. We train over a thousand CNNs on MNIST, CIFAR-10, SVHN, and ImageNet.
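To make the "space of spatial filters" construction concrete, the sketch below extracts the 3×3 filters of a convolutional layer as points in R^9, mean-centers and normalizes each one, and computes persistent homology in dimension 1, where a single long-lived class is the usual signature of a circle such as the rotating-edge circle described above. This is a minimal sketch, not the authors' pipeline: the randomly initialized PyTorch layer, the normalization threshold, and the use of the ripser package are illustrative assumptions; the paper analyzes filters from trained networks with a density-based selection step.

```python
import numpy as np
import torch.nn as nn
from ripser import ripser  # assumption: ripser.py is installed for persistent homology

# Hypothetical layer for illustration; in practice the weights would come from a
# trained network (a randomly initialized layer is not expected to show a circle).
conv = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3)

# Collect the 3x3 spatial filters as points in R^9.
w = conv.weight.detach().reshape(-1, 9).numpy()

# Preprocess each filter: subtract its mean and scale to unit norm,
# discarding near-constant filters that carry no pattern.
w = w - w.mean(axis=1, keepdims=True)
norms = np.linalg.norm(w, axis=1)
keep = norms > 1e-8
w = w[keep] / norms[keep, None]

# Persistent homology up to dimension 1; a single prominent H1 class
# suggests the point cloud of filters lies near a circle.
dgms = ripser(w, maxdim=1)["dgms"]
h1 = dgms[1]
if len(h1):
    lifetimes = h1[:, 1] - h1[:, 0]
    print("most persistent H1 class lifetime:", lifetimes.max())
else:
    print("no 1-dimensional classes detected")
```

Replacing the random layer with the spatial filters of a trained CNN (and restricting to a dense subset of filters) is where circular structure, when present, would become visible in the H1 diagram.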

