no code implementations • 13 Sep 2021 • Alexander Bastounis, Anders C Hansen, Verner Vlačić
Our paper addresses why there has been no solution to the problem, as we prove the following mathematical paradox: any procedure for training neural networks with a fixed architecture on classification problems will yield networks that are either inaccurate or, if accurate, unstable -- despite the provable existence of both accurate and stable neural networks for the same classification problems.
no code implementations • 21 Jun 2020 • Verner Vlačić, Helmut Bölcskei
In an effort to answer the identifiability question in greater generality, we consider arbitrary nonlinearities with potentially complicated affine symmetries, and we show that the symmetries can be used to find a rich set of networks giving rise to the same function $f$.
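One such affine symmetry, described here as a minimal illustrative sketch (the example network and its parameters are hypothetical, not taken from the paper), is the oddness of tanh: since tanh(-x) = -tanh(x), negating a hidden neuron's incoming weights and bias together with its outgoing weight produces a different parameterization realizing the same function $f$.

```python
import numpy as np

# Sketch: two distinct parameterizations of a one-hidden-layer tanh network
# that realize the same function, exploiting the symmetry tanh(-x) = -tanh(x).
rng = np.random.default_rng(0)

W1 = rng.standard_normal((3, 2))   # hidden-layer weights
b1 = rng.standard_normal(3)        # hidden-layer biases
w2 = rng.standard_normal(3)        # output weights

def f(x, W1, b1, w2):
    """Network function w2 . tanh(W1 x + b1)."""
    return w2 @ np.tanh(W1 @ x + b1)

# Flip the sign of neuron 0's parameters on both sides of the nonlinearity.
W1_alt, b1_alt, w2_alt = W1.copy(), b1.copy(), w2.copy()
W1_alt[0] *= -1
b1_alt[0] *= -1
w2_alt[0] *= -1

# The two parameter sets differ, yet the realized function is identical.
x = rng.standard_normal(2)
assert not np.allclose(W1, W1_alt)
assert np.allclose(f(x, W1, b1, w2), f(x, W1_alt, b1_alt, w2_alt))
```

Composing such per-neuron sign flips with hidden-neuron permutations already yields a rich family of networks that are indistinguishable from input-output data alone, which is the obstruction the identifiability analysis must account for.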
no code implementations • 11 Jun 2019 • Verner Vlačić, Helmut Bölcskei
In an effort to answer the identifiability question in greater generality, we derive necessary genericity conditions for the identifiability of neural networks of arbitrary depth and connectivity with an arbitrary nonlinearity.