no code implementations • 29 Sep 2023 • Jianke Yang, Nima Dehmamy, Robin Walters, Rose Yu
It learns a mapping from the data space to a latent space in which the symmetries become linear, and simultaneously discovers the symmetries acting on that latent space.
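A minimal sketch of that setup, assuming a PyTorch autoencoder and a single learned Lie-algebra generator (the class, method, and variable names here are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class LatentSymmetryModel(nn.Module):
    """Sketch: nonlinear encoder/decoder with a learned Lie-algebra
    generator L acting linearly on the latent space."""
    def __init__(self, data_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
        # Learnable infinitesimal generator of the latent symmetry.
        self.L = nn.Parameter(0.01 * torch.randn(latent_dim, latent_dim))

    def transform(self, x, t):
        # Act with the one-parameter group element exp(t * L) in latent
        # space, then decode back: x -> decode(exp(t L) encode(x)).
        z = self.encoder(x)
        g = torch.matrix_exp(t * self.L)
        return self.decoder(z @ g.T)

model = LatentSymmetryModel(data_dim=8, latent_dim=3)
x = torch.randn(5, 8)
print(model.transform(x, 0.5).shape)  # torch.Size([5, 8])
```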
1 code implementation • 1 Feb 2023 • Jianke Yang, Robin Walters, Nima Dehmamy, Rose Yu
Despite the success of equivariant neural networks in scientific applications, they require knowing the symmetry group a priori.
1 code implementation • 31 Oct 2022 • Bo Zhao, Iordan Ganev, Robin Walters, Rose Yu, Nima Dehmamy
Empirical studies of the loss landscape of deep networks have revealed that many local minima are connected through low-loss valleys.
no code implementations • 26 May 2022 • Nima Dehmamy, Csaba Both, Jianzhi Long, Rose Yu
In mathematical optimization, second-order Newton's methods generally converge faster than first-order methods, but they require inverting the Hessian, which makes them computationally expensive.
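For context, a bare-bones illustration of the two update rules (generic NumPy, not the paper's method): the Newton step solves a linear system in the Hessian, O(n^3) for a dense n-by-n Hessian, while a gradient step is O(n).

```python
import numpy as np

def gradient_step(x, grad_f, lr=0.1):
    # First-order update: cheap, O(n) per step.
    return x - lr * grad_f(x)

def newton_step(x, grad_f, hess_f):
    # Second-order update: solves H dx = grad, O(n^3) with a dense Hessian.
    return x - np.linalg.solve(hess_f(x), grad_f(x))

# Example: quadratic f(x) = 0.5 x^T A x - b^T x; Newton converges in one step.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
hess = lambda x: A
x = np.zeros(2)
print(newton_step(x, grad, hess))  # equals the minimizer A^{-1} b
```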
1 code implementation • 21 May 2022 • Bo Zhao, Nima Dehmamy, Robin Walters, Rose Yu
Experimentally, we show that teleportation improves the convergence speed of gradient descent and AdaGrad for several optimization problems including test functions, multi-layer regressions, and MNIST classification.
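A toy sketch of the teleportation idea on an illustrative objective with a scaling symmetry (the objective, the group search, and the hyperparameters are my assumptions, not the paper's experiments): a teleport step moves the parameters along the symmetry group, leaving the loss unchanged while increasing the gradient norm before gradient descent resumes.

```python
import numpy as np

def loss(w):
    # Toy objective with a GL(1) symmetry: (w0, w1) -> (a*w0, w1/a)
    # leaves the product w0*w1, and hence the loss, unchanged.
    return (w[0] * w[1] - 1.0) ** 2

def grad(w):
    g = 2.0 * (w[0] * w[1] - 1.0)
    return np.array([g * w[1], g * w[0]])

def teleport(w, scales=np.linspace(0.5, 2.0, 50)):
    # Pick the group element with the largest gradient norm on the level set
    # (a crude stand-in for optimizing over the symmetry group).
    candidates = [np.array([a * w[0], w[1] / a]) for a in scales]
    return max(candidates, key=lambda c: np.linalg.norm(grad(c)))

def descend(w, steps=100, lr=0.05, teleport_every=None):
    w = w.copy()
    for t in range(steps):
        if teleport_every and t % teleport_every == 0:
            w = teleport(w)  # loss unchanged; gradient norm increased
        w = w - lr * grad(w)
    return loss(w)

w0 = np.array([0.1, 0.1])
print("plain GD:      ", descend(w0))
print("with teleport: ", descend(w0, teleport_every=10))
```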
no code implementations • 29 Sep 2021 • Nima Dehmamy, Csaba Both, Jianzhi Long, Rose Yu
We tackle the problem of accelerating certain optimization problems related to steady states of ODEs and to energy-minimization problems common in physics.
1 code implementation • NeurIPS 2021 • Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, Rose Yu
Existing equivariant neural networks require prior knowledge of the symmetry group and discretization for continuous groups.
no code implementations • 1 Jan 2021 • Nima Dehmamy, Yanchen Liu, Robin Walters, Rose Yu
We propose to learn the symmetries during the training of the group equivariant architectures.
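Read as a rough sketch, one way to implement this is to make the Lie-algebra generator a learnable parameter and penalize violations of equivariance during training (the parametrization and penalty below are illustrative assumptions, not necessarily the paper's formulation):

```python
import torch
import torch.nn as nn

class LearnedGenerator(nn.Module):
    """Learnable Lie-algebra generator; group elements via matrix exp."""
    def __init__(self, dim):
        super().__init__()
        self.L = nn.Parameter(0.01 * torch.randn(dim, dim))

    def sample(self):
        t = torch.randn(())  # random one-parameter group coordinate
        return torch.matrix_exp(t * self.L)

def equivariance_penalty(model, gen, x):
    # Encourage f(g x) ≈ g f(x) for sampled group elements g, so the
    # symmetry is discovered jointly with the network weights. Assumes
    # the model's input and output dimensions match so g acts on both.
    g = gen.sample()
    return ((model(x @ g.T) - model(x) @ g.T) ** 2).mean()

dim = 4
model = nn.Linear(dim, dim)
gen = LearnedGenerator(dim)
x = torch.randn(16, dim)
print(equivariance_penalty(model, gen, x))  # added to the task loss
```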
no code implementations • 7 Jul 2020 • Luca Stornaiuolo, Nima Dehmamy, Albert-László Barabási, Mauro Martino
Finally, we compare the results of our approach with those of a baseline algorithm that directly converts the 3D shapes without using our GAN.
no code implementations • 21 Jun 2020 • Chintan Shah, Nima Dehmamy, Nicola Perra, Matteo Chinazzi, Albert-László Barabási, Alessandro Vespignani, Rose Yu
We observe that GNNs can identify P0 with accuracy close to the theoretical bound, without explicit input of the dynamics or its parameters.
1 code implementation • NeurIPS 2019 • Nima Dehmamy, Albert-László Barabási, Rose Yu
We find that GCNs are rather restricted in their ability to learn graph moments.
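For reference, one common node-level definition of graph moments, used here illustratively (row sums of powers of the adjacency matrix; the paper's exact definition may differ):

```python
import numpy as np

def graph_moments(A, max_order=3):
    """Node-level graph moments: the p-th moment is the row sum of A^p
    (p=1 gives node degrees; higher p counts longer walks)."""
    moments, Ap = [], np.eye(len(A))
    for p in range(1, max_order + 1):
        Ap = Ap @ A
        moments.append(Ap.sum(axis=1))
    return np.stack(moments, axis=1)  # shape: (num_nodes, max_order)

# Example: path graph on 3 nodes.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(graph_moments(A))
```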
no code implementations • 14 Mar 2017 • Nima Dehmamy, Neda Rohani, Aggelos Katsaggelos
We then show that for each layer, the distribution of solutions found by SGD can be estimated using a class-based principal component analysis (PCA) of the layer's input.
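A hedged sketch of what such a class-based PCA could look like with scikit-learn (the function name and interface are my invention for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

def class_based_pca(layer_inputs, labels, n_components=10):
    """Illustrative sketch: fit a separate PCA to the layer's inputs for
    each class, approximating the per-class structure that (per the paper)
    shapes the distribution of solutions found by SGD."""
    models = {}
    for c in np.unique(labels):
        pca = PCA(n_components=n_components)
        pca.fit(layer_inputs[labels == c])
        models[c] = pca
    return models

# Toy usage: 200 samples of a 32-dimensional layer input, 2 classes.
X = np.random.randn(200, 32)
y = np.random.randint(0, 2, size=200)
models = class_based_pca(X, y, n_components=5)
print(models[0].explained_variance_ratio_)
```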