1 code implementation • 6 Feb 2024 • Yair Schiff, Zhong Yi Wan, Jeffrey B. Parker, Stephan Hoyer, Volodymyr Kuleshov, Fei Sha, Leonardo Zepeda-Núñez
Learning dynamics from dissipative chaotic systems is notoriously difficult due to their inherent instability, as formalized by their positive Lyapunov exponents, which exponentially amplify errors in the learned dynamics.
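The error amplification that a positive Lyapunov exponent implies can be seen in a few lines. This is an illustrative sketch (not from the paper), using the logistic map at $r=4$, whose largest Lyapunov exponent is known to be $\ln 2 \approx 0.693$: nearby trajectories separate roughly as $e^{0.693\,n}$ per iterate, which is exactly why pointwise-accurate learned dynamics are so hard to obtain for chaotic systems.

```python
import math

# Estimate the largest Lyapunov exponent of the logistic map
# x -> r * x * (1 - x) by averaging log|f'(x)| along a long orbit.
# At r = 4 the map is fully chaotic with exponent ln(2) ~ 0.693.
def lyapunov_logistic(r=4.0, x0=0.3, n_iter=100_000, burn_in=1_000):
    x = x0
    for _ in range(burn_in):                 # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
    return total / n_iter
```

A positive return value means infinitesimal perturbations (including errors in a learned surrogate) grow exponentially in time, so long-horizon pointwise accuracy is unattainable and statistical fidelity becomes the more meaningful target.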
no code implementations • 13 Jun 2023 • Marc Finzi, Anudhyan Boral, Andrew Gordon Wilson, Fei Sha, Leonardo Zepeda-Núñez
In this work, we develop a probabilistic approximation scheme for the conditional score function which provably converges to the true distribution as the noise level decreases.
no code implementations • 25 Jan 2023 • Zhong Yi Wan, Leonardo Zepeda-Núñez, Anudhyan Boral, Fei Sha
We present a data-driven, space-time continuous framework to learn surrogate models for complex physical systems described by advection-dominated partial differential equations.
2 code implementations • 1 Jul 2022 • Gideon Dresdner, Dmitrii Kochkov, Peter Norgaard, Leonardo Zepeda-Núñez, Jamie A. Smith, Michael P. Brenner, Stephan Hoyer
We build upon Fourier-based spectral methods, which are known to be more efficient than other numerical schemes for simulating PDEs with smooth and periodic solutions.
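The efficiency of Fourier spectral methods for smooth, periodic solutions comes from the fact that differentiation becomes multiplication in Fourier space. As a minimal sketch (our illustration, not the paper's solver), the 1-D heat equation $u_t = \nu u_{xx}$ on a periodic domain can even be integrated exactly mode by mode:

```python
import numpy as np

# Pseudo-spectral solve of u_t = nu * u_xx on [0, L) with periodic
# boundary conditions: each Fourier mode decays as exp(-nu * k^2 * t).
def heat_spectral(u0, nu, t, L=2 * np.pi):
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    u_hat = np.fft.fft(u0)
    u_hat *= np.exp(-nu * k**2 * t)             # exact per-mode integration
    return np.fft.ifft(u_hat).real

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = heat_spectral(np.sin(3 * x), nu=0.1, t=1.0)
# analytic solution: exp(-0.1 * 9 * 1.0) * sin(3x)
```

For smooth periodic data the error decays faster than any power of the grid spacing ("spectral accuracy"), which is what makes this discretization an attractive baseline to build learned solvers on.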
no code implementations • 2 Jun 2021 • Matthew Li, Laurent Demanet, Leonardo Zepeda-Núñez
We propose an end-to-end deep learning framework that comprehensively solves the inverse wave scattering problem across all length scales.
no code implementations • 24 Nov 2020 • Matthew Li, Laurent Demanet, Leonardo Zepeda-Núñez
We introduce an end-to-end deep learning architecture called the wide-band butterfly network (WideBNet) for approximating the inverse scattering map from wide-band scattering data.
1 code implementation • 11 Oct 2020 • Yifan Peng, Lin Lin, Lexing Ying, Leonardo Zepeda-Núñez
We showcase this framework by introducing a neural network architecture that combines LRC-layers with short-range convolutional layers to accurately learn the energy and force associated with an $N$-body potential.
no code implementations • 24 Feb 2020 • Jiefu Zhang, Leonardo Zepeda-Núñez, Yuan YAO, Lin Lin
When such structural information is not available and only a dense neural network can be used, the optimization procedure for finding the sparse network embedded in the dense one, given a fixed number of samples of the function, is akin to finding a needle in a haystack.
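One common heuristic for exposing a sparse subnetwork inside a trained dense network is magnitude pruning: keep only the largest-magnitude weights and zero out the rest. The sketch below is our own illustration of that generic idea, not the method of the paper above:

```python
import numpy as np

# Magnitude pruning: zero out the smallest-magnitude entries of a dense
# weight matrix, keeping only the top (1 - sparsity) fraction.
def magnitude_prune(weights, sparsity=0.9):
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]   # k-th smallest magnitude
    mask = np.abs(weights) >= threshold    # True where the weight survives
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_sparse, mask = magnitude_prune(w, sparsity=0.9)  # ~10% of weights survive
```

The hard part, which motivates exploiting structural information when it exists, is that the surviving support must be discovered from a limited number of function samples rather than read off from a known sparsity pattern.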
no code implementations • 27 Nov 2019 • Leonardo Zepeda-Núñez, Yixiao Chen, Jiefu Zhang, Weile Jia, Linfeng Zhang, Lin Lin
By directly targeting the self-consistent electron density, we demonstrate that the adapted network architecture, called the Deep Density, can effectively represent the electron density as a linear combination of contributions from many local clusters.