no code implementations • 23 Apr 2024 • Stephen Whitelam
We show that the dynamics of an underdamped harmonic oscillator can perform multifunctional computation, solving distinct problems at distinct times within a dynamical trajectory.
no code implementations • 17 May 2023 • Stephen Whitelam
Time-dependent protocols that perform irreversible logical operations, such as memory erasure, cost work and produce heat, placing bounds on the efficiency of computers.
1 code implementation • 20 Nov 2022 • Stephen Whitelam
We use Monte Carlo and genetic algorithms to train neural-network feedback-control protocols for simulated fluctuating nanosystems.
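As a rough illustration of the genetic-algorithm side of this approach, the sketch below evolves the weights of a small feedback network that tries to hold a toy overdamped Brownian particle near the origin. The toy system, network size, and GA hyperparameters are placeholder assumptions, not the paper's simulated nanosystems.

```python
# Generic genetic-algorithm sketch for a neural-network feedback protocol.
# The "nanosystem" here is a toy overdamped Brownian particle that the
# controller tries to hold near the origin; system, network size, and GA
# hyperparameters are illustrative placeholders, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
N_HIDDEN, POP, GENERATIONS, SIGMA = 8, 24, 20, 0.05

def unpack(w):
    """Split a flat weight vector into a one-hidden-layer network."""
    W1 = w[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = w[N_HIDDEN:2 * N_HIDDEN]
    W2 = w[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    return W1, b1, W2

def control(w, x):
    """Feedback force produced by the network for observed position x."""
    W1, b1, W2 = unpack(w)
    return (np.tanh(np.array([[x]]) @ W1 + b1) @ W2).item()

def fitness(w, steps=300, dt=0.01, noise=1.0):
    """Negative mean squared displacement of the controlled particle."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        x += control(w, x) * dt + np.sqrt(2 * noise * dt) * rng.normal()
        cost += x * x
    return -cost / steps

n_params = 3 * N_HIDDEN
population = rng.normal(0, 0.5, (POP, n_params))

for gen in range(GENERATIONS):
    scores = np.array([fitness(w) for w in population])
    elite = population[np.argsort(scores)[-POP // 4:]]    # keep the best quarter
    population = elite[rng.integers(len(elite), size=POP)] \
        + SIGMA * rng.normal(size=(POP, n_params))         # mutate copies of elites

print("best fitness in last generation:", scores.max())
```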
1 code implementation • 16 May 2022 • Stephen Whitelam, Viktor Selin, Ian Benlolo, Corneel Casert, Isaac Tamblyn
We examine the zero-temperature Metropolis Monte Carlo algorithm as a tool for training a neural network by minimizing a loss function.
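A minimal sketch of the zero-temperature Metropolis rule applied to network training, on a toy regression task: propose a Gaussian perturbation of all weights and accept it only if the loss does not increase. The data, architecture, and mutation scale are illustrative assumptions, not the paper's settings.

```python
# Zero-temperature Metropolis Monte Carlo training: mutate all weights with
# Gaussian noise and accept the mutation only if the loss does not increase.
# Toy data, network size, and mutation scale sigma are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(x)

def init_params(n_hidden=16):
    return {
        "W1": rng.normal(0, 1, (1, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 1, (n_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

def loss(p):
    return np.mean((forward(p, x) - y) ** 2)

params = init_params()
best = loss(params)
sigma = 0.02  # mutation scale

for step in range(10_000):
    trial = {k: v + sigma * rng.normal(size=v.shape) for k, v in params.items()}
    trial_loss = loss(trial)
    if trial_loss <= best:          # zero-temperature acceptance rule
        params, best = trial, trial_loss

print(f"final loss: {best:.4f}")
```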
no code implementations • 10 Mar 2022 • Stephen Whitelam, Isaac Tamblyn
We show that cellular automata can classify data by inducing a form of dynamical phase coexistence.
1 code implementation • 17 Feb 2022 • Corneel Casert, Isaac Tamblyn, Stephen Whitelam
We show that a neural network originally designed for language processing can learn the dynamical rules of a stochastic system by observing a single dynamical trajectory, and can accurately predict the system's emergent behavior under conditions not seen during training.
no code implementations • 22 Dec 2020 • Stephen Whitelam, Isaac Tamblyn
Within simulations of molecules deposited on a surface, we show that neuroevolutionary learning can design particles and time-dependent protocols to promote self-assembly, without input from physical concepts such as thermal equilibrium or mechanical stability and without prior knowledge of candidate or competing structures.
no code implementations • 16 Dec 2020 • Stephen Whitelam, Daniel Jacobson
Singularities of dynamical large-deviation functions are often interpreted as the signal of a dynamical phase transition and the coexistence of distinct dynamical phases, by analogy with the correspondence between singularities of free energies and equilibrium phase behavior.
no code implementations • 17 Nov 2020 • Corneel Casert, Tom Vieijra, Stephen Whitelam, Isaac Tamblyn
We use a neural network ansatz originally designed for the variational optimization of quantum systems to study dynamical large deviations in classical ones.
no code implementations • 15 Aug 2020 • Stephen Whitelam, Viktor Selin, Sang-Won Park, Isaac Tamblyn
We show analytically that training a neural network by conditioned stochastic mutation or neuroevolution of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise.
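The numerical sketch below illustrates the flavor of this correspondence on a simple quadratic loss (an assumption of this example, not the paper's setting): with a small mutation scale, the average displacement over accepted Gaussian mutations points along the negative gradient.

```python
# Numerical illustration of the small-mutation limit: for a quadratic loss,
# the mean displacement over *accepted* Gaussian mutations (accepted only if
# the loss does not increase) points along the negative gradient. The loss
# and mutation scale are assumptions of this sketch, not the paper's setup.
import numpy as np

rng = np.random.default_rng(1)

theta = np.array([1.0, -2.0])          # current weights
grad = theta                           # gradient of L(theta) = 0.5 * |theta|^2
sigma = 1e-3                           # small mutation scale

eps = sigma * rng.normal(size=(200_000, theta.size))   # proposed mutations
new_loss = 0.5 * np.sum((theta + eps) ** 2, axis=1)
old_loss = 0.5 * np.sum(theta ** 2)
accepted = eps[new_loss <= old_loss]                    # conditioned mutation

mean_step = accepted.mean(axis=0)
cosine = mean_step @ (-grad) / (np.linalg.norm(mean_step) * np.linalg.norm(grad))
print("mean accepted step:", mean_step)
print("negative gradient :", -grad)
print("cosine similarity :", cosine)   # close to 1 for small sigma
```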
no code implementations • 18 Dec 2019 • Stephen Whitelam, Isaac Tamblyn
We show that neural networks trained by evolutionary reinforcement learning can enact efficient molecular self-assembly protocols.
no code implementations • 2 Sep 2019 • Stephen Whitelam, Daniel Jacobson, Isaac Tamblyn
We show how to calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning.
1 code implementation • 20 Mar 2019 • Chris Beeler, Uladzimir Yahorau, Rory Coles, Kyle Mills, Stephen Whitelam, Isaac Tamblyn
Gradient-based reinforcement learning is able to learn the Stirling cycle, whereas an evolutionary approach achieves the optimal Carnot cycle.
no code implementations • 7 Sep 2018 • Stephen Whitelam
We use the MNIST and Fashion-MNIST data sets to show that a principled coarse-graining algorithm can convert training images into a smaller set of image centroids without loss of accuracy when test-set images are classified by nearest-neighbor classification.
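The sketch below illustrates the general pipeline with two loudly labeled substitutions: scikit-learn's small digits set stands in for MNIST, and per-class k-means stands in for the paper's coarse-graining algorithm. Test images are then classified by their nearest centroid.

```python
# Illustration of nearest-neighbor classification against a coarse-grained
# training set. Per-class k-means is a simple stand-in for the paper's
# coarse-graining algorithm, and scikit-learn's small digits set replaces
# MNIST; both substitutions are assumptions of this sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Coarse-grain: replace each class's training images by a few centroids.
centroids, centroid_labels = [], []
for label in np.unique(y_train):
    km = KMeans(n_clusters=10, n_init=10, random_state=0)
    km.fit(X_train[y_train == label])
    centroids.append(km.cluster_centers_)
    centroid_labels.append(np.full(10, label))
centroids = np.vstack(centroids)
centroid_labels = np.concatenate(centroid_labels)

# Nearest-neighbor classification of test images against the centroids.
knn = KNeighborsClassifier(n_neighbors=1).fit(centroids, centroid_labels)
print("training images before coarse-graining:", len(X_train))
print("centroids after coarse-graining       :", len(centroids))
print("test accuracy                         : %.3f" % knn.score(X_test, y_test))
```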