Search Results for author: Stephen Whitelam

Found 14 papers, 4 papers with code

Oscillatrons: neural units with time-dependent multifunctionality

no code implementations • 23 Apr 2024 • Stephen Whitelam

We show that the dynamics of an underdamped harmonic oscillator can perform multifunctional computation, solving distinct problems at distinct times within a dynamical trajectory.
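
As a rough illustration of the kind of dynamics involved, here is a minimal sketch of integrating a driven, underdamped harmonic oscillator; the frequency, damping, and drive are assumed toy values, and the multifunctional readout described in the paper is not reproduced here.

```python
import numpy as np

# Minimal sketch: driven, underdamped harmonic oscillator
#   x'' = -omega^2 * x - gamma * x' + f(t)
# Parameters below are assumed toy values, not those of the paper.
omega, gamma, dt, n_steps = 1.0, 0.1, 0.01, 10_000
drive = lambda t: 0.5 * np.sin(0.3 * t)       # assumed external drive

x, v = 1.0, 0.0
trajectory = []
for i in range(n_steps):
    t = i * dt
    a = -omega**2 * x - gamma * v + drive(t)  # acceleration
    v += a * dt                               # simple explicit Euler step
    x += v * dt
    trajectory.append(x)
```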

How to train your demon to do fast information erasure without heat production

no code implementations • 17 May 2023 • Stephen Whitelam

Time-dependent protocols that perform irreversible logical operations, such as memory erasure, cost work and produce heat, placing bounds on the efficiency of computers.
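
The relevant bound here is Landauer's principle: erasing one bit of information at temperature T costs at least k_B T ln 2 of work, dissipated as heat. A minimal sketch of that number (room temperature assumed for illustration):

```python
import math

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                  # assumed temperature, K
w_min = kB * T * math.log(2)
print(f"Landauer bound per bit at {T} K: {w_min:.3e} J")  # ~2.87e-21 J
```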

Demon in the machine: learning to extract work and absorb entropy from fluctuating nanosystems

1 code implementation • 20 Nov 2022 • Stephen Whitelam

We use Monte Carlo and genetic algorithms to train neural-network feedback-control protocols for simulated fluctuating nanosystems.
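
A hedged sketch of the genetic-algorithm side of such training: mutate a population of weight vectors, keep the fittest, repeat. The fitness function below is a placeholder; in the paper it would be a quantity such as work extracted from the simulated nanosystem.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights):
    # Placeholder objective (assumed); the paper scores a neural-network
    # feedback-control protocol on a simulated fluctuating nanosystem.
    return -np.sum(weights**2)

# Population size, mutation scale, and generation count are assumptions.
population = [rng.normal(size=10) for _ in range(20)]
for generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]                                  # keep the fittest half
    children = [p + 0.05 * rng.normal(size=p.shape) for p in parents]
    population = parents + children

best = max(population, key=fitness)
```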

Training neural networks using Metropolis Monte Carlo and an adaptive variant

1 code implementation • 16 May 2022 • Stephen Whitelam, Viktor Selin, Ian Benlolo, Corneel Casert, Isaac Tamblyn

We examine the zero-temperature Metropolis Monte Carlo algorithm as a tool for training a neural network by minimizing a loss function.
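
The zero-temperature Metropolis rule is simple: propose a random perturbation of the weights and accept it only if the loss does not increase. A minimal sketch on an assumed toy loss:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(w):
    # Assumed toy loss; the paper minimizes the loss of an actual neural network.
    return np.sum((w - 1.0)**2)

w = rng.normal(size=50)
sigma = 0.01                              # mutation scale (assumed)
for step in range(20_000):
    trial = w + sigma * rng.normal(size=w.shape)
    if loss(trial) <= loss(w):            # zero temperature: accept only non-increasing moves
        w = trial
```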

Cellular automata can classify data by inducing trajectory phase coexistence

no code implementations • 10 Mar 2022 • Stephen Whitelam, Isaac Tamblyn

We show that cellular automata can classify data by inducing a form of dynamical phase coexistence.
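
For reference, a minimal sketch of iterating an elementary (radius-1, binary) cellular automaton; the construction that turns such dynamics into a classifier via trajectory phase coexistence is the paper's contribution and is not reproduced here. The rule number and lattice size are assumptions.

```python
import numpy as np

def step(state, rule=110):
    # One synchronous update of an elementary binary cellular automaton.
    # Each 3-cell neighborhood (value 0-7) is mapped to a new cell value
    # by the corresponding bit of the rule number (Wolfram convention).
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighborhood = 4 * left + 2 * state + right
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return table[neighborhood]

state = np.random.default_rng(2).integers(0, 2, size=64, dtype=np.uint8)
for _ in range(100):
    state = step(state)
```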

Learning stochastic dynamics and predicting emergent behavior using transformers

1 code implementation • 17 Feb 2022 • Corneel Casert, Isaac Tamblyn, Stephen Whitelam

We show that a neural network originally designed for language processing can learn the dynamical rules of a stochastic system by observation of a single dynamical trajectory of the system, and can accurately predict its emergent behavior under conditions not observed during training.

Neuroevolutionary learning of particles and protocols for self-assembly

no code implementations • 22 Dec 2020 • Stephen Whitelam, Isaac Tamblyn

Within simulations of molecules deposited on a surface we show that neuroevolutionary learning can design particles and time-dependent protocols to promote self-assembly, without input from physical concepts such as thermal equilibrium or mechanical stability and without prior knowledge of candidate or competing structures.

Varied phenomenology of models displaying dynamical large-deviation singularities

no code implementations • 16 Dec 2020 • Stephen Whitelam, Daniel Jacobson

Singularities of dynamical large-deviation functions are often interpreted as the signal of a dynamical phase transition and the coexistence of distinct dynamical phases, by analogy with the correspondence between singularities of free energies and equilibrium phase behavior.

Statistical Mechanics
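
For context, the standard large-deviation formalism implied here, in generic notation not taken from the paper: for a time-integrated observable A_t, the scaled cumulant generating function plays the role of a dynamical free energy, and its non-analyticities are read, by analogy, as dynamical phase transitions.

```latex
% Scaled cumulant generating function (dynamical free energy) for a
% time-integrated observable A_t, with counting field s:
\theta(s) \equiv \lim_{t \to \infty} \frac{1}{t} \ln \left\langle e^{-s A_t} \right\rangle
% Legendre duality with the rate function I(a) governing P(A_t/t \approx a) \sim e^{-t I(a)}:
I(a) = \sup_{s} \left[ -s a - \theta(s) \right]
% Non-analytic points of \theta(s) are interpreted, by analogy with singular
% free energies at equilibrium, as dynamical phase transitions.
```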

Dynamical large deviations of two-dimensional kinetically constrained models using a neural-network state ansatz

no code implementations • 17 Nov 2020 • Corneel Casert, Tom Vieijra, Stephen Whitelam, Isaac Tamblyn

We use a neural network ansatz originally designed for the variational optimization of quantum systems to study dynamical large deviations in classical ones.

Correspondence between neuroevolution and gradient descent

no code implementations • 15 Aug 2020 • Stephen Whitelam, Viktor Selin, Sang-Won Park, Isaac Tamblyn

We show analytically that training a neural network by conditioned stochastic mutation or neuroevolution of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise.
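
A hedged numerical check of the flavor of this result (not the paper's derivation): on an assumed toy quadratic loss, the average of small mutations accepted only when they decrease the loss aligns with the negative gradient.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(w):
    return 0.5 * np.sum(w**2)            # assumed toy loss; its gradient is w itself

w = rng.normal(size=5)
sigma = 1e-3                             # small-mutation limit
accepted = []
for _ in range(200_000):
    eps = sigma * rng.normal(size=w.shape)
    if loss(w + eps) < loss(w):          # conditioned mutation: keep only improving moves
        accepted.append(eps)

mean_update = np.mean(accepted, axis=0)
grad = w
cosine = mean_update @ (-grad) / (np.linalg.norm(mean_update) * np.linalg.norm(grad))
print(f"cosine similarity between mean accepted mutation and -grad: {cosine:.3f}")
```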

Learning to grow: control of material self-assembly using evolutionary reinforcement learning

no code implementations • 18 Dec 2019 • Stephen Whitelam, Isaac Tamblyn

We show that neural networks trained by evolutionary reinforcement learning can enact efficient molecular self-assembly protocols.

Reinforcement Learning (RL)

Evolutionary reinforcement learning of dynamical large deviations

no code implementations • 2 Sep 2019 • Stephen Whitelam, Daniel Jacobson, Isaac Tamblyn

We show how to calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning.

Reinforcement Learning (RL)

Improving the accuracy of nearest-neighbor classification using principled construction and stochastic sampling of training-set centroids

no code implementations • 7 Sep 2018 • Stephen Whitelam

We use the MNIST and Fashion-MNIST data sets to show that a principled coarse-graining algorithm can convert training images into fewer image centroids without loss of accuracy of classification of test-set images by nearest-neighbor classification.

Classification • General Classification
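
As a stand-in for the paper's principled coarse-graining, here is a minimal sketch of the general idea: compress each class of training images into a few centroids (k-means is used below as an assumed substitute for the paper's construction) and classify test images by nearest centroid. The digits dataset and the number of centroids per class are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits              # small stand-in for MNIST
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Coarse-grain: a few k-means centroids per class (an assumed substitute for
# the paper's principled construction and stochastic sampling of centroids).
centroids, labels = [], []
for c in np.unique(y_train):
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_train[y_train == c])
    centroids.append(km.cluster_centers_)
    labels.extend([c] * 5)
centroids, labels = np.vstack(centroids), np.array(labels)

# Classify each test image by its nearest centroid.
dists = ((X_test[:, None, :] - centroids[None, :, :])**2).sum(-1)
accuracy = (labels[dists.argmin(1)] == y_test).mean()
print(f"nearest-centroid accuracy: {accuracy:.3f}")
```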
