Search Results for author: Alexander Atanasov

Found 4 papers, 1 paper with code

A Dynamical Model of Neural Scaling Laws

no code implementations • 2 Feb 2024 • Blake Bordelon, Alexander Atanasov, Cengiz Pehlevan

On a variety of tasks, the performance of neural networks predictably improves with training time, dataset size and model size across many orders of magnitude.
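As a loose illustration of the kind of power-law fit such scaling studies rest on (not the paper's model or data), the sketch below generates synthetic "loss vs. model size" points from L(N) = a·N^(-alpha) + c and recovers the exponent by least squares; all constants and variable names are assumptions for the example.

```python
# Illustrative only: fit a power-law scaling curve L(N) = a * N**(-alpha) + c
# to synthetic "loss vs. model size" data. Constants are made up for the sketch.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(N, a, alpha, c):
    return a * N ** (-alpha) + c

N = np.logspace(3, 8, 20)                        # model sizes over several orders of magnitude
loss = scaling_law(N, a=5.0, alpha=0.3, c=0.1)   # synthetic observations
loss += 0.01 * np.random.default_rng(0).normal(size=N.shape)

(a_hat, alpha_hat, c_hat), _ = curve_fit(scaling_law, N, loss, p0=[1.0, 0.5, 0.0])
print(f"fitted exponent alpha ~ {alpha_hat:.2f}")  # should recover roughly 0.3
```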

The Onset of Variance-Limited Behavior for Networks in the Lazy and Rich Regimes

1 code implementation • 23 Dec 2022 • Alexander Atanasov, Blake Bordelon, Sabarish Sainathan, Cengiz Pehlevan

For small training set sizes $P$, the generalization error of wide neural networks is well-approximated by the error of an infinite width neural network (NN), either in the kernel or mean-field/feature-learning regime.

Task: regression
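A hedged sketch of the kernel-regime approximation mentioned in the abstract (not the paper's setup): kernel ridge regression stands in for an infinite-width network, and its test error is measured as the training set size P grows; the RBF kernel, target function, and all parameters below are illustrative assumptions.

```python
# Illustrative sketch: generalization error of kernel ridge regression
# (a proxy for an infinite-width network) as the training set size P grows.
# The RBF kernel and target function are stand-ins, not the paper's setup.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
target = lambda x: np.sin(3 * x[:, 0])           # assumed ground-truth function

def test_error(P, d=5, n_test=2000):
    X_train, X_test = rng.normal(size=(P, d)), rng.normal(size=(n_test, d))
    model = KernelRidge(kernel="rbf", alpha=1e-6).fit(X_train, target(X_train))
    return np.mean((model.predict(X_test) - target(X_test)) ** 2)

for P in [32, 128, 512, 2048]:
    print(f"P = {P:5d}  test MSE ~ {test_error(P):.4f}")
```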

Neural Networks as Kernel Learners: The Silent Alignment Effect

no code implementations • ICLR 2022 • Alexander Atanasov, Blake Bordelon, Cengiz Pehlevan

Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel?
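One way to make that question concrete (a minimal sketch, not the paper's code) is the standard kernel-target alignment between a Gram matrix K and labels y; here a ReLU random-feature kernel is used purely as a placeholder for a network's data-dependent kernel, and all names below are hypothetical.

```python
# Sketch of kernel-target alignment A(K, y y^T), a standard metric for asking
# how well a (possibly learned) kernel matches the task. The random-feature
# kernel below is only a placeholder for a network's empirical kernel.
import numpy as np

def kernel_target_alignment(K, y):
    """Cosine similarity between K and the rank-one target kernel y y^T."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = np.sign(X[:, 0])                             # assumed binary labels
W = rng.normal(size=(10, 500))
phi = np.maximum(X @ W, 0.0)                     # ReLU random features (placeholder kernel)
K = phi @ phi.T

print(f"alignment ~ {kernel_target_alignment(K, y):.3f}")
```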
