1 code implementation • NeurIPS 2023 • Benjamin Scellier, Maxence Ernoult, Jack Kendall, Suhas Kumar
Additionally, we establish new state-of-the-art (SOTA) results with DCHNs on all five datasets, in both performance and speed.
no code implementations • 9 Mar 2021 • Jack Kendall
We propose a method that extends equilibrium propagation, a technique for estimating gradients in fixed-point neural networks, to the more general setting of directed, time-varying neural networks by modeling them as electrical circuits.
no code implementations • 2 Jun 2020 • Jack Kendall, Ross Pantone, Kalpana Manickavasagam, Yoshua Bengio, Benjamin Scellier
We introduce a principled method to train end-to-end analog neural networks by stochastic gradient descent.