no code implementations • 16 Aug 2020 • Brian Gardner, André Grüning
The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, yet remains stable by relying on the first-spike responses generated by a deterministic output layer.
no code implementations • 27 Jul 2020 • Andrew Stephan, Brian Gardner, Steven J. Koester, André Grüning
In this work we propose a new supervised learning method for temporally encoded multilayer spiking networks to perform classification.
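The papers above rely on temporal coding: input intensities are mapped to spike times and a class is read out from which output neuron fires first. This is not the papers' actual learning method, only a minimal sketch of that encoding/readout idea; the function names, the linear latency code, and the example spike times are all illustrative assumptions.

```python
import numpy as np

def latency_encode(x, t_max=10.0):
    """Illustrative temporal code: map intensities in [0, 1] to spike
    times, with stronger inputs firing earlier."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x)

def first_spike_class(output_spike_times):
    """Classify by whichever output neuron fires first."""
    return int(np.argmin(output_spike_times))

# Hypothetical first-spike times (ms) for three output neurons
print(first_spike_class([4.2, 1.7, 6.9]))  # neuron 1 fires earliest
```

A first-spike readout like this is what makes the decision latency of such classifiers short: the network can answer as soon as any output neuron spikes, without waiting for the full simulation window.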
no code implementations • 2 Oct 2019 • Hyeryung Jang, Osvaldo Simeone, Brian Gardner, André Grüning
The sparsity of the synaptic spiking inputs and the corresponding event-driven nature of neural processing can be leveraged by energy-efficient hardware implementations, which can offer significant energy reductions as compared to conventional artificial neural networks (ANNs).
no code implementations • 10 Dec 2018 • Hyeryung Jang, Osvaldo Simeone, Brian Gardner, André Grüning
This paper aims at providing an introduction to SNNs by focusing on a probabilistic signal processing methodology that enables the direct derivation of learning rules leveraging the unique time encoding capabilities of SNNs.
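A common probabilistic SNN model of the kind this methodology builds on is a discrete-time GLM-style neuron: the membrane potential is a kernel-filtered sum of past input spikes, and the neuron fires as a Bernoulli draw with probability given by a sigmoid of that potential. The sketch below illustrates that generic model only, not the paper's specific derivation; the kernel shape, weights, and bias are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def run_glm_neuron(in_spikes, w, alpha, bias=-2.0):
    """Simulate one probabilistic (GLM-style) spiking neuron:
    at each step, spike with probability sigmoid(filtered input)."""
    n_in, T = in_spikes.shape
    out = np.zeros(T, dtype=int)
    for t in range(T):
        u = bias
        for tau in range(min(t, len(alpha))):
            # alpha[tau] weights input spikes from tau+1 steps ago
            u += alpha[tau] * (w @ in_spikes[:, t - 1 - tau])
        out[t] = int(rng.random() < sigmoid(u))
    return out

# Hypothetical usage: 3 Poisson-like input trains over 200 steps
x = (rng.random((3, 200)) < 0.2).astype(int)
w = np.array([1.0, 0.5, -0.3])
alpha = np.exp(-np.arange(10) / 5.0)  # decaying synaptic kernel
spikes = run_glm_neuron(x, w, alpha)
```

Because firing is stochastic with a differentiable probability, log-likelihood gradients of the observed spike trains can be computed directly, which is what makes learning rules derivable in this probabilistic framework.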
no code implementations • 14 Jan 2016 • Brian Gardner, André Grüning
We also find FILT to be the most efficient at memorising input patterns, most notably when patterns are identified using spikes with sub-millisecond temporal precision.
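FILT-style rules derive their error signal by comparing exponentially filtered versions of the target and actual output spike trains, in the spirit of the van Rossum (2001) spike-train distance. The sketch below shows that comparison only; the grid resolution, time constant, and normalisation are assumptions, not the paper's exact formulation.

```python
import numpy as np

def filtered_trace(spike_times, t_grid, tau=5.0):
    """Convolve a spike train with a causal exponential kernel."""
    trace = np.zeros_like(t_grid)
    for s in spike_times:
        mask = t_grid >= s
        trace[mask] += np.exp(-(t_grid[mask] - s) / tau)
    return trace

def van_rossum_distance(train_a, train_b, t_max=100.0, dt=0.1, tau=5.0):
    """Squared error between exponentially filtered spike trains."""
    t = np.arange(0.0, t_max, dt)
    diff = filtered_trace(train_a, t, tau) - filtered_trace(train_b, t, tau)
    return float(np.sum(diff ** 2) * dt / tau)

# Identical trains have zero distance; a shifted spike incurs a cost
print(van_rossum_distance([10.0, 30.0], [10.0, 30.0]))  # 0.0
```

Filtering smooths the discrete spike events into continuous traces, so small timing errors produce small, graded error signals rather than all-or-nothing mismatches, which is what allows sub-millisecond spike-timing targets to be learned.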
no code implementations • 31 Mar 2015 • Brian Gardner, Ioana Sporea, André Grüning
Information encoding in the nervous system is supported by the precise spike timings of neurons; however, the underlying processes by which such representations are formed in the first place remain poorly understood.