A Temporal Neural Network Architecture for Online Learning

27 Nov 2020 · James E. Smith

A long-standing proposition is that by emulating the operation of the brain's neocortex, a spiking neural network (SNN) can achieve similar desirable features: flexible learning, speed, and efficiency. Temporal neural networks (TNNs) are SNNs that communicate and process information encoded as relative spike times (in contrast to spike rates). A TNN architecture is proposed, and, as a proof of concept, TNN operation is demonstrated within the larger context of online supervised classification. First, through unsupervised learning, a TNN partitions input patterns into clusters based on similarity. The TNN then passes a cluster identifier to a simple online supervised decoder, which completes the classification task. The TNN learning process adjusts synaptic weights using only signals local to each synapse, and clustering behavior emerges globally. The system architecture is described at an abstraction level analogous to the gate and register transfer levels in conventional digital design. Besides features of the overall architecture, several TNN components are new to this work. Although not addressed directly here, the overall research objective is a direct hardware implementation of TNNs. Consequently, all the architecture elements are simple, and processing is done at very low precision.
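To make the two-stage pipeline in the abstract concrete, below is a minimal Python sketch of that flow: intensity-to-latency spike encoding, an unsupervised winner-take-all (WTA) clustering layer trained with a local STDP-like rule, and a trivial supervised decoder that maps cluster identifiers to labels. Every specific choice here (the ramp-style integration, 1-WTA output, the exact update rule, the 3-bit weight range, and names such as `TemporalColumn` and `stdp_update`) is an illustrative assumption, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T_MAX = 8  # discrete spike times 0..T_MAX-1; earlier spike = stronger input

def encode(x):
    """Intensity-to-latency encoding: values in [0, 1] -> spike times."""
    return np.round((1.0 - x) * (T_MAX - 1)).astype(int)

class TemporalColumn:
    """One excitatory column of n_neurons, each with n_inputs synapses (assumed structure)."""

    def __init__(self, n_inputs, n_neurons, w_max=7):
        # Low-precision weights (here a 3-bit range), in the spirit of the hardware goal.
        self.w = rng.integers(0, w_max + 1, (n_neurons, n_inputs)).astype(float)
        self.w_max = w_max
        self.theta = n_inputs * w_max / 4.0  # firing threshold (assumed value)

    def forward(self, t_in):
        """Integrate step responses over time; return (winner, spike time) via 1-WTA."""
        winner, t_win = None, None
        for i in range(self.w.shape[0]):
            pot = 0.0
            for t in range(T_MAX):
                pot += self.w[i, t_in == t].sum()  # add weights of inputs arriving at t
                if pot >= self.theta:              # first threshold crossing fires
                    if t_win is None or t < t_win:
                        winner, t_win = i, t       # earliest output spike wins
                    break
        return winner, t_win

    def stdp_update(self, t_in, winner, t_out, mu=0.5):
        """Local rule: only the pre/post spike times at each synapse are used.
        Causal inputs (pre at or before post) are potentiated; the rest depressed."""
        if winner is None:
            return
        causal = t_in <= t_out
        self.w[winner, causal] = np.minimum(self.w[winner, causal] + mu, self.w_max)
        self.w[winner, ~causal] = np.maximum(self.w[winner, ~causal] - mu, 0.0)

# --- Online demo on a toy stream: two noisy 16-pixel prototypes, labels 0 and 1 ---
protos = rng.random((2, 16))
col = TemporalColumn(n_inputs=16, n_neurons=4)
votes = np.zeros((4, 2))       # decoder state: per-cluster label counts
correct = 0

for step in range(500):
    label = int(rng.integers(2))
    x = np.clip(protos[label] + 0.1 * rng.standard_normal(16), 0.0, 1.0)
    t_in = encode(x)
    winner, t_out = col.forward(t_in)
    col.stdp_update(t_in, winner, t_out)           # unsupervised, purely local
    if winner is not None:
        if votes[winner].argmax() == label:        # predict before the decoder update
            correct += 1
        votes[winner, label] += 1                  # supervised decoder: majority vote

print(f"online accuracy over the stream: {correct / 500:.2f}")
```

The decoder is deliberately trivial, a per-cluster vote table, to mirror the division of labor the abstract describes: clustering is unsupervised and driven only by synapse-local signals, and supervision touches nothing but the final mapping from cluster identifier to class label.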
