no code implementations • 17 Jun 2022 • Narsimha Chilkuri, Chris Eliasmith
In this report we consider the following problem: given a trained model that is partially faulty, can we correct its behaviour without having to retrain the model from scratch?
no code implementations • 5 Oct 2021 • Narsimha Chilkuri, Eric Hunsberger, Aaron Voelker, Gurshaant Malik, Chris Eliasmith
Across models spanning three orders of magnitude in size, we show that our new architecture attains the same accuracy as transformers with 10x fewer tokens.
2 code implementations • 22 Feb 2021 • Narsimha Chilkuri, Chris Eliasmith
For instance, our LMU sets a new state-of-the-art result on psMNIST and, with half the parameters, outperforms DistilBERT and LSTM models on IMDB sentiment analysis.
Ranked #6 on Sequential Image Classification (Sequential MNIST)
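The parallelization result above rests on a property of the LMU worth sketching: its linear memory update m_t = A·m_{t-1} + B·u_t uses fixed (non-learned, time-invariant) state matrices, so the recurrence can be unrolled into m_t = Σ_{j≤t} A^(t-j)·B·u_j and evaluated as a convolution over the input, with no sequential dependence. The snippet below is a minimal NumPy sketch of that equivalence only; the matrices, dimensions, and variable names are illustrative assumptions, not the paper's actual A and B (which are derived from Legendre polynomials).

```python
import numpy as np

# Minimal sketch (assumed toy setup): the LMU memory update
#   m_t = A @ m_{t-1} + B * u_t
# has FIXED A, B, so it equals a convolution of the input with the
# impulse response H[j] = A^j @ B and needs no sequential loop.

d, T = 4, 10                      # memory dimension, sequence length (illustrative)
rng = np.random.default_rng(0)

# Toy stand-ins for the LMU's fixed state matrices.
A = 0.3 * rng.normal(size=(d, d))
B = rng.normal(size=(d, 1))
u = rng.normal(size=T)            # scalar input sequence

# 1) Sequential recurrence (how an RNN would compute it).
m = np.zeros((d, 1))
seq_states = []
for t in range(T):
    m = A @ m + B * u[t]
    seq_states.append(m.copy())

# 2) Parallel form: precompute H[j] = A^j @ B once, then every m_t is
#    just a weighted sum of past inputs (computable for all t at once).
H = [B]
for _ in range(T - 1):
    H.append(A @ H[-1])
par_states = [sum(H[t - j] * u[j] for j in range(t + 1)) for t in range(T)]

# Both routes yield identical memory states.
assert all(np.allclose(s, p) for s, p in zip(seq_states, par_states))
```

Because step (2) has no dependence between time steps, the whole sequence can be processed in one feedforward pass during training, which is the mechanism the paper exploits.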