Search Results for author: Animesh Koratana

Found 2 papers, 1 paper with code

LIT: Learned Intermediate Representation Training for Model Compression

1 code implementation • 4 Sep 2019 • Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia

In this work, we introduce Learned Intermediate Representation Training (LIT), a novel model compression technique that outperforms a range of recent model compression techniques by leveraging the highly repetitive structure of modern DNNs (e.g., ResNet).

Image Classification • Model Compression • +2
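The abstract above describes training a student on intermediate representations of a teacher's repeated blocks. The snippet below is only a minimal sketch of what such a block-wise intermediate representation loss could look like; the function name, block splitting, and use of an MSE penalty are illustrative assumptions, not the paper's official formulation or code.

```python
import torch
import torch.nn.functional as F

def block_ir_loss(teacher_blocks, student_blocks, x):
    """Sketch of a block-wise intermediate representation loss.

    Each student block is trained to reproduce the output of the
    corresponding teacher block, given the teacher's input to that
    block (an illustration of the idea, not the authors' code).
    """
    loss = 0.0
    h = x
    for t_block, s_block in zip(teacher_blocks, student_blocks):
        with torch.no_grad():
            t_out = t_block(h)      # teacher's intermediate representation
        s_out = s_block(h)          # student block sees the same input
        loss = loss + F.mse_loss(s_out, t_out)
        h = t_out                   # next block consumes the teacher's output
    return loss
```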

LIT: Block-wise Intermediate Representation Training for Model Compression

no code implementations • ICLR 2019 • Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia

Knowledge distillation (KD) is a popular method for reducing the computational overhead of deep network inference, in which the output of a teacher model is used to train a smaller, faster student model.

Knowledge Distillation • Model Compression
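The abstract summarizes standard knowledge distillation: a student is trained on the outputs of a teacher model. Below is a minimal sketch of the conventional distillation objective from the literature; the temperature `T` and weighting `alpha` are generic hyperparameters assumed for illustration, not values taken from this paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Conventional knowledge-distillation loss: a KL term between
    temperature-softened teacher and student distributions, plus the
    usual cross-entropy against the ground-truth labels."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```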
