Search Results for author: Nam Trung Pham

Found 2 papers, 1 paper with code

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay

1 code implementation • 9 Jan 2022 • Kuluhan Binici, Shivam Aggarwal, Nam Trung Pham, Karianto Leman, Tulika Mitra

In particular, we design a Variational Autoencoder (VAE) with a training objective that is customized to learn the synthetic data representations optimally.

Data-free Knowledge Distillation • Image Classification +1
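The abstract snippet above mentions a Variational Autoencoder trained with a customized objective to retain synthetic data representations. The exact customization is not given in this listing, so the sketch below shows only the generic VAE objective (reconstruction error plus a KL term pulling the approximate posterior toward a standard normal) that such a design would build on; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Generic VAE objective: reconstruction error plus the KL divergence
    of the approximate posterior N(mu, exp(log_var)) from N(0, I).
    The paper customizes this objective for pseudo replay of synthetic
    data; that customization is not reproduced here."""
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    kl = -0.5 * np.mean(np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1))
    return recon + kl

# Toy check: perfect reconstruction and a standard-normal posterior
# (mu = 0, log_var = 0) make both terms vanish.
x = np.zeros((4, 8))
loss = vae_loss(x, x, np.zeros((4, 2)), np.zeros((4, 2)))
# loss is 0.0 here
```

Any customized variant would add terms to this base objective rather than replace the reconstruction/KL trade-off.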

Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data

no code implementations • 11 Aug 2021 • Kuluhan Binici, Nam Trung Pham, Tulika Mitra, Karianto Leman

Moreover, the sample generation strategies in some of these methods could result in a mismatch between the synthetic and real data distributions.

Knowledge Distillation • Model Compression
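The abstract snippet above points to a mismatch between synthetic and real data distributions. The listing does not say how the paper measures that mismatch, so the sketch below uses one common, illustrative proxy from the data-free distillation literature: the gap between per-feature batch statistics (means and variances) of real and synthetic features. The function name and shapes are assumptions, not the paper's method.

```python
import numpy as np

def stat_mismatch(real_feats, synth_feats):
    """Illustrative proxy for synthetic/real distribution mismatch:
    squared distance between per-feature batch means and variances.
    Not necessarily the measure used in the paper."""
    mu_gap = np.sum((real_feats.mean(axis=0) - synth_feats.mean(axis=0)) ** 2)
    var_gap = np.sum((real_feats.var(axis=0) - synth_feats.var(axis=0)) ** 2)
    return mu_gap + var_gap

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(256, 16))
shifted = rng.normal(2.0, 1.0, size=(256, 16))  # deliberately mismatched mean
# Identical features give zero mismatch; a mean shift increases it.
```

A generator that minimizes a statistic gap like this keeps synthetic samples closer to the real data distribution the teacher was trained on.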
