Search Results for author: Yoojin Choi

Found 13 papers, 1 paper with code

Zero-Shot Learning of a Conditional Generative Adversarial Network for Data-Free Network Quantization

no code implementations • 26 Oct 2022 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

We propose a novel method for training a conditional generative adversarial network (CGAN) without the use of training data, called zero-shot learning of a CGAN (ZS-CGAN).

Tasks: Data Free Quantization • Generative Adversarial Network • +1 more

Toward Sustainable Continual Learning: Detection and Knowledge Repurposing of Similar Tasks

no code implementations • 11 Oct 2022 • Sijia Wang, Yoojin Choi, Junya Chen, Mostafa El-Khamy, Ricardo Henao

This results in the eventual prohibitive expansion of the knowledge repository if we consider learning from a long sequence of tasks.

Tasks: Continual Learning

Dual-Teacher Class-Incremental Learning With Data-Free Generative Replay

no code implementations • 17 Jun 2021 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

In the conventional generative replay, the generative model is pre-trained for old data and shared in extra memory for later incremental learning.

Tasks: Class Incremental Learning • Incremental Learning • +2 more
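As a rough sketch of the conventional generative replay described in the excerpt above (not the paper's dual-teacher, data-free method): old-class samples come from a stored generator and the old model supplies soft targets while new-class data are learned directly. Module names, losses, and hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def incremental_step(model, old_model, old_generator, new_loader, optimizer, z_dim=128):
    """One pass of conventional generative replay (illustrative placeholders)."""
    model.train()
    old_model.eval()
    for x_new, y_new in new_loader:
        # Replay: synthesize samples standing in for previously learned classes.
        with torch.no_grad():
            x_old = old_generator(torch.randn(x_new.size(0), z_dim))
            y_old_soft = F.softmax(old_model(x_old), dim=1)  # old model as teacher

        # Cross-entropy on real new-class data + distillation on replayed data.
        loss = F.cross_entropy(model(x_new), y_new)
        loss = loss + F.kl_div(F.log_softmax(model(x_old), dim=1),
                               y_old_soft, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```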

Data-Free Network Quantization With Adversarial Knowledge Distillation

1 code implementation • 8 May 2020 • Yoojin Choi, Jihwan Choi, Mostafa El-Khamy, Jungwon Lee

The synthetic data are generated by a generator, while no original data are used either to train the generator or to perform quantization.

Tasks: Knowledge Distillation • Model Compression • +1 more
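A minimal PyTorch-style sketch of the data-free adversarial knowledge-distillation loop suggested by the abstract: the generator is trained to maximize teacher-student disagreement on synthetic inputs, and the student is trained to match the teacher on them. The module names, optimizers, and KL-based loss are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(teacher_logits, student_logits, T=1.0):
    # KL divergence between softened teacher and student outputs.
    p = F.log_softmax(teacher_logits / T, dim=1)
    q = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(q, p, log_target=True, reduction="batchmean")

def data_free_kd_step(teacher, student, generator, opt_student, opt_generator,
                      batch_size=64, z_dim=128):
    teacher.eval()  # the pre-trained teacher stays frozen throughout

    # 1) Generator step: produce synthetic inputs that maximize the
    #    teacher-student mismatch (adversarial objective).
    x_syn = generator(torch.randn(batch_size, z_dim))
    loss_g = -distillation_loss(teacher(x_syn), student(x_syn))
    opt_generator.zero_grad()
    loss_g.backward()
    opt_generator.step()

    # 2) Student step: match the teacher on freshly generated synthetic inputs.
    with torch.no_grad():
        x_syn = generator(torch.randn(batch_size, z_dim))
        t_logits = teacher(x_syn)
    loss_s = distillation_loss(t_logits, student(x_syn))
    opt_student.zero_grad()
    loss_s.backward()
    opt_student.step()
    return loss_g.item(), loss_s.item()
```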

Wyner VAE: A Variational Autoencoder with Succinct Common Representation Learning

no code implementations • 25 Sep 2019 • J. Jon Ryu, Yoojin Choi, Young-Han Kim, Mostafa El-Khamy, Jungwon Lee

A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks.

Tasks: Representation Learning
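For orientation only, a generic shared-latent ("joint") VAE sketch of learning one common representation for two correlated variables; the actual Wyner VAE objective, built around Wyner's common information, is more involved than this placeholder.

```python
import torch
import torch.nn as nn

class JointVAE(nn.Module):
    """Toy two-view VAE with a single shared latent z (not the paper's model)."""

    def __init__(self, dim_x, dim_y, dim_z=16, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * dim_z))  # mean and log-variance
        self.dec_x = nn.Sequential(nn.Linear(dim_z, hidden), nn.ReLU(), nn.Linear(hidden, dim_x))
        self.dec_y = nn.Sequential(nn.Linear(dim_z, hidden), nn.ReLU(), nn.Linear(hidden, dim_y))

    def forward(self, x, y):
        mu, logvar = self.enc(torch.cat([x, y], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        recon = (self.dec_x(z) - x).pow(2).mean() + (self.dec_y(z) - y).pow(2).mean()
        # ELBO-style loss; marginal encoders q(z|x), q(z|y) would be added
        # to support conditional generation across modalities.
        return recon + kl
```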

Variable Rate Deep Image Compression With a Conditional Autoencoder

no code implementations • ICCV 2019 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

Our model also shows comparable and sometimes better performance than the state-of-the-art learned image compression models that deploy multiple networks trained for varying rates.

Tasks: Image Compression • Quantization

Learning with Succinct Common Representation Based on Wyner's Common Information

no code implementations • 27 May 2019 • J. Jon Ryu, Yoojin Choi, Young-Han Kim, Mostafa El-Khamy, Jungwon Lee

A new bimodal generative model is proposed for generating conditional and joint samples, accompanied by a training method that learns a succinct bottleneck representation.

Tasks: Density Ratio Estimation • Image Retrieval • +3 more

Jointly Sparse Convolutional Neural Networks in Dual Spatial-Winograd Domains

no code implementations • 21 Feb 2019 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

We consider the optimization of deep convolutional neural networks (CNNs) such that they provide good performance while having reduced complexity if deployed on either conventional systems with spatial-domain convolution or lower-complexity systems designed for Winograd convolution.
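For context, a small NumPy sketch of the standard Winograd F(2x2, 3x3) transform that such dual-domain deployment relies on: the filter and the input tile are transformed, multiplied elementwise, and transformed back. The paper's joint sparsity would target both the spatial filter g and its Winograd-domain image G g G^T; the sparsification itself is not shown here.

```python
import numpy as np

# Standard transform matrices for Winograd F(2x2, 3x3).
B_T = np.array([[1, 0, -1, 0],
                [0, 1,  1, 0],
                [0, -1, 1, 0],
                [0, 1,  0, -1]], dtype=np.float32)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float32)
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=np.float32)

def winograd_f2x2_3x3(d, g):
    """Compute a 2x2 output tile from a 4x4 input tile d and a 3x3 filter g."""
    U = G @ g @ G.T              # filter in the Winograd domain (4x4)
    V = B_T @ d @ B_T.T          # input tile in the Winograd domain (4x4)
    return A_T @ (U * V) @ A_T.T # elementwise product, then inverse transform

d = np.random.randn(4, 4).astype(np.float32)
g = np.random.randn(3, 3).astype(np.float32)
# Reference: direct 'valid' correlation of d with g gives the same 2x2 tile.
ref = np.array([[np.sum(d[i:i + 3, j:j + 3] * g) for j in range(2)] for i in range(2)])
print(np.allclose(winograd_f2x2_3x3(d, g), ref, atol=1e-5))
```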

Learning Sparse Low-Precision Neural Networks With Learnable Regularization

no code implementations • 1 Sep 2018 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

In training low-precision networks, gradient descent in the backward pass is performed with high-precision weights while quantized low-precision weights and activations are used in the forward pass to calculate the loss function for training.

Tasks: Image Super-Resolution • L2 Regularization • +1 more
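A compact sketch of the forward/backward split described above, using a straight-through estimator: the forward pass evaluates the loss with quantized weights, while gradients flow to the underlying high-precision weights. The 4-bit symmetric uniform quantizer is an illustrative assumption, not the paper's learnable-regularization scheme.

```python
import torch

class QuantizeSTE(torch.autograd.Function):
    """Uniform weight quantizer with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, w, num_bits=4):
        # Symmetric uniform quantization of the full-precision weights.
        qmax = 2 ** (num_bits - 1) - 1
        scale = w.abs().max() / qmax
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass the gradient to the full-precision weights.
        return grad_output, None

class QuantLinear(torch.nn.Linear):
    """Linear layer that uses quantized weights in the forward pass only."""

    def forward(self, x):
        w_q = QuantizeSTE.apply(self.weight, 4)
        return torch.nn.functional.linear(x, w_q, self.bias)

# Usage: the loss is evaluated with 4-bit weights, but gradient descent
# updates self.weight in high precision.
layer = QuantLinear(16, 8)
out = layer(torch.randn(2, 16))
```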

Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints

no code implementations • 21 May 2018 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

In particular, the proposed framework produces one compressed model whose convolutional filters can be made sparse either in the spatial domain or in the Winograd domain.

Tasks: Quantization

Universal Deep Neural Network Compression

no code implementations • NIPS Workshop CDNNRIA 2018 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

In this paper, we investigate lossy compression of deep neural networks (DNNs) by weight quantization and lossless source coding for memory-efficient deployment.

Tasks: Neural Network Compression • Quantization
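An illustrative sketch of the two-stage pipeline named in the abstract: quantize weights to a small set of levels (lossy), then losslessly encode the resulting indices. The uniform grid and the zlib coder are stand-ins for illustration; the paper's universal quantization and coding scheme differs in its details.

```python
import zlib
import numpy as np

def compress_weights(w, num_bits=4):
    """Quantize a weight array to a uniform grid, then losslessly encode the indices."""
    levels = 2 ** num_bits
    step = (w.max() - w.min()) / (levels - 1)
    indices = np.round((w - w.min()) / step).astype(np.uint8)  # lossy quantization
    payload = zlib.compress(indices.tobytes())                 # lossless source coding
    return payload, (float(w.min()), float(step))

def decompress_weights(payload, meta, shape):
    w_min, step = meta
    indices = np.frombuffer(zlib.decompress(payload), dtype=np.uint8)
    return (indices.astype(np.float32) * step + w_min).reshape(shape)

w = np.random.randn(1000).astype(np.float32)
payload, meta = compress_weights(w)
w_hat = decompress_weights(payload, meta, w.shape)
print(len(payload), np.abs(w - w_hat).max())  # compressed size and max quantization error
```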

Towards the Limit of Network Quantization

no code implementations • 5 Dec 2016 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

Network quantization is a network compression technique that reduces the redundancy of deep neural networks.

Tasks: Clustering • Quantization
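As a toy illustration of clustering-based quantization (weight sharing), the sketch below replaces each weight with its k-means centroid so that only a small codebook plus per-weight indices need to be stored; treat this as a plain baseline, not the refined quantization studied in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_quantize(w, num_clusters=16):
    """Replace each weight by its cluster centroid (weight sharing)."""
    km = KMeans(n_clusters=num_clusters, n_init=10).fit(w.reshape(-1, 1))
    # Each weight is stored as a small cluster index plus a shared codebook.
    w_q = km.cluster_centers_[km.labels_].reshape(w.shape)
    return w_q, km.labels_, km.cluster_centers_

w = np.random.randn(256, 256).astype(np.float32)
w_q, labels, codebook = cluster_quantize(w)
print(codebook.shape, np.abs(w - w_q).mean())  # 16-entry codebook, mean quantization error
```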
