no code implementations • 25 Apr 2024 • Jaeseong You, Minseop Park, Kyunggeun Lee, Seokjun An, Chirag Patel, Markus Nagel
This paper investigates three different parameterizations of asymmetric uniform quantization for quantization-aware training: (1) scale and offset, (2) minimum and maximum, and (3) beta and gamma.
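To make the three options concrete, here is a minimal PyTorch sketch of an asymmetric uniform fake-quantizer expressed under each parameterization. The (beta, gamma) variant is shown under one plausible reading (learnable multipliers applied to an initial min/max range); the excerpt does not define it, so that part is an assumption.

```python
import torch

def fake_quantize(x, scale, zero_point, num_bits=8):
    """Asymmetric uniform fake-quantization: quantize, clamp, dequantize.
    (1) scale/offset parameterization: learn scale and zero_point directly."""
    qmin, qmax = 0, 2 ** num_bits - 1
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale

def from_min_max(lo, hi, num_bits=8):
    """(2) min/max parameterization: learn the clipping range [lo, hi]
    and derive (scale, zero_point) from it."""
    scale = (hi - lo) / (2 ** num_bits - 1)
    zero_point = torch.round(-lo / scale)
    return scale, zero_point

def from_beta_gamma(beta, gamma, lo0, hi0, num_bits=8):
    """(3) beta/gamma parameterization -- hypothetical reading: learnable
    multipliers (beta, gamma) rescale an initial range [lo0, hi0]."""
    return from_min_max(beta * lo0, gamma * hi0, num_bits)

x = torch.randn(4)
s, z = from_min_max(torch.tensor(-1.0), torch.tensor(1.0))
print(fake_quantize(x, s, z))
```

All three produce the same quantizer for matching ranges; what differs under quantization-aware training is which quantities receive gradients.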
no code implementations • 30 Nov 2022 • Minseop Park, Jaeseong You, Markus Nagel, Simyung Chang
In such settings, quantization-aware training is observed to overfit the model to the fine-tuning data.
1 code implementation • ICLR 2020 • Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang
While tasks in realistic settings can come with varying numbers of instances and classes, existing meta-learning approaches for few-shot classification assume that the number of instances per task and class is fixed.
no code implementations • 11 Apr 2019 • Minseop Park, Jungtaek Kim, Saehoon Kim, Yanbin Liu, Seungjin Choi
A meta-model is trained on a distribution of similar tasks such that it learns an algorithm that can quickly adapt to a novel task with only a handful of labeled examples.
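As a rough illustration of this setup, the following is a generic MAML-style sketch (not necessarily this paper's method): each task contributes a support set for a quick inner-loop adaptation step and a query set for the meta-objective.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def maml_meta_loss(model, tasks, inner_lr=0.01):
    """tasks: list of (support_x, support_y, query_x, query_y) tensors."""
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        # Inner loop: one SGD step on the task's few labeled support examples.
        inner_loss = F.cross_entropy(model(support_x), support_y)
        grads = torch.autograd.grad(inner_loss, list(model.parameters()),
                                    create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(model.named_parameters(), grads)}
        # Outer objective: how well the adapted parameters do on query data.
        query_logits = functional_call(model, adapted, (query_x,))
        meta_loss = meta_loss + F.cross_entropy(query_logits, query_y)
    return meta_loss / len(tasks)
```

Backpropagating this meta-loss through the inner step (enabled by `create_graph=True`) trains an initialization from which a single gradient step already adapts well to a novel task.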
2 code implementations • ICLR 2019 • Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, Yi Yang
The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class.