no code implementations • 22 Nov 2023 • Seongyoon Kim, Gihun Lee, Jaehoon Oh, Se-Young Yun
Additionally, we observe that, as data heterogeneity increases, the gap between the feature norms of observed classes (obtained from local models) and those of unobserved classes widens, in contrast to the behavior of the classifier weight norms.
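A minimal sketch of how such norms could be measured, assuming a model exposing `features(x)` and a linear `classifier` (hypothetical attribute names), evaluated on a loader that covers both observed and unobserved classes:

```python
import torch

@torch.no_grad()
def feature_norm_gap(model, loader, observed_classes):
    """Average penultimate feature norm per class, split into observed vs.
    unobserved classes. Assumes `model.features(x)` returns penultimate
    features and `model.classifier` is the final linear layer (hypothetical
    names); `loader` should cover all classes, e.g. a balanced test set."""
    model.eval()
    sums, counts = {}, {}
    for x, y in loader:
        norms = model.features(x).norm(dim=1)       # per-sample L2 feature norm
        for c in y.unique().tolist():
            m = y == c
            sums[c] = sums.get(c, 0.0) + norms[m].sum().item()
            counts[c] = counts.get(c, 0) + int(m.sum())
    avg = {c: sums[c] / counts[c] for c in sums}
    obs = [v for c, v in avg.items() if c in observed_classes]
    unobs = [v for c, v in avg.items() if c not in observed_classes]
    gap = sum(obs) / len(obs) - sum(unobs) / len(unobs)
    weight_norms = model.classifier.weight.norm(dim=1)  # per-class weight norms
    return gap, weight_norms
```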
no code implementations • 29 Aug 2023 • Seongha Eom, Namgyu Ho, Jaehoon Oh, Se-Young Yun
Given a query image, we harness the power of CLIP's cross-modal representations to retrieve relevant textual information from an external image-text pair dataset.
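A minimal sketch of this retrieval step with the openai/CLIP package; the external image-text pair dataset is stood in for by a small caption list, and the query file name is hypothetical:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Stand-in for the text side of the external image-text pair dataset.
captions = ["a photo of a golden retriever", "a diagram of a jet engine"]

image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)  # hypothetical query
tokens = clip.tokenize(captions).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(tokens)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    sims = (img_emb @ txt_emb.T).squeeze(0)       # cosine similarity to each caption

top = sims.topk(k=min(2, len(captions)))
retrieved = [captions[i] for i in top.indices.tolist()]
```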
no code implementations • 24 Aug 2023 • Gihun Lee, Minchan Jeong, Sangmook Kim, Jaehoon Oh, Se-Young Yun
FedSOL is designed to identify gradients of local objectives that are inherently orthogonal to directions affecting the proximal objective.
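A sketch of the orthogonality idea only, not the exact FedSOL update rule: project each local gradient onto the subspace orthogonal to the corresponding proximal-objective gradient.

```python
import torch

def orthogonalize(local_grads, prox_grads, eps=1e-12):
    """Remove from each local gradient its component along the corresponding
    proximal-objective gradient, leaving only the part orthogonal to it."""
    ortho = []
    for g_loc, g_prox in zip(local_grads, prox_grads):
        coef = (g_loc * g_prox).sum() / (g_prox.norm() ** 2 + eps)
        ortho.append(g_loc - coef * g_prox)
    return ortho
```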
1 code implementation • 18 Oct 2022 • Jaehoon Oh, Jongwoo Ko, Se-Young Yun
Translation has played a crucial role in improving performance on multilingual tasks: (1) generating target-language data from source-language data for training, and (2) generating source-language data from target-language data for inference.
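A schematic of the two strategies, with a hypothetical `translate` helper standing in for any MT system:

```python
def translate(text: str, src_lang: str, tgt_lang: str) -> str:
    """Placeholder for any MT system (hypothetical); identity keeps the sketch runnable."""
    return text

train_src = [("the movie was great", "positive")]  # toy source-language data

# (1) Translate-train: create target-language training data from source data.
train_tgt = [(translate(x, "en", "de"), y) for x, y in train_src]

# (2) Translate-test: map target-language inputs to the source language at inference.
def predict(model, x_tgt: str):
    return model(translate(x_tgt, "de", "en"))
```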
no code implementations • 18 Jun 2022 • Jaehoon Oh, Se-Young Yun
Few-shot class-incremental learning (FSCIL) addresses challenging real-world scenarios in which unseen novel classes continually arrive with only a few samples.
no code implementations • 13 May 2022 • Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun
Next, we show that data augmentation does not guarantee improved few-shot performance, and we investigate how its effectiveness depends on the intensity of the augmentation.
no code implementations • 11 May 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
Cross-domain few-shot learning (CD-FSL), in which only a few target samples are available under extreme differences between the source and target domains, has recently attracted considerable attention.
2 code implementations • 1 Feb 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.
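A minimal sketch of combining the two pre-training signals in one step; the SimCLR-style NT-Xent loss and the joint weighting `lam` are assumptions, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def ntxent(z1, z2, tau=0.1):
    """SimCLR-style contrastive loss over two augmented views (stand-in
    self-supervised objective)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.T / tau
    sim.fill_diagonal_(float("-inf"))             # mask self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

def pretrain_step(encoder, head, src_batch, tgt_views, opt, lam=1.0):
    (xs, ys), (t1, t2) = src_batch, tgt_views
    sup = F.cross_entropy(head(encoder(xs)), ys)  # supervised loss on source
    ssl = ntxent(encoder(t1), encoder(t2))        # self-supervised loss on target
    loss = sup + lam * ssl
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```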
1 code implementation • ICLR 2022 • Jaehoon Oh, Sangmook Kim, Se-Young Yun
Based on this observation, we propose a novel federated learning algorithm, coined FedBABU, which updates only the body of the model during federated training (i.e., the head is randomly initialized and never updated); the head is then fine-tuned for personalization during the evaluation process.
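A minimal sketch of the body-only update, assuming a model split into `body` and `head` modules (layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.head = nn.Linear(256, num_classes)   # randomly initialized, frozen below

    def forward(self, x):
        return self.head(self.body(x))

model = Net()
for p in model.head.parameters():                 # freeze the head for federated training
    p.requires_grad = False
fed_opt = torch.optim.SGD(model.body.parameters(), lr=0.1)

# ... federated rounds update only the body ...

for p in model.head.parameters():                 # at evaluation, unfreeze and fine-tune
    p.requires_grad = True                        # the head for personalization
ft_opt = torch.optim.SGD(model.head.parameters(), lr=0.01)
```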
1 code implementation • 19 May 2021 • Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun
From this observation, we consider an intuitive KD loss function, the mean squared error (MSE) between the logit vectors, so that the student model can directly learn the logits of the teacher model.
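A minimal sketch of the loss; the teacher is detached so gradients flow only through the student, and the weighting against the task loss is an assumption:

```python
import torch
import torch.nn.functional as F

def kd_mse_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """Logit matching: the student regresses the teacher's raw logits."""
    return F.mse_loss(student_logits, teacher_logits.detach())

# Possible combination with the task loss (the weighting alpha is an assumption):
# loss = F.cross_entropy(student_logits, labels) + alpha * kd_mse_loss(student_logits, teacher_logits)
```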
no code implementations • 1 Jan 2021 • Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun
To verify this conjecture, we test an extreme logit-learning model, in which KD is implemented as the mean squared error (MSE) between the student's logits and the teacher's logits.
no code implementations • 9 Dec 2020 • Jin-woo Lee, Jaehoon Oh, Yooju Shin, Jae-Gil Lee, Se-Young Yun
Federated learning has emerged as a new paradigm of collaborative machine learning; however, it faces several challenges, such as non-independent and identically distributed (non-IID) data and high communication costs.
no code implementations • 6 Dec 2020 • Jin-woo Lee, Jaehoon Oh, Sungsu Lim, Se-Young Yun, Jae-Gil Lee
Federated learning has emerged as a new paradigm of collaborative machine learning; however, many prior studies perform global aggregation along a star topology, with little consideration of communication scalability or of the diurnal property arising from clients' differing local times.
1 code implementation • ICLR 2021 • Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, Se-Young Yun
It has recently been hypothesized that representation reuse, in which the learned representations change little during adaptation, is the dominant factor in the performance of a model meta-initialized through MAML, in contrast to representation change, in which the representations change significantly.
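One way to make the distinction concrete is to compare features before and after inner-loop adaptation; a minimal sketch using average cosine similarity (the metric choice is an assumption):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def representation_shift(encoder_before, encoder_after, x):
    """Average cosine similarity between features extracted before and after
    inner-loop adaptation: values near 1 suggest representation reuse,
    lower values suggest representation change."""
    f0 = F.normalize(encoder_before(x), dim=1)
    f1 = F.normalize(encoder_after(x), dim=1)
    return (f0 * f1).sum(dim=1).mean().item()
```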
1 code implementation • 24 Apr 2020 • Gihun Lee, Sangmin Bae, Jaehoon Oh, Se-Young Yun
With the success of deep learning in various fields and the advent of numerous Internet of Things (IoT) devices, it is essential to make models lightweight enough to run on low-power devices.
no code implementations • 26 Oct 2018 • Jaehoon Oh, Duyeon Kim, Se-Young Yun
The proposed model can be used not only for singing voice separation but also for multi-instrument separation, by changing only the number of output channels.
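A sketch of the output-channel idea: the network emits one spectrogram mask per source, so moving from singing voice separation to multi-instrument separation changes only `num_sources` (layer shapes are hypothetical):

```python
import torch
import torch.nn as nn

class SeparatorHead(nn.Module):
    """Final layer of a separation network: one spectrogram mask per source."""
    def __init__(self, in_channels: int, num_sources: int):
        super().__init__()
        self.out = nn.Conv2d(in_channels, num_sources, kernel_size=1)

    def forward(self, features, mixture_spec):
        masks = torch.sigmoid(self.out(features))    # (B, num_sources, F, T)
        return masks * mixture_spec.unsqueeze(1)     # one masked spectrogram per source

vocal_head = SeparatorHead(in_channels=64, num_sources=2)  # vocals + accompaniment
multi_head = SeparatorHead(in_channels=64, num_sources=4)  # e.g., vocals/drums/bass/other
```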