1 code implementation • 22 Feb 2024 • Wonjeong Choi, Jungwuk Park, Dong-Jun Han, YoungHyun Park, Jaekyun Moon
In this paper, we propose consistency-guided temperature scaling (CTS), a new temperature scaling strategy that significantly enhances out-of-distribution (OOD) calibration performance by providing mutual supervision among data samples in the source domains.
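CTS builds on post-hoc temperature scaling, in which a single temperature is fit on held-out data to rescale logits. A minimal sketch of that baseline follows; the consistency supervision across source domains that distinguishes CTS is method-specific and not shown, and `TemperatureScaler`/`fit_temperature` are illustrative names rather than the paper's API.

```python
import torch
import torch.nn as nn

class TemperatureScaler(nn.Module):
    """Standard post-hoc temperature scaling (the baseline CTS extends)."""
    def __init__(self):
        super().__init__()
        self.log_t = nn.Parameter(torch.zeros(1))  # T = exp(log_t) > 0

    def forward(self, logits):
        return logits / self.log_t.exp()

def fit_temperature(scaler, logits, labels):
    # Fit T by minimizing the NLL of scaled logits on a calibration set.
    opt = torch.optim.LBFGS([scaler.log_t], lr=0.01, max_iter=200)
    nll = nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(scaler(logits), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return scaler.log_t.exp().item()  # the learned temperature T
```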
no code implementations • NeurIPS 2023 • Mohammad Mahdi Rahimi, Hasnain Irshad Bhatti, YoungHyun Park, Humaira Kousar, Jaekyun Moon
This global fitness vector is then disseminated back to the nodes, each of which applies the same update, keeping every node synchronized with the global model.
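The key to this synchronization is that every node can regenerate the identical perturbation population from a shared random seed, so only fitness scores need to travel. Below is a minimal sketch under that assumption, using a simplified evolutionary-strategies update; `es_round` and its parameters are illustrative, not the paper's interface.

```python
import numpy as np

def es_round(theta, local_fitness_fn, num_nodes,
             pop_size=64, sigma=0.1, lr=0.01, seed=0):
    # All nodes share the same RNG seed, so each can regenerate the
    # identical perturbation population without communicating it.
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((pop_size, theta.size))

    # Each node scores every perturbed model on its local data and
    # sends only a pop_size-long fitness vector to the server.
    local_scores = [local_fitness_fn(node, theta + sigma * eps)
                    for node in range(num_nodes)]

    # Server averages into a global fitness vector and broadcasts it.
    global_fitness = np.mean(local_scores, axis=0)   # shape: (pop_size,)

    # Every node applies the same ES-style update, staying in sync.
    update = (global_fitness[:, None] * eps).mean(axis=0) / sigma
    return theta + lr * update
```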
no code implementations • 8 Jun 2023 • Jungwuk Park, Dong-Jun Han, Soyeong Kim, Jaekyun Moon
In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference.
no code implementations • 16 Dec 2022 • Dong-Jun Han, Do-Yeon Kim, Minseok Choi, Christopher G. Brinton, Jaekyun Moon
A fundamental challenge to providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) properties concurrently.
no code implementations • 1 Aug 2022 • Hasnain Irshad Bhatti, Jaekyun Moon
Locally supervised learning aims to train a neural network based on a local estimation of the global loss function at each decoupled module of the network.
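A minimal sketch of such module-wise decoupling, assuming each module carries an auxiliary head that produces the local loss estimate and that activations are detached between modules (all names illustrative):

```python
import torch
import torch.nn as nn

class LocalModule(nn.Module):
    """One decoupled module plus an auxiliary head that locally
    estimates the global loss; gradients never cross modules."""
    def __init__(self, block, aux_head):
        super().__init__()
        self.block, self.aux_head = block, aux_head

    def forward(self, x, y, criterion):
        h = self.block(x)
        loss = criterion(self.aux_head(h), y)  # local loss estimate
        return h.detach(), loss                # detach: decouple modules

def train_step(modules, x, y, criterion, optimizers):
    # Each module trains on its own local loss, in a single forward pass.
    for module, opt in zip(modules, optimizers):
        x, loss = module(x, y, criterion)
        opt.zero_grad()
        loss.backward()
        opt.step()
```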
no code implementations • 14 Feb 2022 • Jun Seo, Young-Hyun Park, Sung Whan Yoon, Jaekyun Moon
The task-conditioned feature transformation allows effective utilization of the semantic information in novel classes to generate tight segmentation masks.
no code implementations • 7 Jan 2022 • Jy-yong Sohn, Liang Shang, Hongxu Chen, Jaekyun Moon, Dimitris Papailiopoulos, Kangwook Lee
Mixup is a data augmentation method that generates new data points by mixing a pair of input data.
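For reference, the standard mixup operation can be written in a few lines (this is the generic formulation, not the specific variant analyzed in the paper):

```python
import torch

def mixup(x, y_onehot, alpha=0.2):
    """Classic input mixup: convex-combine a random pair of
    examples and their one-hot labels with a Beta-sampled weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```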
no code implementations • NeurIPS 2021 • Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon
While federated learning (FL) allows efficient model training with local data at edge devices, two major issues remain to be resolved: slow devices, known as stragglers, and malicious attacks launched by adversaries.
no code implementations • 29 Sep 2021 • Dong-Jun Han, Hasnain Irshad Bhatti, Jungmoon Lee, Jaekyun Moon
Federated learning (FL) operates based on model exchanges between the server and the clients, and suffers from significant communication and client-side computation burdens.
no code implementations • NeurIPS 2021 • YoungHyun Park, Dong-Jun Han, Do-Yeon Kim, Jun Seo, Jaekyun Moon
Among the central issues that may limit widespread adoption of FL are the significant communication resources required to exchange updated model parameters between the server and individual clients over many communication rounds.
no code implementations • 1 Jan 2021 • Dong-Jun Han, Minseok Choi, Jungwuk Park, Jaekyun Moon
Our key idea is to utilize the devices located in the overlapping areas between the coverage regions of edge servers: in the model-downloading stage, these devices receive multiple models from different edge servers, average the received models, and then update the result with their local data.
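A minimal sketch of this model-downloading step, assuming each overlapping-area device simply parameter-averages the models it receives before running ordinary local SGD (names are illustrative):

```python
import torch

def overlap_download_step(received_models, local_loader, loss_fn, lr=0.01):
    """Device in an overlapping coverage area: average the models
    received from multiple edge servers, then train locally."""
    # Parameter-wise average of the models from different edge servers.
    avg_state = {}
    for key in received_models[0].state_dict():
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in received_models]).mean(0)
    model = received_models[0]
    model.load_state_dict(avg_state)

    # Proceed with ordinary local SGD on the device's own data.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in local_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model
```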
no code implementations • 1 Jan 2021 • Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon
While federated learning allows efficient model training with local data at edge devices, two major issues need to be resolved: slow devices, known as stragglers, and malicious attacks launched by adversaries.
no code implementations • 10 Dec 2020 • Beongjun Choi, Jy-yong Sohn, Dong-Jun Han, Jaekyun Moon
Through extensive real-world experiments, we demonstrate that our scheme, using only $20 \sim 30\%$ of the resources required in the conventional scheme, maintains virtually the same levels of reliability and data privacy in practical federated learning systems.
no code implementations • 22 Oct 2020 • Jun Seo, Young-Hyun Park, Sung-Whan Yoon, Jaekyun Moon
Few-shot learning allows machines to classify novel classes using only a few labeled samples.
1 code implementation • ICML 2020 • Sung Whan Yoon, Do-Yeon Kim, Jun Seo, Jaekyun Moon
The base and novel classifiers quickly adapt to a given task by utilizing the task-adaptive representation (TAR).
no code implementations • 18 Mar 2020 • Young-Hyun Park, Jun Seo, Jaekyun Moon
Since there is no existing dataset for few-shot semantic edge detection, we construct two new datasets, FSE-1000 and SBD-$5^i$, and evaluate the performance of the proposed CAFENet on them.
no code implementations • 18 Mar 2020 • Jun Seo, Sung Whan Yoon, Jaekyun Moon
Our method employs explicit task-conditioning in which unlabeled sample clustering for the current task takes place in a new projection space different from the embedding feature space.
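A hedged sketch of this idea, assuming a task-conditioned projection matrix `M` (whose construction is method-specific and omitted) followed by soft k-means clustering in the projected space:

```python
import torch

def project_and_cluster(embeddings, proto_init, M, iters=5):
    """Illustrative only: soft k-means on task-projected features.
    M is a task-conditioned projection; its construction is not shown."""
    z = embeddings @ M          # (n, d') samples in the projection space
    protos = proto_init @ M     # (k, d') initial class prototypes

    for _ in range(iters):
        # Soft-assign each (possibly unlabeled) sample to prototypes.
        d2 = torch.cdist(z, protos) ** 2        # (n, k) squared distances
        w = torch.softmax(-d2, dim=1)
        # Re-estimate prototypes as weighted means in the projection space.
        protos = (w.t() @ z) / w.sum(0, keepdim=True).t()
    return protos, w
```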
no code implementations • NeurIPS 2020 • Jy-yong Sohn, Dong-Jun Han, Beongjun Choi, Jaekyun Moon
Recent advances in large-scale distributed learning algorithms have enabled communication-efficient training via SignSGD.
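As background, SignSGD with majority vote compresses each worker-to-server message to one bit per coordinate. A minimal sketch of the generic scheme (not this paper's robust coding built on top of it):

```python
import torch

def signsgd_round(worker_grads, params, lr=1e-3):
    """SignSGD with majority vote: each worker sends only gradient
    signs; the server aggregates by per-coordinate majority vote and
    broadcasts a 1-bit update."""
    for i, p in enumerate(params):
        votes = torch.stack([g[i].sign() for g in worker_grads]).sum(0)
        p.data -= lr * votes.sign()   # majority sign per coordinate
```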
no code implementations • 25 Sep 2019 • Sung Whan Yoon, Jun Seo, Jaekyun Moon
Our method employs explicit task-conditioning in which unlabeled sample clustering for the current task takes place in a new projection space different from the embedding feature space.
1 code implementation • 16 May 2019 • Sung Whan Yoon, Jun Seo, Jaekyun Moon
The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space.
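A minimal sketch of such a loss, assuming squared Euclidean distance in the projection space and a task-adaptive projection matrix `M` (construction omitted; names illustrative):

```python
import torch
import torch.nn.functional as F

def projection_space_loss(query_emb, references, M, labels):
    """Softmax over negative squared distances between projected
    queries and projected per-class reference vectors."""
    q = query_emb @ M                 # (n, d') projected queries
    r = references @ M                # (k, d') projected references
    logits = -torch.cdist(q, r) ** 2  # closer reference => larger logit
    return F.cross_entropy(logits, labels)
```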
no code implementations • 4 Jun 2018 • Sung Whan Yoon, Jun Seo, Jaekyun Moon
We propose a meta-learning algorithm utilizing a linear transformer that carries out null-space projection of neural network outputs.
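A hedged sketch of the null-space projection step, assuming the projection is computed from a matrix of per-class error directions via SVD (how those errors are formed is method-specific; names are illustrative):

```python
import torch

def null_space_projector(errors):
    """Build a linear map onto the null space of the given error
    directions (rows of `errors`), so projected features are
    insensitive to those directions."""
    # Right-singular vectors with (near-)zero singular values span
    # the null space of `errors`.
    _, s, vh = torch.linalg.svd(errors, full_matrices=True)
    rank = int((s > 1e-6).sum())
    return vh[rank:].t()   # (d, d - rank): columns span the null space

# Usage: z = features @ M maps network outputs into the null space,
# where the per-class error directions vanish.
```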