Search Results for author: Jaekyun Moon

Found 21 papers, 3 papers with code

Consistency-Guided Temperature Scaling Using Style and Content Information for Out-of-Domain Calibration

1 code implementation · 22 Feb 2024 · Wonjeong Choi, Jungwuk Park, Dong-Jun Han, YoungHyun Park, Jaekyun Moon

In this paper, we propose consistency-guided temperature scaling (CTS), a new temperature scaling strategy that can significantly enhance the OOD calibration performance by providing mutual supervision among data samples in the source domains.
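The CTS entry above builds on standard post-hoc temperature scaling, which rescales a trained model's logits by a learned scalar before the softmax. Below is a minimal sketch of plain temperature scaling fit on held-out validation logits, not the authors' consistency-guided variant; all tensors and names are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, lr=0.01, steps=200):
    """Learn a single temperature T > 0 minimizing the NLL of softmax(logits / T).

    logits: (N, C) raw outputs of a trained model on held-out data.
    labels: (N,) integer class labels.
    """
    log_t = torch.zeros(1, requires_grad=True)      # optimize log T to keep T positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# Usage with random placeholder data standing in for real validation logits:
logits = torch.randn(128, 10)
labels = torch.randint(0, 10, (128,))
T = fit_temperature(logits, labels)
calibrated_probs = F.softmax(logits / T, dim=1)
```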

Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization

no code implementations · 8 Jun 2023 · Jungwuk Park, Dong-Jun Han, Soyeong Kim, Jaekyun Moon

In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference.

Domain Generalization

SplitGP: Achieving Both Generalization and Personalization in Federated Learning

no code implementations · 16 Dec 2022 · Dong-Jun Han, Do-Yeon Kim, Minseok Choi, Christopher G. Brinton, Jaekyun Moon

A fundamental challenge in providing edge-AI services is the need for a machine learning (ML) model that achieves personalization (i.e., to individual clients) and generalization (i.e., to unseen data) concurrently.

Federated Learning

Locally Supervised Learning with Periodic Global Guidance

no code implementations · 1 Aug 2022 · Hasnain Irshad Bhatti, Jaekyun Moon

Locally supervised learning aims to train a neural network based on a local estimation of the global loss function at each decoupled module of the network.
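The description above refers to training each decoupled module from a local loss rather than backpropagating a single global loss end to end. Below is a minimal, generic sketch of locally supervised training with per-module auxiliary heads and stop-gradients between modules; the periodic global guidance proposed in the paper is not shown, and all layer sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two decoupled modules, each paired with its own auxiliary classifier head.
modules = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
])
heads = nn.ModuleList([nn.Linear(64, 10), nn.Linear(64, 10)])
opts = [torch.optim.SGD(list(m.parameters()) + list(h.parameters()), lr=0.1)
        for m, h in zip(modules, heads)]

def local_step(x, y):
    """One step in which every module is updated only from its own local loss."""
    h = x
    for module, head, opt in zip(modules, heads, opts):
        h = module(h.detach())                    # stop gradients from reaching earlier modules
        loss = F.cross_entropy(head(h), y)        # local estimate of the global objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# Usage with placeholder data:
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
print(local_step(x, y))
```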

Task-Adaptive Feature Transformer with Semantic Enrichment for Few-Shot Segmentation

no code implementations · 14 Feb 2022 · Jun Seo, Young-Hyun Park, Sung Whan Yoon, Jaekyun Moon

The task-conditioned feature transformation enables effective use of the semantic information in novel classes to generate tight segmentation masks.

Few-Shot Learning · Segmentation · +1

Sageflow: Robust Federated Learning against Both Stragglers and Adversaries

no code implementations · NeurIPS 2021 · Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon

While federated learning (FL) allows efficient model training with local data at edge devices, two major issues still to be resolved are slow devices, known as stragglers, and malicious attacks launched by adversaries.

Federated Learning

Accelerating Federated Split Learning via Local-Loss-Based Training

no code implementations · 29 Sep 2021 · Dong-Jun Han, Hasnain Irshad Bhatti, Jungmoon Lee, Jaekyun Moon

Federated learning (FL) operates through model exchanges between the server and the clients, and suffers from significant communication and client-side computation burdens.

Federated Learning

Few-Round Learning for Federated Learning

no code implementations · NeurIPS 2021 · YoungHyun Park, Dong-Jun Han, Do-Yeon Kim, Jun Seo, Jaekyun Moon

A central issue that may limit widespread adoption of FL is the significant communication resources required to exchange updated model parameters between the server and individual clients over many communication rounds.

Federated Learning · Few-Shot Learning

FedMes: Speeding Up Federated Learning with Multiple Edge Servers

no code implementations · 1 Jan 2021 · Dong-Jun Han, Minseok Choi, Jungwuk Park, Jaekyun Moon

Our key idea is to utilize the devices located in the overlapping coverage areas of different edge servers; in the model-downloading stage, these devices receive multiple models from different edge servers, average the received models, and then update the averaged model with their local data (a minimal sketch of this step follows below).

Federated Learning
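The FedMes entry above describes devices in overlapping coverage areas averaging the models received from several edge servers and then training on their local data. A minimal sketch of that averaging-plus-local-update step, assuming PyTorch models exchanged as state dicts (all names here are illustrative), could look like this:

```python
import copy
import torch

def average_state_dicts(state_dicts):
    """Element-wise average of several model state dicts received from edge servers."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def local_update(model, state_dicts, loader, loss_fn, lr=0.01, epochs=1):
    """Load the averaged model, then run a few epochs of local SGD on the device's data."""
    model.load_state_dict(average_state_dicts(state_dicts))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

# Usage with a toy model and two "downloaded" copies standing in for two edge servers:
model = torch.nn.Linear(8, 2)
downloads = [copy.deepcopy(model).state_dict() for _ in range(2)]
data = [(torch.randn(4, 8), torch.randint(0, 2, (4,)))]
local_update(model, downloads, data, torch.nn.functional.cross_entropy)
```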

Sself: Robust Federated Learning against Stragglers and Adversaries

no code implementations · 1 Jan 2021 · Jungwuk Park, Dong-Jun Han, Minseok Choi, Jaekyun Moon

While federated learning allows efficient model training with local data at edge devices, two major issues that need to be resolved are slow devices, known as stragglers, and malicious attacks launched by adversaries.

Data Poisoning · Federated Learning

Communication-Computation Efficient Secure Aggregation for Federated Learning

no code implementations · 10 Dec 2020 · Beongjun Choi, Jy-yong Sohn, Dong-Jun Han, Jaekyun Moon

Through extensive real-world experiments, we demonstrate that our scheme, using only $20 \sim 30\%$ of the resources required in the conventional scheme, maintains virtually the same levels of reliability and data privacy in practical federated learning systems.

Federated Learning · Privacy Preserving

CAFENet: Class-Agnostic Few-Shot Edge Detection Network

no code implementations · 18 Mar 2020 · Young-Hyun Park, Jun Seo, Jaekyun Moon

Since there is no existing dataset for few-shot semantic edge detection, we construct two new datasets, FSE-1000 and SBD-$5^i$, and evaluate the performance of the proposed CAFENet on them.

Edge Detection · Few-Shot Learning · +2

Task-Adaptive Clustering for Semi-Supervised Few-Shot Classification

no code implementations · 18 Mar 2020 · Jun Seo, Sung Whan Yoon, Jaekyun Moon

Our method employs explicit task-conditioning in which unlabeled sample clustering for the current task takes place in a new projection space different from the embedding feature space.

Classification · Clustering · +2
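The clustering step described in the entry above can be pictured as soft-assigning unlabeled embeddings to class prototypes after mapping both into a separate projection space. The sketch below illustrates that general idea only; the task-adaptive construction of the projection in the paper is not reproduced, and the projection matrix here is a random placeholder.

```python
import torch
import torch.nn.functional as F

def refine_prototypes(support_emb, support_labels, unlabeled_emb, projection, num_classes):
    """Soft-cluster unlabeled embeddings around class prototypes in a projected space.

    projection: (D, d) placeholder standing in for a task-adaptive projection matrix.
    """
    protos = torch.stack([support_emb[support_labels == c].mean(dim=0)
                          for c in range(num_classes)])           # (C, D) class prototypes
    z_protos = protos @ projection                                 # project prototypes
    z_unlab = unlabeled_emb @ projection                           # project unlabeled samples
    dists = torch.cdist(z_unlab, z_protos)                         # (U, C) pairwise distances
    soft_assign = F.softmax(-dists, dim=1)                         # soft cluster memberships
    # Update prototypes with soft-assigned unlabeled samples (in the original space).
    counts = support_labels.bincount(minlength=num_classes).float().unsqueeze(1)
    new_protos = (protos * counts + soft_assign.t() @ unlabeled_emb) \
                 / (counts + soft_assign.sum(0, keepdim=True).t())
    return new_protos

# Usage with placeholder tensors (5-way task, 64-dim embeddings, 32-dim projection):
emb, lab = torch.randn(25, 64), torch.arange(5).repeat(5)
unlab, P = torch.randn(40, 64), torch.randn(64, 32)
print(refine_prototypes(emb, lab, unlab, P, 5).shape)  # torch.Size([5, 64])
```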

Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks

no code implementations · NeurIPS 2020 · Jy-yong Sohn, Dong-Jun Han, Beongjun Choi, Jaekyun Moon

Recent advances in large-scale distributed learning algorithms have enabled communication-efficient training via SignSGD.
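The entry above refers to SignSGD, in which each worker transmits only the element-wise sign of its gradient and the server aggregates by majority vote. The sketch below shows that baseline mechanism only, not the election coding scheme proposed in the paper; all shapes and names are illustrative.

```python
import torch

def worker_message(gradient):
    """Each worker transmits only the element-wise sign of its local gradient (1 bit per coordinate)."""
    return torch.sign(gradient)

def majority_vote(sign_messages):
    """Server aggregates the workers' signs by majority vote and broadcasts the result."""
    return torch.sign(torch.stack(sign_messages).sum(dim=0))

def apply_update(params, vote, lr=0.01):
    """Workers apply the voted sign as the common update direction."""
    with torch.no_grad():
        params -= lr * vote

# Usage with placeholder gradients from 5 workers:
params = torch.zeros(10)
grads = [torch.randn(10) for _ in range(5)]
vote = majority_vote([worker_message(g) for g in grads])
apply_update(params, vote)
```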

Semi-Supervised Few-Shot Learning with a Controlled Degree of Task-Adaptive Conditioning

no code implementations · 25 Sep 2019 · Sung Whan Yoon, Jun Seo, Jaekyun Moon

Our method employs explicit task-conditioning in which unlabeled sample clustering for the current task takes place in a new projection space different from the embedding feature space.

Clustering · Few-Shot Learning

TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning

1 code implementation · 16 May 2019 · Sung Whan Yoon, Jun Seo, Jaekyun Moon

The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space.

Few-Shot Learning
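The loss described in the TapNet entry above can be pictured as a cross-entropy over negative distances between projected query embeddings and projected per-class reference vectors. The sketch below illustrates that form of loss only; TapNet's null-space construction of the task-adaptive projection is omitted, and the projection matrix is a random placeholder.

```python
import torch
import torch.nn.functional as F

def projection_space_loss(query_emb, query_labels, references, projection):
    """Cross-entropy over negative distances between projected queries and references.

    query_emb:  (Q, D) query embeddings from the feature network.
    references: (C, D) per-class reference vectors.
    projection: (D, d) placeholder for a task-adaptive projection matrix.
    """
    z_query = query_emb @ projection         # (Q, d) queries in the projection space
    z_refs = references @ projection         # (C, d) references in the projection space
    logits = -torch.cdist(z_query, z_refs)   # closer reference -> larger logit
    return F.cross_entropy(logits, query_labels)

# Usage with placeholder tensors (5-way task, 64-dim embeddings, 32-dim projection):
q, y = torch.randn(30, 64), torch.randint(0, 5, (30,))
refs, P = torch.randn(5, 64, requires_grad=True), torch.randn(64, 32)
loss = projection_space_loss(q, y, refs, P)
loss.backward()
```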

Meta-Learner with Linear Nulling

no code implementations · 4 Jun 2018 · Sung Whan Yoon, Jun Seo, Jaekyun Moon

We propose a meta-learning algorithm utilizing a linear transformer that carries out null-space projection of neural network outputs.

Classification · Few-Shot Learning · +2
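Projecting network outputs onto the null space of a chosen set of directions, as the entry above describes, can be done with an orthogonal-complement projector. The sketch below shows one standard way to build such a projector via SVD; how the nulled directions are chosen in the paper is not reproduced here, and all names are illustrative.

```python
import torch

def null_space_projector(vectors, dim):
    """Build a matrix that projects onto the orthogonal complement (null space)
    of the rows of `vectors`.

    vectors: (k, dim) directions to be nulled out.
    """
    # Orthonormal basis of the span of the rows, obtained via SVD.
    _, _, vh = torch.linalg.svd(vectors, full_matrices=False)   # vh: (k, dim)
    basis = vh.t()                                               # (dim, k)
    # P = I - B B^T removes every component lying in the span of the rows.
    return torch.eye(dim) - basis @ basis.t()

# Usage: null out two directions of a 16-dim network output.
directions = torch.randn(2, 16)
P = null_space_projector(directions, 16)
output = torch.randn(4, 16)                        # e.g., network outputs for a batch
projected = output @ P                             # components along `directions` are removed
print((projected @ directions.t()).abs().max())    # ~0, confirming the nulling
```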
