Search Results for author: Woojin Lee

Found 6 papers, 2 papers with code

Exploring the Effect of Multi-step Ascent in Sharpness-Aware Minimization

no code implementations • 27 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Woojin Lee, Jaewook Lee

Recently, Sharpness-Aware Minimization (SAM) has shown state-of-the-art performance by seeking flat minima.
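Roughly, a SAM update first perturbs the weights toward higher loss within a radius rho and then takes the descent step from the perturbed point; this paper studies using more than one ascent step. The sketch below is illustrative only: it assumes a generic PyTorch-style setup, and the names rho, n_steps, and the per-step budget split are assumptions, not the authors' implementation.

    # Illustrative sketch of SAM's inner ascent, with an optional multi-step variant.
    # `model`, `loss_fn`, `rho`, and `n_steps` are assumed names, not the paper's code.
    import torch

    def sam_ascent(model, loss_fn, x, y, rho=0.05, n_steps=1):
        """Perturb parameters toward higher loss (the SAM ascent) and return the
        perturbations so they can be undone after the descent gradient is computed."""
        perturbations = []
        for _ in range(n_steps):
            loss = loss_fn(model(x), y)
            loss.backward()
            # Scale the gradient so each ascent step uses a share of the radius rho.
            grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                                       for p in model.parameters() if p.grad is not None))
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is None:
                        continue
                    e = (rho / n_steps) * p.grad / (grad_norm + 1e-12)
                    p.add_(e)                    # move the weights uphill
                    perturbations.append((p, e))
            model.zero_grad()
        return perturbations

After calling this, one would compute the loss gradient at the perturbed weights, subtract the stored perturbations, and apply the optimizer step; the descent half is omitted here for brevity.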

Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples

1 code implementation • NeurIPS 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee

We identify another key factor that influences the performance of certifiable training: smoothness of the loss landscape.

Parameter-free HE-friendly Logistic Regression

no code implementations • NeurIPS 2021 • Junyoung Byun, Woojin Lee, Jaewook Lee

However, current approaches to training machine learning models on encrypted data have relied heavily on hyperparameter selection, which should be avoided because conducting validation on encrypted data is extremely difficult.
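To illustrate the kind of hyperparameter at issue, here is a plain (unencrypted) logistic regression gradient step; the learning rate lr would normally be tuned by validation, which is exactly what an encrypted, parameter-free approach needs to avoid. This is a generic sketch, not the paper's method.

    # Illustration only: ordinary logistic regression gradient descent.
    # The learning rate `lr` is the sort of hyperparameter that cannot be tuned
    # by validation when data and model are homomorphically encrypted.
    import numpy as np

    def logistic_regression_step(w, X, y, lr=0.1):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (p - y) / len(y)          # gradient of the logistic loss
        return w - lr * grad                   # choosing lr is what a parameter-free method removes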

BIG-bench Machine Learning • Privacy Preserving • +1

Bridged Adversarial Training

no code implementations • 25 Aug 2021 • Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee

Adversarial robustness is considered a required property of deep neural networks.

Adversarial Robustness

Loss Landscape Matters: Training Certifiably Robust Models with Favorable Loss Landscape

no code implementations • 1 Jan 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee

Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models.
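As a rough formalization (not quoted from the paper), the certified objective can be written as

    \min_{\theta} \; \mathbb{E}_{(x,y)} \big[ \bar{L}_{\theta}(x, y) \big],
    \qquad
    \bar{L}_{\theta}(x, y) \;\ge\; \max_{\|\delta\|_{\infty} \le \epsilon} L\big(f_{\theta}(x + \delta),\, y\big),

so the smaller the gap between the bound and the true worst-case loss, the tighter the resulting certificate.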

Understanding Catastrophic Overfitting in Single-step Adversarial Training

1 code implementation • 5 Oct 2020 • Hoki Kim, Woojin Lee, Jaewook Lee

Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed.
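For context, fast adversarial training typically means a single FGSM-style ascent step on the input per update, and catastrophic overfitting is the sudden collapse of robustness against multi-step attacks during such training. The sketch below is a generic single-step training step under assumed names (epsilon, alpha, image inputs in [0, 1]); it is not the authors' code.

    # Minimal sketch of single-step (FGSM-style) adversarial training, the setting
    # in which catastrophic overfitting is typically observed. Names are assumed.
    import torch

    def fgsm_training_step(model, loss_fn, optimizer, x, y, epsilon=8 / 255):
        # Single ascent step: sign of the input gradient scaled to the full budget.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()  # assumes image inputs

        # Descent step on the adversarial example only.
        optimizer.zero_grad()
        adv_loss = loss_fn(model(x_adv), y)
        adv_loss.backward()
        optimizer.step()
        return adv_loss.item()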
