no code implementations • 27 Jan 2023 • Hoki Kim, Jinseong Park, Yujin Choi, Woojin Lee, Jaewook Lee
Recently, Sharpness-Aware Minimization (SAM) has shown state-of-the-art performance by seeking flat minima.
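The core SAM update — ascend to a nearby worst-case point in weight space, then descend using the gradient taken there — can be sketched in a few lines. This is an illustrative NumPy toy on an assumed quadratic loss, not the authors' implementation; the learning rate, radius `rho`, and the loss itself are assumptions.

```python
import numpy as np

TARGET = np.array([1.0, -2.0])  # assumed minimum of the toy loss

def grad(w):
    # gradient of the quadratic loss 0.5 * ||w - TARGET||^2
    return w - TARGET

def sam_step(w, lr=0.1, rho=0.05):
    g = grad(w)
    # 1) ascend to the (approximate) worst-case point in an L2 ball of radius rho
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # 2) descend using the gradient evaluated at the perturbed weights
    g_sharp = grad(w + eps)
    return w - lr * g_sharp

w = np.zeros(2)
for _ in range(200):
    w = sam_step(w)
# w ends close to TARGET; the extra ascent step biases the search toward flat minima
```

On this convex toy the flatness bias is invisible, but the two-gradient structure (one ascent, one descent per step) is exactly what makes SAM roughly twice the cost of SGD.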
1 code implementation • NeurIPS 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
We identify another key factor that influences the performance of certifiable training: the smoothness of the loss landscape.
no code implementations • NeurIPS 2021 • Junyoung Byun, Woojin Lee, Jaewook Lee
However, current approaches to training machine learning models on encrypted data rely heavily on hyperparameter selection, which should be avoided because validation on encrypted data is extremely difficult.
no code implementations • 25 Aug 2021 • Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee
Adversarial robustness is considered a required property of deep neural networks.
no code implementations • 1 Jan 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models.
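One standard way to obtain such an upper bound is interval bound propagation: push an input box through the network with interval arithmetic, so the output interval certifiably contains every output reachable under the perturbation. The sketch below is a generic IBP illustration, not this paper's method; the toy weights and perturbation radius are assumptions.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # propagate the box [lo, hi] through x -> W @ x + b
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad  # radii combine through |W|
    return mid_out - rad_out, mid_out + rad_out

# assumed toy 2-layer ReLU network
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

x, eps = np.array([0.5, 0.5]), 0.1  # L-inf ball of radius eps around x
lo, hi = x - eps, x + eps
lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
lo, hi = interval_affine(lo, hi, W2, b2)
# [lo, hi] certifiably contains the network output for every allowed perturbation
```

Training against the worst-case loss implied by such bounds yields certified robustness, but loose bounds make the objective hard to optimize — which is where the tightness (and, per the abstract, smoothness) of the bound matters.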
1 code implementation • 5 Oct 2020 • Hoki Kim, Woojin Lee, Jaewook Lee
Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed.
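Fast adversarial training replaces multi-step PGD with a single FGSM step per update. A minimal sketch on an assumed logistic-regression toy (the data, step sizes, and radius are all illustrative, and this deliberately omits the paper's fix for catastrophic overfitting):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# assumed toy data: two Gaussian clusters
X = np.vstack([rng.normal(size=(200, 2)) + 2.0,
               rng.normal(size=(200, 2)) - 2.0])
y = np.concatenate([np.ones(200), np.zeros(200)])

w, b, eps, lr = np.zeros(2), 0.0, 0.3, 0.1
for _ in range(100):
    # FGSM: one gradient-sign step on the inputs to (approximately) maximize the loss
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # train on the single-step adversarial examples ("fast" adversarial training)
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.sum(p_adv - y) / len(y)
```

Catastrophic overfitting refers to the failure mode where, partway through such training, robustness against multi-step attacks suddenly collapses even though single-step FGSM accuracy stays high; the linear toy above is too simple to exhibit it.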