no code implementations • 22 May 2023 • Sungyoon Lee, Sokbae Lee
In recent years, there has been a significant growth in research focusing on minimum $\ell_2$ norm (ridgeless) interpolation least squares estimators.
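As a hedged illustration of the object being studied (my own sketch, not the paper's code): in the overparameterized regime ($p > n$), the ridgeless interpolator is the minimum $\ell_2$ norm solution of $X\beta = y$, obtainable via the Moore-Penrose pseudoinverse.

```python
import numpy as np

# Minimum l2-norm (ridgeless) interpolation: with more features than
# samples, infinitely many beta satisfy X @ beta = y; the pseudoinverse
# picks the one with the smallest l2 norm.
rng = np.random.default_rng(0)
n, p = 20, 100                       # overparameterized: p > n
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

beta = np.linalg.pinv(X) @ y         # min-norm interpolating solution

# It fits the training data exactly ...
assert np.allclose(X @ beta, y)

# ... and any null-space component preserves interpolation but grows the norm,
# so beta is the smallest interpolant in l2 norm.
null_dir = np.linalg.svd(X)[2][-1]   # right-singular vector with zero singular value
alt = beta + 0.5 * null_dir
assert np.allclose(X @ alt, y)
assert np.linalg.norm(beta) < np.linalg.norm(alt)
```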
1 code implementation • NeurIPS 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
We identify another key factor that influences the performance of certifiable training: the smoothness of the loss landscape.
no code implementations • 29 Sep 2021 • Sungyoon Lee, Jinseong Park, Jaewook Lee
The eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output.
no code implementations • 25 Aug 2021 • Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee
Adversarial robustness is considered a required property of deep neural networks.
no code implementations • 6 Jul 2021 • Sungyoon Lee, Hoki Kim, Jaewook Lee
Our experiments on MNIST, CIFAR10, and STL10 show that our proposed GradDiv regularizations improve the adversarial robustness of randomized neural networks against a variety of state-of-the-art attack methods.
no code implementations • 1 Jan 2021 • Sungyoon Lee, Woojin Lee, Jinseong Park, Jaewook Lee
Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models.
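One generic way to obtain such an upper bound is interval bound propagation; the sketch below is illustrative only (all names and shapes are my assumptions, not the paper's method), pushing an $\ell_\infty$ input box through a small network to bound the output logits.

```python
import numpy as np

def affine_bounds(l, u, W, b):
    """Propagate elementwise bounds [l, u] through x -> W @ x + b."""
    mid, rad = (u + l) / 2.0, (u - l) / 2.0
    center = W @ mid + b
    radius = np.abs(W) @ rad         # worst case aligns perturbation signs with |W|
    return center - radius, center + radius

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
eps = 0.1                            # l_inf perturbation budget
l, u = x - eps, x + eps              # input box around x

W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)

l, u = affine_bounds(l, u, W1, b1)
l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)   # ReLU is monotone
l, u = affine_bounds(l, u, W2, b2)              # box on the output logits
```

Any loss evaluated at the worst corner of this output box upper-bounds the true worst-case loss over the perturbation set; the looser the box, the looser the bound, which is why tightness matters for certifiable training.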
1 code implementation • NeurIPS 2020 • Sungyoon Lee, Jaewook Lee, Saerom Park
Our certifiable training algorithm provides a tight propagated outer bound by introducing box constraint propagation (BCP), and it efficiently computes the worst logit over the outer bound.
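To make the "worst logit over the outer bound" step concrete, here is a hedged sketch (not the paper's BCP implementation; the numbers are hypothetical): given a box $[l, u]$ on the logits, the worst-case margin of the true class $y$ against class $j$ is $l_y - u_j$, and the example is certified when that margin is positive for every $j \neq y$.

```python
import numpy as np

def worst_case_margin(l, u, y):
    """Smallest margin l[y] - u[j] over all classes j != y, given logit box [l, u]."""
    margins = l[y] - u               # pessimistic margin against each class
    margins[y] = np.inf              # ignore the true class itself
    return margins.min()

l = np.array([2.0, -1.0, 0.5])       # hypothetical lower logit bounds
u = np.array([2.5, 0.3, 1.2])        # hypothetical upper logit bounds

# Worst margin is l[0] - u[2] = 0.8 > 0, so this box certifies class 0.
assert worst_case_margin(l, u, y=0) > 0
```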